diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md
index 7822d8ef2b328..1e4d449a9a551 100644
--- a/.github/ISSUE_TEMPLATE.md
+++ b/.github/ISSUE_TEMPLATE.md
@@ -1,40 +1,46 @@
-
+
+
+**Describe the feature**:
+
+
+
 **Elasticsearch version**:

 **Plugins installed**: []

-**JVM version**:
+**JVM version** (`java -version`):

-**OS version**:
+**OS version** (`uname -a` if on a Unix-like system):

 **Description of the problem including expected versus actual behavior**:

 **Steps to reproduce**:
+
+Please include a *minimal* but *complete* recreation of the problem, including
+(e.g.) index creation, mappings, settings, query etc. The easier you make it for
+us to reproduce it, the more likely it is that somebody will take the time to look at it.
+
 1.
 2.
 3.

 **Provide logs (if relevant)**:

-
-
-**Describe the feature**:
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 92b35e97baa05..6a4531f1bdefa 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -11,3 +11,4 @@ attention.
 - If submitting code, have you built your formula locally prior to submission with `gradle check`?
 - If submitting code, is your pull request against master? Unless there is a good reason otherwise, we prefer pull requests against master and will backport as needed.
 - If submitting code, have you checked that your submission is for an [OS that we support](https://www.elastic.co/support/matrix#show_os)?
+- If you are submitting this code for a class then read our [policy](https://github.com/elastic/elasticsearch/blob/master/CONTRIBUTING.md#contributing-as-part-of-a-class) for that.
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 5885bf9def7eb..0192ab13a5557 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -88,8 +88,8 @@ Contributing to the Elasticsearch codebase
 **Repository:** [https://github.com/elastic/elasticsearch](https://github.com/elastic/elasticsearch)

 Make sure you have [Gradle](http://gradle.org) installed, as
-Elasticsearch uses it as its build system. Gradle must be version 2.13 _exactly_ in
-order to build successfully.
+Elasticsearch uses it as its build system. Gradle must be at least
+version 3.3 in order to build successfully.

 Eclipse users can automatically configure their IDE: `gradle eclipse` then
 `File: Import: Existing Projects into Workspace`. Select the
@@ -101,7 +101,11 @@ IntelliJ users can automatically configure their IDE: `gradle idea` then
 `File->New Project From Existing Sources`. Point to the root of
 the source directory, select
 `Import project from external model->Gradle`, enable
-`Use auto-import`.
+`Use auto-import`. Additionally, in order to run tests directly from
+IDEA 2017.1 and above, it is required to disable the IDEA run launcher,
+which can be achieved by adding the `-Didea.no.launcher=true`
+[JVM option](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties).
+
 The Elasticsearch codebase makes heavy use of Java `assert`s and the
 test runner requires that assertions be enabled within the JVM. This
@@ -139,3 +143,32 @@ Before submitting your changes, run the test suite to make sure that nothing is
 ```sh
 gradle check
 ```
+
+Contributing as part of a class
+-------------------------------
+In general Elasticsearch is happy to accept contributions that were created as
+part of a class, but we strongly advise against making the contribution itself
+part of the class. So if you have code you wrote for a class, feel free to submit it.
+
+Please, please, please do not assign contributing to Elasticsearch as part of a
+class. If you really want to assign writing code for Elasticsearch as an
+assignment, then the code contributions should be made to your private clone and
+opening PRs against the primary Elasticsearch clone must be optional, fully
+voluntary, not for a grade, and without any deadlines.
+
+Because:
+
+* While the code review process is likely very educational, it can take wildly
+varying amounts of time depending on who is available, where the change is, and
+how deep the change is. There is no way to predict how long it will take unless
+we rush.
+* We do not rush reviews without a very, very good reason. Class deadlines
+aren't a good enough reason for us to rush reviews.
+* We deeply discourage opening a PR that you don't intend to see through the entire
+code review process because it wastes our time.
+* We don't have the capacity to absorb an entire class full of new contributors,
+especially when they are unlikely to become long-time contributors.
+
+Finally, we require that you run `gradle check` before submitting a
+non-documentation contribution. This is mentioned above, but it is worth
+repeating in this section because it has come up in this context.
diff --git a/NOTICE.txt b/NOTICE.txt
index c99b958193198..643a060cd05c4 100644
--- a/NOTICE.txt
+++ b/NOTICE.txt
@@ -1,5 +1,5 @@
 Elasticsearch
-Copyright 2009-2016 Elasticsearch
+Copyright 2009-2017 Elasticsearch

 This product includes software developed by The Apache Software Foundation
 (http://www.apache.org/).
diff --git a/README.textile b/README.textile
index dc3a263cd7ce2..9c2b2c5d91e2c 100644
--- a/README.textile
+++ b/README.textile
@@ -50,16 +50,16 @@ h3. Indexing
 Let's try and index some twitter like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):
-curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/twitter/user/kimchy?pretty' -H 'Content-Type: application/json' -d '{ "name" : "Shay Banon" }'
 
-curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/1?pretty' -H 'Content-Type: application/json' -d '
 {
     "user": "kimchy",
     "post_date": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'
 
-curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -d '
+curl -XPUT 'http://localhost:9200/twitter/tweet/2?pretty' -H 'Content-Type: application/json' -d '
 {
     "user": "kimchy",
     "post_date": "2009-11-15T14:12:12",
@@ -87,7 +87,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=tru
 We can also use the JSON query language Elasticsearch provides instead of a query string:
 
 
-curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "match" : { "user": "kimchy" }
@@ -98,7 +98,7 @@ curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
 Just for kicks, let's get all the documents stored (we should see the user as well):
 
 
-curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "match_all" : {}
@@ -109,7 +109,7 @@ curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
 We can also do a range search (the @post_date@ field was automatically identified as a date)
 
 
-curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "range" : {
@@ -130,16 +130,16 @@ Elasticsearch supports multiple indices, as well as multiple types per index. In
 Another way to define our simple twitter system is to have a different index per user (note, though, that each index has an overhead). Here are the indexing curls in this case:
 
 
-curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -d '{ "name" : "Shay Banon" }'
+curl -XPUT 'http://localhost:9200/kimchy/info/1?pretty' -H 'Content-Type: application/json' -d '{ "name" : "Shay Banon" }'
 
-curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/1?pretty' -H 'Content-Type: application/json' -d '
 {
     "user": "kimchy",
     "post_date": "2009-11-15T13:12:00",
     "message": "Trying out Elasticsearch, so far so good?"
 }'
 
-curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -d '
+curl -XPUT 'http://localhost:9200/kimchy/tweet/2?pretty' -H 'Content-Type: application/json' -d '
 {
     "user": "kimchy",
     "post_date": "2009-11-15T14:12:12",
@@ -152,7 +152,7 @@ The above will index information into the @kimchy@ index, with two types, @info@
 Complete control on the index level is allowed. As an example, in the above case, we would want to change from the default 5 shards with 1 replica per index, to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):
 
 
-curl -XPUT http://localhost:9200/another_user?pretty -d '
+curl -XPUT http://localhost:9200/another_user?pretty -H 'Content-Type: application/json' -d '
 {
     "index" : {
         "number_of_shards" : 1,
@@ -165,7 +165,7 @@ Search (and similar operations) are multi index aware. This means that we can ea
 index (twitter user), for example:
 
 
-curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "match_all" : {}
@@ -176,7 +176,7 @@ curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
 Or on all the indices:
 
 
-curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
+curl -XGET 'http://localhost:9200/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
     "query" : {
         "match_all" : {}
@@ -200,7 +200,7 @@ We have just covered a very small portion of what Elasticsearch is all about. Fo
 
 h3. Building from Source
 
-Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have version 2.13 of Gradle installed.
+Elasticsearch uses "Gradle":https://gradle.org for its build system. You'll need to have at least version 3.3 of Gradle installed.
 
 In order to create a distribution, simply run the @gradle assemble@ command in the cloned directory.
 
diff --git a/TESTING.asciidoc b/TESTING.asciidoc
index dcd6c9981be3a..d9fb3daac98c7 100644
--- a/TESTING.asciidoc
+++ b/TESTING.asciidoc
@@ -25,12 +25,6 @@ run it using Gradle:
 gradle run
 -------------------------------------
 
-or to attach a remote debugger, run it as:
-
--------------------------------------
-gradle run --debug-jvm
--------------------------------------
-
 === Test case filtering.
 
 - `tests.class` is a class-filtering shell-like glob pattern,
@@ -351,24 +345,23 @@ VM running trusty by running
 
 These are the linux flavors the Vagrantfile currently supports:
 
-* ubuntu-1204 aka precise
 * ubuntu-1404 aka trusty
 * ubuntu-1604 aka xenial
 * debian-8 aka jessie, the current debian stable distribution
 * centos-6
 * centos-7
-* fedora-24
+* fedora-25
 * oel-6 aka Oracle Enterprise Linux 6
 * oel-7 aka Oracle Enterprise Linux 7
 * sles-12
-* opensuse-13
+* opensuse-42 aka Leap
 
 We're missing the following from the support matrix because there aren't high
 quality boxes available in vagrant atlas:
 
 * sles-11
 
-We're missing the follow because our tests are very linux/bash centric:
+We're missing the following because our tests are very linux/bash centric:
 
 * Windows Server 2012
 
@@ -424,21 +417,59 @@ sudo -E bats $BATS_TESTS/*rpm*.bats
 If you wanted to retest all the release artifacts on a single VM you could:
 
 -------------------------------------------------
-gradle vagrantSetUp
-vagrant up ubuntu-1404 --provider virtualbox && vagrant ssh ubuntu-1404
+gradle setupBats
+cd qa/vagrant; vagrant up ubuntu-1404 --provider virtualbox && vagrant ssh ubuntu-1404
 cd $BATS_ARCHIVES
 sudo -E bats $BATS_TESTS/*.bats
 -------------------------------------------------
 
+You can also use Gradle to prepare the test environment and then start a single VM:
+
+-------------------------------------------------
+gradle vagrantFedora25#up
+-------------------------------------------------
+
+Or any of vagrantCentos6#up, vagrantCentos7#up, vagrantDebian8#up,
+vagrantFedora25#up, vagrantOel6#up, vagrantOel7#up, vagrantOpensuse42#up,
+vagrantSles12#up, vagrantUbuntu1404#up, vagrantUbuntu1604#up.
+
+Once up, you can then connect to the VM using SSH from the elasticsearch directory:
+
+-------------------------------------------------
+vagrant ssh fedora-25
+-------------------------------------------------
+
+Or from another directory:
+
+-------------------------------------------------
+VAGRANT_CWD=/path/to/elasticsearch vagrant ssh fedora-25
+-------------------------------------------------
+
 Note: Starting a vagrant VM outside of the elasticsearch folder requires you to
 indicate the folder that contains the Vagrantfile using the VAGRANT_CWD
-environment variable:
+environment variable.
+
+== Testing backwards compatibility
+
+Backwards compatibility tests exist to test upgrading from each supported version
+to the current version. To run all backcompat tests use:
+
+-------------------------------------------------
+gradle bwcTest
+-------------------------------------------------
+
+A specific version can be tested as well. For example, to test backcompat with
+version 5.3.2 run:
 
 -------------------------------------------------
-gradle vagrantSetUp
-VAGRANT_CWD=/path/to/elasticsearch vagrant up centos-7 --provider virtualbox
+gradle v5.3.2#bwcTest
 -------------------------------------------------
 
+When running `gradle check`, some minimal backcompat checks are run. Which version
+is tested depends on the branch. On master, this will test against the current
+stable branch. On the stable branch, it will test against the latest release
+branch. Finally, on a release branch, it will test against the most recent release.
+
 == Coverage analysis
 
 Tests can be run instrumented with jacoco to produce a coverage report in
@@ -462,7 +493,7 @@ Combined (Unit+Integration) coverage:
 mvn -Dtests.coverage verify jacoco:report
 ---------------------------------------------------------------------------
 
-== Debugging from an IDE
+== Launching and debugging from an IDE
 
 If you want to run elasticsearch from your IDE, the `gradle run` task
 supports a remote debugging option:
@@ -471,6 +502,17 @@ supports a remote debugging option:
 gradle run --debug-jvm
 ---------------------------------------------------------------------------
 
+== Debugging remotely from an IDE
+
+If you want to run Elasticsearch and be able to remotely attach the process
+for debugging purposes from your IDE, you can start Elasticsearch using `ES_JAVA_OPTS`:
+
+---------------------------------------------------------------------------
+ES_JAVA_OPTS="-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=y" ./bin/elasticsearch
+---------------------------------------------------------------------------
+
+Read your IDE documentation for how to attach a debugger to a JVM process.
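+
+As an IDE-free alternative (a sketch, not something provided by the build
+scripts), the JDK's command-line debugger should be able to attach to the same
+port used in the example above:
+
+---------------------------------------------------------------------------
+jdb -attach localhost:4000
+---------------------------------------------------------------------------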
+
 == Building with extra plugins
 Additional plugins may be built alongside elasticsearch, where their
 dependency on elasticsearch will be substituted with the local elasticsearch
@@ -482,4 +524,3 @@ included as part of the build by checking the projects of the build.
 ---------------------------------------------------------------------------
 gradle projects
 ---------------------------------------------------------------------------
-
diff --git a/Vagrantfile b/Vagrantfile
index 806d39cc16067..a4dc935f15d65 100644
--- a/Vagrantfile
+++ b/Vagrantfile
@@ -22,10 +22,6 @@
 # under the License.
 
 Vagrant.configure(2) do |config|
-  config.vm.define "ubuntu-1204" do |config|
-    config.vm.box = "elastic/ubuntu-12.04-x86_64"
-    ubuntu_common config
-  end
   config.vm.define "ubuntu-1404" do |config|
     config.vm.box = "elastic/ubuntu-14.04-x86_64"
     ubuntu_common config
@@ -42,7 +38,7 @@ Vagrant.configure(2) do |config|
   # debian and it works fine.
   config.vm.define "debian-8" do |config|
     config.vm.box = "elastic/debian-8-x86_64"
-    deb_common config, 'echo deb http://cloudfront.debian.net/debian jessie-backports main > /etc/apt/sources.list.d/backports.list', 'backports'
+    deb_common config
   end
   config.vm.define "centos-6" do |config|
     config.vm.box = "elastic/centos-6-x86_64"
@@ -60,12 +56,12 @@ Vagrant.configure(2) do |config|
     config.vm.box = "elastic/oraclelinux-7-x86_64"
     rpm_common config
   end
-  config.vm.define "fedora-24" do |config|
-    config.vm.box = "elastic/fedora-24-x86_64"
+  config.vm.define "fedora-25" do |config|
+    config.vm.box = "elastic/fedora-25-x86_64"
     dnf_common config
   end
-  config.vm.define "opensuse-13" do |config|
-    config.vm.box = "elastic/opensuse-13-x86_64"
+  config.vm.define "opensuse-42" do |config|
+    config.vm.box = "elastic/opensuse-42-x86_64"
     opensuse_common config
   end
   config.vm.define "sles-12" do |config|
@@ -108,16 +104,22 @@ SOURCE_PROMPT
 source /etc/profile.d/elasticsearch_prompt.sh
 SOURCE_PROMPT
       SHELL
+      # Creates a file to mark the machine as created by vagrant. Tests check
+      # for this file and refuse to run if it is not present so that they can't
+      # be run unexpectedly.
+      config.vm.provision "markerfile", type: "shell", inline: <<-SHELL
+        touch /etc/is_vagrant_vm
+      SHELL
     end
     config.config_procs.push ['2', set_prompt]
   end
 end
 
 def ubuntu_common(config, extra: '')
-  deb_common config, 'apt-add-repository -y ppa:openjdk-r/ppa > /dev/null 2>&1', 'openjdk-r-*', extra: extra
+  deb_common config, extra: extra
 end
 
-def deb_common(config, add_openjdk_repository_command, openjdk_list, extra: '')
+def deb_common(config, extra: '')
   # http://foo-o-rama.com/vagrant--stdin-is-not-a-tty--fix.html
   config.vm.provision "fix-no-tty", type: "shell" do |s|
       s.privileged = false
@@ -127,24 +129,14 @@ def deb_common(config, add_openjdk_repository_command, openjdk_list, extra: '')
     update_command: "apt-get update",
     update_tracking_file: "/var/cache/apt/archives/last_update",
     install_command: "apt-get install -y",
-    java_package: "openjdk-8-jdk",
-    extra: <<-SHELL
-      export DEBIAN_FRONTEND=noninteractive
-      ls /etc/apt/sources.list.d/#{openjdk_list}.list > /dev/null 2>&1 ||
-        (echo "==> Importing java-8 ppa" &&
-          #{add_openjdk_repository_command} &&
-          apt-get update)
-      #{extra}
-SHELL
-  )
+    extra: extra)
 end
 
 def rpm_common(config)
   provision(config,
     update_command: "yum check-update",
     update_tracking_file: "/var/cache/yum/last_update",
-    install_command: "yum install -y",
-    java_package: "java-1.8.0-openjdk-devel")
+    install_command: "yum install -y")
 end
 
 def dnf_common(config)
@@ -152,8 +144,7 @@ def dnf_common(config)
     update_command: "dnf check-update",
     update_tracking_file: "/var/cache/dnf/last_update",
     install_command: "dnf install -y",
-    install_command_retries: 5,
-    java_package: "java-1.8.0-openjdk-devel")
+    install_command_retries: 5)
   if Vagrant.has_plugin?("vagrant-cachier")
     # Autodetect doesn't work....
     config.cache.auto_detect = false
@@ -170,17 +161,12 @@ def suse_common(config, extra)
     update_command: "zypper --non-interactive list-updates",
     update_tracking_file: "/var/cache/zypp/packages/last_update",
     install_command: "zypper --non-interactive --quiet install --no-recommends",
-    java_package: "java-1_8_0-openjdk-devel",
     extra: extra)
 end
 
 def sles_common(config)
   extra = <<-SHELL
-    zypper rr systemsmanagement_puppet
-    zypper addrepo -t yast2 http://demeter.uni-regensburg.de/SLES12-x64/DVD1/ dvd1 || true
-    zypper addrepo -t yast2 http://demeter.uni-regensburg.de/SLES12-x64/DVD2/ dvd2 || true
-    zypper addrepo http://download.opensuse.org/repositories/Java:Factory/SLE_12/Java:Factory.repo || true
-    zypper --no-gpg-checks --non-interactive refresh
+    zypper rr systemsmanagement_puppet puppetlabs-pc1
     zypper --non-interactive install git-core
 SHELL
   suse_common config, extra
@@ -195,7 +181,6 @@ end
 #   is cached by vagrant-cachier.
 # @param install_command [String] The command used to install a package.
 #   Required. Think `apt-get install #{package}`.
-# @param java_package [String] The name of the java package. Required.
 # @param extra [String] Extra provisioning commands run before anything else.
 #   Optional. Used for things like setting up the ppa for Java 8.
 def provision(config,
@@ -203,14 +188,20 @@ def provision(config,
     update_tracking_file: 'required',
     install_command: 'required',
     install_command_retries: 0,
-    java_package: 'required',
     extra: '')
   # Vagrant runs ruby 2.0.0, which doesn't have required named parameters....
   raise ArgumentError.new('update_command is required') if update_command == 'required'
   raise ArgumentError.new('update_tracking_file is required') if update_tracking_file == 'required'
   raise ArgumentError.new('install_command is required') if install_command == 'required'
-  raise ArgumentError.new('java_package is required') if java_package == 'required'
-  config.vm.provision "bats dependencies", type: "shell", inline: <<-SHELL
+  config.vm.provider "virtualbox" do |v|
+    # Give the box more memory and cpu because our tests are beasts!
+    v.memory = Integer(ENV['VAGRANT_MEMORY'] || 8192)
+    v.cpus = Integer(ENV['VAGRANT_CPUS'] || 4)
+  end
+  config.vm.synced_folder "#{Dir.home}/.gradle/caches", "/home/vagrant/.gradle/caches",
+    create: true,
+    owner: "vagrant"
+  config.vm.provision "dependencies", type: "shell", inline: <<-SHELL
     set -e
     set -o pipefail
 
@@ -256,7 +247,10 @@ def provision(config,
 
     #{extra}
 
-    installed java || install #{java_package}
+    installed java || {
+      echo "==> Java is not installed on vagrant box ${config.vm.box}"
+      return 1
+    }
     ensure tar
     ensure curl
     ensure unzip
@@ -270,6 +264,18 @@ def provision(config,
       /tmp/bats/install.sh /usr
       rm -rf /tmp/bats
     }
+
+    installed gradle || {
+      echo "==> Installing Gradle"
+      curl -sS -o /tmp/gradle.zip -L https://services.gradle.org/distributions/gradle-3.3-bin.zip
+      unzip /tmp/gradle.zip -d /opt
+      rm -rf /tmp/gradle.zip
+      ln -s /opt/gradle-3.3/bin/gradle /usr/bin/gradle
+      # make nfs mounted gradle home dir writeable
+      chown vagrant:vagrant /home/vagrant/.gradle
+    }
+
+
     cat \<\<VARS > /etc/profile.d/elasticsearch_vars.sh
 export ZIP=/elasticsearch/distribution/zip/build/distributions
 export TAR=/elasticsearch/distribution/tar/build/distributions
@@ -279,6 +285,7 @@ export BATS=/project/build/bats
 export BATS_UTILS=/project/build/bats/utils
 export BATS_TESTS=/project/build/bats/tests
 export BATS_ARCHIVES=/project/build/bats/archives
+export GRADLE_HOME=/opt/gradle-3.3
 VARS
     cat \<\<SUDOERS_VARS > /etc/sudoers.d/elasticsearch_vars
 Defaults   env_keep += "ZIP"
diff --git a/benchmarks/build.gradle b/benchmarks/build.gradle
index 36732215d43fb..5a508fa106537 100644
--- a/benchmarks/build.gradle
+++ b/benchmarks/build.gradle
@@ -37,10 +37,7 @@ apply plugin: 'application'
 archivesBaseName = 'elasticsearch-benchmarks'
 mainClassName = 'org.openjdk.jmh.Main'
 
-// never try to invoke tests on the benchmark project - there aren't any
-check.dependsOn.remove(test)
-// explicitly override the test task too in case somebody invokes 'gradle test' so it won't trip
-task test(type: Test, overwrite: true)
+test.enabled = false
 
 dependencies {
     compile("org.elasticsearch:elasticsearch:${version}") {
@@ -55,11 +52,10 @@ dependencies {
     runtime 'org.apache.commons:commons-math3:3.2'
 }
 
-compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
+compileJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked,-processing"
 // enable the JMH's BenchmarkProcessor to generate the final benchmark classes
 // needs to be added separately otherwise Gradle will quote it and javac will fail
 compileJava.options.compilerArgs.addAll(["-processor", "org.openjdk.jmh.generators.BenchmarkProcessor"])
-compileTestJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-try,-unchecked"
 
 forbiddenApis {
     // classes generated by JMH can use all sorts of forbidden APIs but we have no influence at all and cannot exclude these classes
diff --git a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java
index 4d8f7cfeaac99..591fa400d18da 100644
--- a/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java
+++ b/benchmarks/src/main/java/org/elasticsearch/benchmark/routing/allocation/Allocators.java
@@ -36,8 +36,6 @@
 import org.elasticsearch.gateway.GatewayAllocator;
 
 import java.lang.reflect.InvocationTargetException;
-import java.net.InetAddress;
-import java.net.UnknownHostException;
 import java.util.Collection;
 import java.util.Collections;
 import java.util.List;
@@ -49,7 +47,7 @@ private static class NoopGatewayAllocator extends GatewayAllocator {
         public static final NoopGatewayAllocator INSTANCE = new NoopGatewayAllocator();
 
         protected NoopGatewayAllocator() {
-            super(Settings.EMPTY, null, null);
+            super(Settings.EMPTY);
         }
 
         @Override
diff --git a/build.gradle b/build.gradle
index 1159352cd5dec..00d1730a26cb2 100644
--- a/build.gradle
+++ b/build.gradle
@@ -17,20 +17,25 @@
  * under the License.
  */
 
+import java.nio.file.Path
+import java.util.regex.Matcher
 import org.eclipse.jgit.lib.Repository
 import org.eclipse.jgit.lib.RepositoryBuilder
 import org.gradle.plugins.ide.eclipse.model.SourceFolder
 import org.apache.tools.ant.taskdefs.condition.Os
+import org.elasticsearch.gradle.VersionProperties
+import org.elasticsearch.gradle.Version
 
 // common maven publishing configuration
 subprojects {
   group = 'org.elasticsearch'
-  version = org.elasticsearch.gradle.VersionProperties.elasticsearch
+  version = VersionProperties.elasticsearch
   description = "Elasticsearch subproject ${project.path}"
 }
 
+Path rootPath = rootDir.toPath()
 // setup pom license info, but only for artifacts that are part of elasticsearch
-configure(subprojects.findAll { it.path.startsWith(':x-plugins') == false }) {
+configure(subprojects.findAll { it.projectDir.toPath().startsWith(rootPath) }) {
 
   // we only use maven publish to add tasks for pom generation
   plugins.withType(MavenPublishPlugin).whenPluginAdded {
@@ -57,15 +62,102 @@ configure(subprojects.findAll { it.path.startsWith(':x-plugins') == false }) {
   }
 }
 
+/* Introspect all versions of ES that may be tested against for backwards
+ * compatibility. It is *super* important that this logic is the same as the
+ * logic in VersionUtils.java, modulo alphas, betas, and rcs which are ignored
+ * in gradle because they don't have any backwards compatibility guarantees
+ * but are not ignored in VersionUtils.java because the tests expect them not
+ * to be. */
+Version currentVersion = Version.fromString(VersionProperties.elasticsearch.minus('-SNAPSHOT'))
+int prevMajor = currentVersion.major - 1
+File versionFile = file('core/src/main/java/org/elasticsearch/Version.java')
+List<String> versionLines = versionFile.readLines('UTF-8')
+List<Version> versions = []
+// keep track of the previous major version's last minor, so we know where wire compat begins
+int prevMinorIndex = -1 // index in the versions list of the last minor from the prev major
+int lastPrevMinor = -1 // the minor version number from the prev major we have most recently seen
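+/* As an illustration only: a line in Version.java that begins with
+ * "public static final Version V_5_3_2" would be matched by the regex below
+ * and recorded as major=5, minor=3, bugfix=2. */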
+for (String line : versionLines) {
+  /* Note that this skips alphas and betas which is fine because they aren't
+   * compatible with anything. */
+  Matcher match = line =~ /\W+public static final Version V_(\d+)_(\d+)_(\d+) .*/
+  if (match.matches()) {
+    int major = Integer.parseInt(match.group(1))
+    int minor = Integer.parseInt(match.group(2))
+    int bugfix = Integer.parseInt(match.group(3))
+    Version foundVersion = new Version(major, minor, bugfix, false)
+    if (currentVersion != foundVersion) {
+      versions.add(foundVersion)
+    }
+    if (major == prevMajor && minor > lastPrevMinor) {
+      prevMinorIndex = versions.size() - 1
+      lastPrevMinor = minor
+    }
+  }
+}
+if (versions.toSorted { it.id } != versions) {
+  println "Versions: ${versions}"
+  throw new GradleException("Versions.java contains out of order version constants")
+}
+if (currentVersion.bugfix == 0) {
+  // If on a release branch, after the initial release of that branch, the bugfix version will
+  // be bumped, and will be != 0. On master and N.x branches, we want to test against the
+  // unreleased version of closest branch. So for those cases, the version includes -SNAPSHOT,
+  // and the bwc distribution will checkout and build that version.
+  Version last = versions[-1]
+  versions[-1] = new Version(last.major, last.minor, last.bugfix, true)
+  if (last.bugfix == 0) {
+    versions[-2] = new Version(
+        versions[-2].major, versions[-2].minor, versions[-2].bugfix, true)
+  }
+}
+
+// injecting groovy property variables into all projects
 allprojects {
-  // injecting groovy property variables into all projects
   project.ext {
     // for ide hacks...
     isEclipse = System.getProperty("eclipse.launcher") != null || gradle.startParameter.taskNames.contains('eclipse') || gradle.startParameter.taskNames.contains('cleanEclipse')
     isIdea = System.getProperty("idea.active") != null || gradle.startParameter.taskNames.contains('idea') || gradle.startParameter.taskNames.contains('cleanIdea')
+    // for backcompat testing
+    indexCompatVersions = versions
+    wireCompatVersions = versions.subList(prevMinorIndex, versions.size())
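+    // Illustration (hypothetical numbers): if the current version were 6.0.0 and the newest
+    // 5.x minor were 5.4, indexCompatVersions would hold every earlier 5.x release while
+    // wireCompatVersions would only reach back to the first 5.4 release.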
+  }
+}
+
+task verifyVersions {
+  doLast {
+    if (gradle.startParameter.isOffline()) {
+      throw new GradleException("Must run in online mode to verify versions")
+    }
+    // Read the list from maven central
+    Node xml
+    new URL('https://repo1.maven.org/maven2/org/elasticsearch/elasticsearch/maven-metadata.xml').openStream().withStream { s ->
+        xml = new XmlParser().parse(s)
+    }
+    Set<Version> knownVersions = new TreeSet<>(xml.versioning.versions.version.collect { it.text() }.findAll { it ==~ /\d\.\d\.\d/ }.collect { Version.fromString(it) })
+
+    // Limit the known versions to those that should be index compatible, and are not future versions
+    knownVersions = knownVersions.findAll { it.major >= prevMajor && it.before(VersionProperties.elasticsearch) }
+
+    /* Limit the listed versions to those that have been marked as released.
+     * Versions not marked as released don't get the same testing and we want
+     * to make sure that we flip all unreleased versions to released as soon
+     * as possible after release. */
+    Set<Version> actualVersions = new TreeSet<>(indexCompatVersions.findAll { false == it.snapshot })
+
+    // Finally, compare!
+    if (knownVersions.equals(actualVersions) == false) {
+      throw new GradleException("out-of-date released versions\nActual  :" + actualVersions + "\nExpected:" + knownVersions +
+        "\nUpdate Version.java. Note that Version.CURRENT doesn't count because it is not released.")
+    }
   }
 }
 
+task branchConsistency {
+  description 'Ensures this branch is internally consistent. For example, that versions constants match released versions.'
+  group 'Verification'
+  dependsOn verifyVersions
+}
+
 subprojects {
   project.afterEvaluate {
     // include license and notice in jars
@@ -119,12 +211,33 @@ subprojects {
     "org.elasticsearch.plugin:transport-netty4-client:${version}": ':modules:transport-netty4',
     "org.elasticsearch.plugin:reindex-client:${version}": ':modules:reindex',
     "org.elasticsearch.plugin:lang-mustache-client:${version}": ':modules:lang-mustache',
+    "org.elasticsearch.plugin:parent-join-client:${version}": ':modules:parent-join',
+    "org.elasticsearch.plugin:aggs-matrix-stats-client:${version}": ':modules:aggs-matrix-stats',
     "org.elasticsearch.plugin:percolator-client:${version}": ':modules:percolator',
   ]
-  configurations.all {
-    resolutionStrategy.dependencySubstitution { DependencySubstitutions subs ->
-      projectSubstitutions.each { k,v ->
-        subs.substitute(subs.module(k)).with(subs.project(v))
+  if (indexCompatVersions[-1].snapshot) {
+    /* The last and second to last versions can be snapshots. Rather than use
+     * snapshots built by CI we connect these versions to projects that build
+     * those versions from the HEAD of the appropriate branch. */
+    if (indexCompatVersions[-1].bugfix == 0) {
+      ext.projectSubstitutions["org.elasticsearch.distribution.deb:elasticsearch:${indexCompatVersions[-1]}"] = ':distribution:bwc-stable-snapshot'
+      ext.projectSubstitutions["org.elasticsearch.distribution.rpm:elasticsearch:${indexCompatVersions[-1]}"] = ':distribution:bwc-stable-snapshot'
+      ext.projectSubstitutions["org.elasticsearch.distribution.zip:elasticsearch:${indexCompatVersions[-1]}"] = ':distribution:bwc-stable-snapshot'
+      ext.projectSubstitutions["org.elasticsearch.distribution.deb:elasticsearch:${indexCompatVersions[-2]}"] = ':distribution:bwc-release-snapshot'
+      ext.projectSubstitutions["org.elasticsearch.distribution.rpm:elasticsearch:${indexCompatVersions[-2]}"] = ':distribution:bwc-release-snapshot'
+      ext.projectSubstitutions["org.elasticsearch.distribution.zip:elasticsearch:${indexCompatVersions[-2]}"] = ':distribution:bwc-release-snapshot'
+    } else {
+      ext.projectSubstitutions["org.elasticsearch.distribution.deb:elasticsearch:${indexCompatVersions[-1]}"] = ':distribution:bwc-release-snapshot'
+      ext.projectSubstitutions["org.elasticsearch.distribution.rpm:elasticsearch:${indexCompatVersions[-1]}"] = ':distribution:bwc-release-snapshot'
+      ext.projectSubstitutions["org.elasticsearch.distribution.zip:elasticsearch:${indexCompatVersions[-1]}"] = ':distribution:bwc-release-snapshot'
+    }
+  }
+  project.afterEvaluate {
+    configurations.all {
+      resolutionStrategy.dependencySubstitution { DependencySubstitutions subs ->
+        projectSubstitutions.each { k,v ->
+          subs.substitute(subs.module(k)).with(subs.project(v))
+        }
       }
     }
   }
diff --git a/buildSrc/build.gradle b/buildSrc/build.gradle
index 0e8c2dc1412dd..0839b8a22f8fa 100644
--- a/buildSrc/build.gradle
+++ b/buildSrc/build.gradle
@@ -23,14 +23,12 @@ apply plugin: 'groovy'
 
 group = 'org.elasticsearch.gradle'
 
-// TODO: remove this when upgrading to a version that supports ProgressLogger
-// gradle 2.14 made internal apis unavailable to plugins, and gradle considered
-// ProgressLogger to be an internal api. Until this is made available again,
-// we can't upgrade without losing our nice progress logging
-// NOTE that this check duplicates that in BuildPlugin, but we need to check
-// early here before trying to compile the broken classes in buildSrc
-if (GradleVersion.current() != GradleVersion.version('2.13')) {
-  throw new GradleException('Gradle 2.13 is required to build elasticsearch')
+if (GradleVersion.current() < GradleVersion.version('3.3')) {
+  throw new GradleException('Gradle 3.3+ is required to build elasticsearch')
+}
+
+if (JavaVersion.current() < JavaVersion.VERSION_1_8) {
+  throw new GradleException('Java 1.8 is required to build elasticsearch gradle tools')
 }
 
 if (project == rootProject) {
@@ -94,11 +92,17 @@ dependencies {
   compile 'com.netflix.nebula:gradle-info-plugin:3.0.3'
   compile 'org.eclipse.jgit:org.eclipse.jgit:3.2.0.201312181205-r'
   compile 'com.perforce:p4java:2012.3.551082' // THIS IS SUPPOSED TO BE OPTIONAL IN THE FUTURE....
-  compile 'de.thetaphi:forbiddenapis:2.2'
+  compile 'de.thetaphi:forbiddenapis:2.3'
   compile 'org.apache.rat:apache-rat:0.11'
-  compile 'ru.vyarus:gradle-animalsniffer-plugin:1.0.1'
 }
 
+// Gradle 2.14+ removed ProgressLogger(-Factory) classes from the public APIs
+// Use logging dependency instead
+
+dependencies {
+  compileOnly "org.gradle:gradle-logging:${GradleVersion.current().getVersion()}"
+  compile 'ru.vyarus:gradle-animalsniffer-plugin:1.2.0' // Gradle 2.14 requires a version > 1.0.1
+}
 
 /*****************************************************************************
  *                         Bootstrap repositories                            *
@@ -107,6 +111,9 @@ dependencies {
 if (project == rootProject) {
 
   repositories {
+    if (System.getProperty("repos.mavenLocal") != null) {
+      mavenLocal()
+    }
     mavenCentral()
   }
   test.exclude 'org/elasticsearch/test/NamingConventionsCheckBadClasses*'
@@ -149,4 +156,11 @@ if (project != rootProject) {
     testClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$UnitTestCase'
     integTestClass = 'org.elasticsearch.test.NamingConventionsCheckBadClasses$IntegTestCase'
   }
+
+  task namingConventionsMain(type: org.elasticsearch.gradle.precommit.NamingConventionsTask) {
+    checkForTestsInMain = true
+    testClass = namingConventions.testClass
+    integTestClass = namingConventions.integTestClass
+  }
+  precommit.dependsOn namingConventionsMain
 }
diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy
index e2230b116c714..d3d07db0d2072 100644
--- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy
+++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingPlugin.groovy
@@ -12,10 +12,38 @@ import org.gradle.api.tasks.testing.Test
 class RandomizedTestingPlugin implements Plugin<Project> {
 
     void apply(Project project) {
+        setupSeed(project)
         replaceTestTask(project.tasks)
         configureAnt(project.ant)
     }
 
+    /**
+     * Pins the test seed at configuration time so it isn't different on every
+     * {@link RandomizedTestingTask} execution. This is useful if random
+     * decisions in one run of {@linkplain RandomizedTestingTask} influence the
+     * outcome of subsequent runs. Pinning the seed up front like this makes
+     * the reproduction line from one run be useful on another run.
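+     *
+     * For example (the seed value here is arbitrary), running
+     * {@code gradle test -Dtests.seed=DEADBEEF} pins that seed for every
+     * randomized testing task in the build.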
+     */
+    static void setupSeed(Project project) {
+        if (project.rootProject.ext.has('testSeed')) {
+            /* Skip this if we've already pinned the testSeed. It is important
+             * that this checks the rootProject so that we know we've only ever
+             * initialized one time. */
+            return
+        }
+        String testSeed = System.getProperty('tests.seed')
+        if (testSeed == null) {
+            long seed = new Random(System.currentTimeMillis()).nextLong()
+            testSeed = Long.toUnsignedString(seed, 16).toUpperCase(Locale.ROOT)
+        }
+        /* Set the testSeed on the root project first so other projects can use
+         * it during initialization. */
+        project.rootProject.ext.testSeed = testSeed
+        project.rootProject.subprojects {
+            project.ext.testSeed = testSeed
+        }
+    }
+
     static void replaceTestTask(TaskContainer tasks) {
         Test oldTestTask = tasks.findByPath('test')
         if (oldTestTask == null) {
diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy
index b28e7210ea41d..1817ea57e7abe 100644
--- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy
+++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/RandomizedTestingTask.groovy
@@ -9,6 +9,7 @@ import org.apache.tools.ant.DefaultLogger
 import org.apache.tools.ant.RuntimeConfigurable
 import org.apache.tools.ant.UnknownElement
 import org.gradle.api.DefaultTask
+import org.gradle.api.InvalidUserDataException
 import org.gradle.api.file.FileCollection
 import org.gradle.api.file.FileTreeElement
 import org.gradle.api.internal.tasks.options.Option
@@ -19,7 +20,7 @@ import org.gradle.api.tasks.Optional
 import org.gradle.api.tasks.TaskAction
 import org.gradle.api.tasks.util.PatternFilterable
 import org.gradle.api.tasks.util.PatternSet
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLoggerFactory
 import org.gradle.util.ConfigureUtil
 
 import javax.inject.Inject
@@ -69,6 +70,10 @@ class RandomizedTestingTask extends DefaultTask {
     @Input
     String ifNoTests = 'ignore'
 
+    @Optional
+    @Input
+    String onNonEmptyWorkDirectory = 'fail'
+
     TestLoggingConfiguration testLoggingConfig = new TestLoggingConfiguration()
 
     BalancersConfiguration balancersConfig = new BalancersConfiguration(task: this)
@@ -81,6 +86,7 @@ class RandomizedTestingTask extends DefaultTask {
     String argLine = null
 
     Map<String, Object> systemProperties = new HashMap<>()
+    Map<String, Object> environmentVariables = new HashMap<>()
     PatternFilterable patternSet = new PatternSet()
 
     RandomizedTestingTask() {
@@ -91,7 +97,7 @@ class RandomizedTestingTask extends DefaultTask {
 
     @Inject
     ProgressLoggerFactory getProgressLoggerFactory() {
-        throw new UnsupportedOperationException();
+        throw new UnsupportedOperationException()
     }
 
     void jvmArgs(Iterable<String> arguments) {
@@ -106,6 +112,10 @@ class RandomizedTestingTask extends DefaultTask {
         systemProperties.put(property, value)
     }
 
+    void environment(String key, Object value) {
+        environmentVariables.put(key, value)
+    }
+
     void include(String... includes) {
         this.patternSet.include(includes);
     }
@@ -194,7 +204,9 @@ class RandomizedTestingTask extends DefaultTask {
             haltOnFailure: true, // we want to capture when a build failed, but will decide whether to rethrow later
             shuffleOnSlave: shuffleOnSlave,
             leaveTemporary: leaveTemporary,
-            ifNoTests: ifNoTests
+            ifNoTests: ifNoTests,
+            onNonEmptyWorkDirectory: onNonEmptyWorkDirectory,
+            newenvironment: true
         ]
 
         DefaultLogger listener = null
@@ -248,8 +260,16 @@ class RandomizedTestingTask extends DefaultTask {
                     }
                 }
                 for (Map.Entry<String, Object> prop : systemProperties) {
+                    if (prop.getKey().equals('tests.seed')) {
+                        throw new InvalidUserDataException('Seed should be ' +
+                            'set on the project instead of a system property')
+                    }
                     sysproperty key: prop.getKey(), value: prop.getValue().toString()
                 }
+                systemProperty 'tests.seed', project.testSeed
+                for (Map.Entry<String, Object> envvar : environmentVariables) {
+                    env key: envvar.getKey(), value: envvar.getValue().toString()
+                }
                 makeListeners()
             }
         } catch (BuildException e) {
diff --git a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy
index 14f5d476be3cb..da25afa938916 100644
--- a/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy
+++ b/buildSrc/src/main/groovy/com/carrotsearch/gradle/junit4/TestProgressLogger.groovy
@@ -25,8 +25,8 @@ import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedStartEvent
 import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedSuiteResultEvent
 import com.carrotsearch.ant.tasks.junit4.events.aggregated.AggregatedTestResultEvent
 import com.carrotsearch.ant.tasks.junit4.listeners.AggregatedEventListener
-import org.gradle.logging.ProgressLogger
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLogger
+import org.gradle.internal.logging.progress.ProgressLoggerFactory
 
 import static com.carrotsearch.ant.tasks.junit4.FormattingUtils.formatDurationInSeconds
 import static com.carrotsearch.ant.tasks.junit4.events.aggregated.TestStatus.ERROR
@@ -77,7 +77,7 @@ class TestProgressLogger implements AggregatedEventListener {
     /** Have we finished a whole suite yet? */
     volatile boolean suiteFinished = false
     /* Note that we probably overuse volatile here but it isn't hurting us and
-      lets us move things around without worying about breaking things. */
+       lets us move things around without worrying about breaking things. */
 
     @Subscribe
     void onStart(AggregatedStartEvent e) throws IOException {
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy
index 01bab85b0199a..af7716804bf86 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/BuildPlugin.groovy
@@ -18,7 +18,9 @@
  */
 package org.elasticsearch.gradle
 
+import com.carrotsearch.gradle.junit4.RandomizedTestingTask
 import nebula.plugin.extraconfigurations.ProvidedBasePlugin
+import org.apache.tools.ant.taskdefs.condition.Os
 import org.elasticsearch.gradle.precommit.PrecommitTasks
 import org.gradle.api.GradleException
 import org.gradle.api.InvalidUserDataException
@@ -118,9 +120,10 @@ class BuildPlugin implements Plugin {
                 println "  JDK Version           : ${gradleJavaVersionDetails}"
                 println "  JAVA_HOME             : ${gradleJavaHome}"
             }
+            println "  Random Testing Seed   : ${project.testSeed}"
 
             // enforce gradle version
-            GradleVersion minGradle = GradleVersion.version('2.13')
+            GradleVersion minGradle = GradleVersion.version('3.3')
             if (GradleVersion.current() < minGradle) {
                 throw new GradleException("${minGradle} or above is required to build elasticsearch")
             }
@@ -201,19 +204,28 @@ class BuildPlugin implements Plugin {
 
     /** Runs the given javascript using jjs from the jdk, and returns the output */
     private static String runJavascript(Project project, String javaHome, String script) {
-        File tmpScript = File.createTempFile('es-gradle-tmp', '.js')
-        tmpScript.setText(script, 'UTF-8')
-        ByteArrayOutputStream output = new ByteArrayOutputStream()
+        ByteArrayOutputStream stdout = new ByteArrayOutputStream()
+        ByteArrayOutputStream stderr = new ByteArrayOutputStream()
+        if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+            // gradle/groovy does not properly escape the double quote for windows
+            script = script.replace('"', '\\"')
+        }
+        File jrunscriptPath = new File(javaHome, 'bin/jrunscript')
         ExecResult result = project.exec {
-            executable = new File(javaHome, 'bin/jjs')
-            args tmpScript.toString()
-            standardOutput = output
-            errorOutput = new ByteArrayOutputStream()
-            ignoreExitValue = true // we do not fail so we can first cleanup the tmp file
+            executable = jrunscriptPath
+            args '-e', script
+            standardOutput = stdout
+            errorOutput = stderr
+            ignoreExitValue = true
+        }
+        if (result.exitValue != 0) {
+            project.logger.error("STDOUT:")
+            stdout.toString('UTF-8').eachLine { line -> project.logger.error(line) }
+            project.logger.error("STDERR:")
+            stderr.toString('UTF-8').eachLine { line -> project.logger.error(line) }
+            result.rethrowFailure()
         }
-        java.nio.file.Files.delete(tmpScript.toPath())
-        result.assertNormalExitValue()
-        return output.toString('UTF-8').trim()
+        return stdout.toString('UTF-8').trim()
     }
 
     /** Return the configuration name used for finding transitive deps of the given dependency. */
@@ -309,7 +321,6 @@ class BuildPlugin implements Plugin {
      * 
      */
     private static Closure fixupDependencies(Project project) {
-        // TODO: revisit this when upgrading to Gradle 2.14+, see Javadoc comment above
         return { XmlProvider xml ->
             // first find if we have dependencies at all, and grab the node
             NodeList depsNodes = xml.asNode().get('dependencies')
@@ -332,6 +343,13 @@ class BuildPlugin implements Plugin {
                     depNode.scope*.value = 'compile'
                 }
 
+                // remove any exclusions added by gradle, they contain wildcards and systems like ivy have bugs with wildcards
+                // see https://github.com/elastic/elasticsearch/issues/24490
+                NodeList exclusionsNode = depNode.get('exclusions')
+                if (exclusionsNode.size() > 0) {
+                    depNode.remove(exclusionsNode.get(0))
+                }
+
                 // collect the transitive deps now that we know what this dependency is
                 String depConfig = transitiveDepConfigName(groupId, artifactId, version)
                 Configuration configuration = project.configurations.findByName(depConfig)
@@ -418,8 +436,10 @@ class BuildPlugin implements Plugin {
                     // hack until gradle supports java 9's new "--release" arg
                     assert minimumJava == JavaVersion.VERSION_1_8
                     options.compilerArgs << '--release' << '8'
-                    project.sourceCompatibility = null
-                    project.targetCompatibility = null
+                    doFirst{
+                        sourceCompatibility = null
+                        targetCompatibility = null
+                    }
                 }
             }
         }
@@ -466,7 +486,7 @@ class BuildPlugin implements Plugin {
                         'Build-Java-Version': project.javaVersion)
                 if (jarTask.manifest.attributes.containsKey('Change') == false) {
                     logger.warn('Building without git revision id.')
-                    jarTask.manifest.attributes('Change': 'N/A')
+                    jarTask.manifest.attributes('Change': 'Unknown')
                 }
             }
         }
@@ -478,16 +498,12 @@ class BuildPlugin implements Plugin {
             jvm "${project.javaHome}/bin/java"
             parallelism System.getProperty('tests.jvms', 'auto')
             ifNoTests 'fail'
+            onNonEmptyWorkDirectory 'wipe'
             leaveTemporary true
 
             // TODO: why are we not passing maxmemory to junit4?
             jvmArg '-Xmx' + System.getProperty('tests.heap.size', '512m')
             jvmArg '-Xms' + System.getProperty('tests.heap.size', '512m')
-            if (JavaVersion.current().isJava7()) {
-                // some tests need a large permgen, but that only exists on java 7
-                jvmArg '-XX:MaxPermSize=128m'
-            }
-            jvmArg '-XX:MaxDirectMemorySize=512m'
             jvmArg '-XX:+HeapDumpOnOutOfMemoryError'
             File heapdumpDir = new File(project.buildDir, 'heapdump')
             heapdumpDir.mkdirs()
@@ -510,16 +526,19 @@ class BuildPlugin implements Plugin {
             systemProperty 'tests.logger.level', 'WARN'
             for (Map.Entry property : System.properties.entrySet()) {
                 if (property.getKey().startsWith('tests.') ||
-                    property.getKey().startsWith('es.')) {
+                        property.getKey().startsWith('es.')) {
+                    if (property.getKey().equals('tests.seed')) {
+                        /* The seed is already set on the project so we
+                         * shouldn't attempt to override it. */
+                        continue;
+                    }
                     systemProperty property.getKey(), property.getValue()
                 }
             }
 
-            // System assertions (-esa) are disabled for now because of what looks like a
-            // JDK bug triggered by Groovy on JDK7. We should look at re-enabling system
-            // assertions when we upgrade to a new version of Groovy (currently 2.4.4) or
-            // require JDK8. See https://issues.apache.org/jira/browse/GROOVY-7528.
-            enableSystemAssertions false
+            boolean assertionsEnabled = Boolean.parseBoolean(System.getProperty('tests.asserts', 'true'))
+            enableSystemAssertions assertionsEnabled
+            enableAssertions assertionsEnabled
 
             testLogging {
                 showNumFailuresAtEnd 25
@@ -560,11 +579,22 @@ class BuildPlugin implements Plugin {
 
     /** Configures the test task */
     static Task configureTest(Project project) {
-        Task test = project.tasks.getByName('test')
+        RandomizedTestingTask test = project.tasks.getByName('test')
         test.configure(commonTestConfig(project))
         test.configure {
             include '**/*Tests.class'
         }
+
+        // Add a method to create additional unit tests for a project, which will share the same
+        // randomized testing setup, but by default run no tests.
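+        // For example (hypothetical task name and pattern), a build script could call:
+        //   additionalTest('exampleTests') { include '**/*ExampleTests.class' }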
+        project.extensions.add('additionalTest', { String name, Closure config ->
+            RandomizedTestingTask additionalTest = project.tasks.create(name, RandomizedTestingTask.class)
+            additionalTest.classpath = test.classpath
+            additionalTest.testClassesDir = test.testClassesDir
+            additionalTest.configure(commonTestConfig(project))
+            additionalTest.configure(config)
+            test.dependsOn(additionalTest)
+        });
         return test
     }
 
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/NoticeTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/NoticeTask.groovy
new file mode 100644
index 0000000000000..928298db7bfc2
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/NoticeTask.groovy
@@ -0,0 +1,99 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.gradle
+
+import org.gradle.api.DefaultTask
+import org.gradle.api.Project
+import org.gradle.api.artifacts.Configuration
+import org.gradle.api.tasks.InputFile
+import org.gradle.api.tasks.OutputFile
+import org.gradle.api.tasks.TaskAction
+
+/**
+ * A task to create a notice file which includes dependencies' notices.
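+ * <p>
+ * A minimal usage sketch (the task name and directory are illustrative only):
+ * <pre>
+ *   task exampleNotice(type: NoticeTask) {
+ *     licensesDir new File(project.projectDir, 'some-module/licenses')
+ *   }
+ * </pre>
+ * The generated file concatenates the root NOTICE.txt with every matching
+ * {@code *-NOTICE.txt} (and its companion {@code *-LICENSE.txt}) found in the
+ * configured licenses directories.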
+ */
+public class NoticeTask extends DefaultTask {
+
+    @InputFile
+    File inputFile = project.rootProject.file('NOTICE.txt')
+
+    @OutputFile
+    File outputFile = new File(project.buildDir, "notices/${name}/NOTICE.txt")
+
+    /** Directories to include notices from */
+    private List<File> licensesDirs = new ArrayList<>()
+
+    public NoticeTask() {
+        description = 'Create a notice file from dependencies'
+        // Default licenses directory is ${projectDir}/licenses (if it exists)
+        File licensesDir = new File(project.projectDir, 'licenses')
+        if (licensesDir.exists()) {
+            licensesDirs.add(licensesDir)
+        }
+    }
+
+    /** Add notices from the specified directory. */
+    public void licensesDir(File licensesDir) {
+        licensesDirs.add(licensesDir)
+    }
+
+    @TaskAction
+    public void generateNotice() {
+        StringBuilder output = new StringBuilder()
+        output.append(inputFile.getText('UTF-8'))
+        output.append('\n\n')
+        // This is a map rather than a set so that the sort order is the 3rd
+        // party component names, unaffected by the full path to the various files
+        Map<String, File> seen = new TreeMap<>()
+        for (File licensesDir : licensesDirs) {
+            licensesDir.eachFileMatch({ it ==~ /.*-NOTICE\.txt/ }) { File file ->
+                String name = file.name.substring(0, file.name.length() - '-NOTICE.txt'.length())
+                if (seen.containsKey(name)) {
+                    File prevFile = seen.get(name)
+                    if (prevFile.text != file.text) {
+                        throw new RuntimeException("Two different notices exist for dependency '" +
+                                name + "': " + prevFile + " and " + file)
+                    }
+                } else {
+                    seen.put(name, file)
+                }
+            }
+        }
+        for (Map.Entry entry : seen.entrySet()) {
+            String name = entry.getKey()
+            File file = entry.getValue()
+            appendFile(file, name, 'NOTICE', output)
+            appendFile(new File(file.parentFile, "${name}-LICENSE.txt"), name, 'LICENSE', output)
+        }
+        outputFile.setText(output.toString(), 'UTF-8')
+    }
+
+    static void appendFile(File file, String name, String type, StringBuilder output) {
+        String text = file.getText('UTF-8')
+        if (text.trim().isEmpty()) {
+            return
+        }
+        output.append('================================================================================\n')
+        output.append("${name} ${type}\n")
+        output.append('================================================================================\n')
+        output.append(text)
+        output.append('\n\n')
+    }
+}
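For context, a hedged sketch of wiring this task up by hand in a build script; the task name and extra directory are assumptions, not part of this change:

```groovy
// Hypothetical standalone use of NoticeTask in a build.gradle. By default the task
// reads the root NOTICE.txt and any ${projectDir}/licenses/*-NOTICE.txt files; here
// an additional notices directory is registered as well.
task exampleNotice(type: org.elasticsearch.gradle.NoticeTask) {
    licensesDir file('example-licenses')   // placeholder directory
}
```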
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy
new file mode 100644
index 0000000000000..b59f26381f2f3
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/Version.groovy
@@ -0,0 +1,78 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.gradle
+
+import groovy.transform.Sortable
+
+/**
+ * Encapsulates comparison and printing logic for an x.y.z version.
+ */
+@Sortable(includes=['id'])
+public class Version {
+
+    final int major
+    final int minor
+    final int bugfix
+    final int id
+    final boolean snapshot
+
+    public Version(int major, int minor, int bugfix, boolean snapshot) {
+        this.major = major
+        this.minor = minor
+        this.bugfix = bugfix
+        this.snapshot = snapshot
+        this.id = major * 100000 + minor * 1000 + bugfix * 10 +
+            (snapshot ? 1 : 0)
+    }
+
+    public static Version fromString(String s) {
+        String[] parts = s.split('\\.')
+        String bugfix = parts[2]
+        boolean snapshot = false
+        if (bugfix.contains('-')) {
+            snapshot = bugfix.endsWith('-SNAPSHOT')
+            bugfix = bugfix.split('-')[0]
+        }
+        return new Version(parts[0] as int, parts[1] as int, bugfix as int,
+            snapshot)
+    }
+
+    @Override
+    public String toString() {
+        String snapshotStr = snapshot ? '-SNAPSHOT' : ''
+        return "${major}.${minor}.${bugfix}${snapshotStr}"
+    }
+
+    public boolean before(String compareTo) {
+        return id < fromString(compareTo).id
+    }
+
+    public boolean onOrBefore(String compareTo) {
+        return id <= fromString(compareTo).id
+    }
+
+    public boolean onOrAfter(String compareTo) {
+        return id >= fromString(compareTo).id
+    }
+
+    public boolean after(String compareTo) {
+        return id > fromString(compareTo).id
+    }
+}
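A small sketch of how the comparison helpers behave, assuming the buildSrc classes are on the script classpath; the version strings are arbitrary examples:

```groovy
import org.elasticsearch.gradle.Version

// id = major * 100000 + minor * 1000 + bugfix * 10 (+1 for -SNAPSHOT), so:
def v = Version.fromString('5.3.0-SNAPSHOT')
assert v.snapshot && v.toString() == '5.3.0-SNAPSHOT'
assert v.before('6.0.0')        // 503001 < 600000
assert v.onOrAfter('5.3.0')     // 503001 >= 503000; a snapshot sorts just after its release id
```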
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy
index 9270edbb5690e..f126839a8d48a 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy
@@ -167,6 +167,9 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
                  * warning every time. */
                 current.println("  - skip:")
                 current.println("      features: ")
+                current.println("        - stash_in_key")
+                current.println("        - stash_in_path")
+                current.println("        - stash_path_replace")
                 current.println("        - warnings")
             }
             if (test.skipTest) {
@@ -179,12 +182,14 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
             }
             if (test.setup != null) {
                 // Insert a setup defined outside of the docs
-                String setup = setups[test.setup]
-                if (setup == null) {
-                    throw new InvalidUserDataException("Couldn't find setup "
-                        + "for $test")
+                for (String setupName : test.setup.split(',')) {
+                    String setup = setups[setupName]
+                    if (setup == null) {
+                        throw new InvalidUserDataException("Couldn't find setup "
+                                + "for $test")
+                    }
+                    current.println(setup)
                 }
-                current.println(setup)
             }
 
             body(test, false)
@@ -295,7 +300,7 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
             Path dest = outputRoot().toPath().resolve(test.path)
             // Replace the extension
             String fileName = dest.getName(dest.nameCount - 1)
-            dest = dest.parent.resolve(fileName.replace('.asciidoc', '.yaml'))
+            dest = dest.parent.resolve(fileName.replace('.asciidoc', '.yml'))
 
             // Now setup the writer
             Files.createDirectories(dest.parent)
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy
index 518b4da439cf0..94af22f4aa279 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/SnippetsTask.groovy
@@ -90,6 +90,7 @@ public class SnippetsTask extends DefaultTask {
                      * tests cleaner.
                      */
                     subst = subst.replace('$body', '\\$body')
+                    subst = subst.replace('$_path', '\\$_path')
                     // \n is a new line....
                     subst = subst.replace('\\n', '\n')
                     snippet.contents = snippet.contents.replaceAll(
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy
index d5295519ad294..2e11fdc2681bc 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginBuildPlugin.groovy
@@ -19,6 +19,7 @@
 package org.elasticsearch.gradle.plugin
 
 import org.elasticsearch.gradle.BuildPlugin
+import org.elasticsearch.gradle.NoticeTask
 import org.elasticsearch.gradle.test.RestIntegTestTask
 import org.elasticsearch.gradle.test.RunTask
 import org.gradle.api.Project
@@ -62,15 +63,16 @@ public class PluginBuildPlugin extends BuildPlugin {
                 project.ext.set("nebulaPublish.maven.jar", false)
             }
 
-            project.integTest.dependsOn(project.bundlePlugin)
+            project.integTestCluster.dependsOn(project.bundlePlugin)
             project.tasks.run.dependsOn(project.bundlePlugin)
             if (isModule) {
-                project.integTest.clusterConfig.module(project)
+                project.integTestCluster.module(project)
                 project.tasks.run.clusterConfig.module(project)
             } else {
-                project.integTest.clusterConfig.plugin(project.path)
+                project.integTestCluster.plugin(project.path)
                 project.tasks.run.clusterConfig.plugin(project.path)
                 addZipPomGeneration(project)
+                addNoticeGeneration(project)
             }
 
             project.namingConventions {
@@ -94,7 +96,7 @@ public class PluginBuildPlugin extends BuildPlugin {
             provided "com.vividsolutions:jts:${project.versions.jts}"
             provided "org.apache.logging.log4j:log4j-api:${project.versions.log4j}"
             provided "org.apache.logging.log4j:log4j-core:${project.versions.log4j}"
-            provided "net.java.dev.jna:jna:${project.versions.jna}"
+            provided "org.elasticsearch:jna:${project.versions.jna}"
         }
     }
 
@@ -118,12 +120,15 @@ public class PluginBuildPlugin extends BuildPlugin {
         // add the plugin properties and metadata to test resources, so unit tests can
         // know about the plugin (used by test security code to statically initialize the plugin in unit tests)
         SourceSet testSourceSet = project.sourceSets.test
-        testSourceSet.output.dir(buildProperties.generatedResourcesDir, builtBy: 'pluginProperties')
+        testSourceSet.output.dir(buildProperties.descriptorOutput.parentFile, builtBy: 'pluginProperties')
         testSourceSet.resources.srcDir(pluginMetadata)
 
         // create the actual bundle task, which zips up all the files for the plugin
         Zip bundle = project.tasks.create(name: 'bundlePlugin', type: Zip, dependsOn: [project.jar, buildProperties]) {
-            from buildProperties // plugin properties file
+            from(buildProperties.descriptorOutput.parentFile) {
+                // plugin properties file
+                include(buildProperties.descriptorOutput.name)
+            }
             from pluginMetadata // metadata (eg custom security policy)
             from project.jar // this plugin's jar
             from project.configurations.runtime - project.configurations.provided // the dep jars
@@ -244,4 +249,19 @@ public class PluginBuildPlugin extends BuildPlugin {
             }
         }
     }
+
+    protected void addNoticeGeneration(Project project) {
+        File licenseFile = project.pluginProperties.extension.licenseFile
+        if (licenseFile != null) {
+            project.bundlePlugin.from(licenseFile.parentFile) {
+                include(licenseFile.name)
+            }
+        }
+        File noticeFile = project.pluginProperties.extension.noticeFile
+        if (noticeFile != null) {
+            NoticeTask generateNotice = project.tasks.create('generateNotice', NoticeTask.class)
+            generateNotice.inputFile = noticeFile
+            project.bundlePlugin.from(generateNotice)
+        }
+    }
 }
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy
index 5502266693653..1251be265da9a 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesExtension.groovy
@@ -39,10 +39,24 @@ class PluginPropertiesExtension {
     @Input
     String classname
 
+    @Input
+    boolean hasNativeController = false
+
     /** Indicates whether the plugin jar should be made available for the transport client. */
     @Input
     boolean hasClientJar = false
 
+    /** A license file that should be included in the built plugin zip. */
+    @Input
+    File licenseFile = null
+
+    /**
+     * A notice file that should be included in the built plugin zip. This will be
+     * extended with notices from the {@code licenses/} directory.
+     */
+    @Input
+    File noticeFile = null
+
     PluginPropertiesExtension(Project project) {
         name = project.name
         version = project.version
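To illustrate the new extension inputs, a hedged example of an `esplugin` block as it might appear in a plugin's build.gradle; the plugin description, classname, and file paths are placeholders:

```groovy
// Hypothetical esplugin configuration exercising the properties added above.
esplugin {
    description = 'An example plugin'                 // existing required input
    classname = 'org.example.ExamplePlugin'           // existing required input
    hasNativeController = false                        // new: surfaced in plugin-descriptor.properties
    licenseFile = rootProject.file('LICENSE.txt')      // new: copied into the plugin zip
    noticeFile = rootProject.file('NOTICE.txt')        // new: extended with per-dependency notices
}
```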
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy
index 7156c2650cbe0..91efe247a016b 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/plugin/PluginPropertiesTask.groovy
@@ -22,6 +22,7 @@ import org.elasticsearch.gradle.VersionProperties
 import org.gradle.api.InvalidUserDataException
 import org.gradle.api.Task
 import org.gradle.api.tasks.Copy
+import org.gradle.api.tasks.OutputFile
 
 /**
  * Creates a plugin descriptor.
@@ -29,20 +30,22 @@ import org.gradle.api.tasks.Copy
 class PluginPropertiesTask extends Copy {
 
     PluginPropertiesExtension extension
-    File generatedResourcesDir = new File(project.buildDir, 'generated-resources')
+
+    @OutputFile
+    File descriptorOutput = new File(project.buildDir, 'generated-resources/plugin-descriptor.properties')
 
     PluginPropertiesTask() {
-        File templateFile = new File(project.buildDir, 'templates/plugin-descriptor.properties')
+        File templateFile = new File(project.buildDir, "templates/${descriptorOutput.name}")
         Task copyPluginPropertiesTemplate = project.tasks.create('copyPluginPropertiesTemplate') {
             doLast {
-                InputStream resourceTemplate = PluginPropertiesTask.getResourceAsStream('/plugin-descriptor.properties')
+                InputStream resourceTemplate = PluginPropertiesTask.getResourceAsStream("/${descriptorOutput.name}")
                 templateFile.parentFile.mkdirs()
                 templateFile.setText(resourceTemplate.getText('UTF-8'), 'UTF-8')
             }
         }
+
         dependsOn(copyPluginPropertiesTemplate)
         extension = project.extensions.create('esplugin', PluginPropertiesExtension, project)
-        project.clean.delete(generatedResourcesDir)
         project.afterEvaluate {
             // check require properties are set
             if (extension.name == null) {
@@ -55,8 +58,8 @@ class PluginPropertiesTask extends Copy {
                 throw new InvalidUserDataException('classname is a required setting for esplugin')
             }
             // configure property substitution
-            from(templateFile)
-            into(generatedResourcesDir)
+            from(templateFile.parentFile).include(descriptorOutput.name)
+            into(descriptorOutput.parentFile)
             Map properties = generateSubstitutions()
             expand(properties)
             inputs.properties(properties)
@@ -76,7 +79,8 @@ class PluginPropertiesTask extends Copy {
             'version': stringSnap(extension.version),
             'elasticsearchVersion': stringSnap(VersionProperties.elasticsearch),
             'javaVersion': project.targetCompatibility as String,
-            'classname': extension.classname
+            'classname': extension.classname,
+            'hasNativeController': extension.hasNativeController
         ]
     }
 }
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy
index 6fa37be309ec1..4d292d87ec39c 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/DependencyLicensesTask.groovy
@@ -86,6 +86,9 @@ public class DependencyLicensesTask extends DefaultTask {
     /** A map of patterns to prefix, used to find the LICENSE and NOTICE file. */
     private LinkedHashMap mappings = new LinkedHashMap<>()
 
+    /** Names of dependencies whose SHA files should not exist. */
+    private Set ignoreShas = new HashSet<>()
+
     /**
      * Add a mapping from a regex pattern for the jar name, to a prefix to find
      * the LICENSE and NOTICE file for that jar.
@@ -106,6 +109,15 @@ public class DependencyLicensesTask extends DefaultTask {
         mappings.put(from, to)
     }
 
+    /**
+     * Add a rule which will skip SHA checking for the given dependency name. This should be used for
+     * locally built dependencies, whose SHAs would otherwise change constantly.
+     */
+    @Input
+    public void ignoreSha(String dep) {
+        ignoreShas.add(dep)
+    }
+
     @TaskAction
     public void checkDependencies() {
         if (dependencies.isEmpty()) {
@@ -139,19 +151,27 @@ public class DependencyLicensesTask extends DefaultTask {
 
         for (File dependency : dependencies) {
             String jarName = dependency.getName()
-            logger.info("Checking license/notice/sha for " + jarName)
-            checkSha(dependency, jarName, shaFiles)
+            String depName = jarName - ~/\-\d+.*/
+            if (ignoreShas.contains(depName)) {
+                // local deps should not have sha files!
+                if (getShaFile(jarName).exists()) {
+                    throw new GradleException("SHA file ${getShaFile(jarName)} exists for ignored dependency ${depName}")
+                }
+            } else {
+                logger.info("Checking sha for " + jarName)
+                checkSha(dependency, jarName, shaFiles)
+            }
 
-            String name = jarName - ~/\-\d+.*/
-            Matcher match = mappingsPattern.matcher(name)
+            logger.info("Checking license/notice for " + depName)
+            Matcher match = mappingsPattern.matcher(depName)
             if (match.matches()) {
                 int i = 0
                 while (i < match.groupCount() && match.group(i + 1) == null) ++i;
-                logger.info("Mapped dependency name ${name} to ${mapped.get(i)} for license check")
-                name = mapped.get(i)
+                logger.info("Mapped dependency name ${depName} to ${mapped.get(i)} for license check")
+                depName = mapped.get(i)
             }
-            checkFile(name, jarName, licenses, 'LICENSE')
-            checkFile(name, jarName, notices, 'NOTICE')
+            checkFile(depName, jarName, licenses, 'LICENSE')
+            checkFile(depName, jarName, notices, 'NOTICE')
         }
 
         licenses.each { license, count ->
@@ -169,8 +189,12 @@ public class DependencyLicensesTask extends DefaultTask {
         }
     }
 
+    private File getShaFile(String jarName) {
+        return new File(licensesDir, jarName + SHA_EXTENSION)
+    }
+
     private void checkSha(File jar, String jarName, Set shaFiles) {
-        File shaFile = new File(licensesDir, jarName + SHA_EXTENSION)
+        File shaFile = getShaFile(jarName)
         if (shaFile.exists() == false) {
             throw new GradleException("Missing SHA for ${jarName}. Run 'gradle updateSHAs' to create")
         }
@@ -215,6 +239,10 @@ public class DependencyLicensesTask extends DefaultTask {
             }
             for (File dependency : parentTask.dependencies) {
                 String jarName = dependency.getName()
+                String depName = jarName - ~/\-\d+.*/
+                if (parentTask.ignoreShas.contains(depName)) {
+                    continue
+                }
                 File shaFile = new File(parentTask.licensesDir, jarName + SHA_EXTENSION)
                 if (shaFile.exists() == false) {
                     logger.lifecycle("Adding sha for ${jarName}")
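A hedged sketch of how a build script might combine the existing `mapping` rule with the new `ignoreSha` rule; the ignored dependency name is a placeholder:

```groovy
// Hypothetical dependencyLicenses configuration. Locally built dependencies are
// exempted from SHA checks (their SHAs change on every build) but still need
// LICENSE and NOTICE files under licenses/.
dependencyLicenses {
    mapping from: /lucene-.*/, to: 'lucene'   // existing behaviour: many jars, one license prefix
    ignoreSha 'example-local-dep'             // placeholder name using the new rule
}
```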
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/NamingConventionsTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/NamingConventionsTask.groovy
index 52de7dac2d5a3..2711a0e38f23b 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/NamingConventionsTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/NamingConventionsTask.groovy
@@ -38,17 +38,7 @@ public class NamingConventionsTask extends LoggedExec {
      * inputs (ie the jars/class files).
      */
     @OutputFile
-    File successMarker = new File(project.buildDir, 'markers/namingConventions')
-
-    /**
-     * The classpath to run the naming conventions checks against. Must contain the files in the test
-     * output directory and everything required to load those classes.
-     *
-     * We don't declare the actual test files as a dependency or input because if they change then
-     * this will change.
-     */
-    @InputFiles
-    FileCollection classpath = project.sourceSets.test.runtimeClasspath
+    File successMarker = new File(project.buildDir, "markers/${this.name}")
 
     /**
      * Should we skip the integ tests in disguise tests? Defaults to true because only core names its
@@ -69,18 +59,35 @@ public class NamingConventionsTask extends LoggedExec {
     @Input
     String integTestClass = 'org.elasticsearch.test.ESIntegTestCase'
 
+    /**
+     * Should the check look for test classes on the main classpath instead of
+     * performing the usual checks against the test classpath?
+     */
+    @Input
+    boolean checkForTestsInMain = false;
+
     public NamingConventionsTask() {
         // Extra classpath contains the actual test
-        project.configurations.create('namingConventions')
-        Dependency buildToolsDep = project.dependencies.add('namingConventions',
-                "org.elasticsearch.gradle:build-tools:${VersionProperties.elasticsearch}")
-        buildToolsDep.transitive = false // We don't need gradle in the classpath. It conflicts.
+        if (false == project.configurations.names.contains('namingConventions')) {
+            project.configurations.create('namingConventions')
+            Dependency buildToolsDep = project.dependencies.add('namingConventions',
+                    "org.elasticsearch.gradle:build-tools:${VersionProperties.elasticsearch}")
+            buildToolsDep.transitive = false // We don't need gradle in the classpath. It conflicts.
+        }
         FileCollection extraClasspath = project.configurations.namingConventions
         dependsOn(extraClasspath)
 
-        description = "Runs NamingConventionsCheck on ${classpath}"
+        FileCollection classpath = project.sourceSets.test.runtimeClasspath
+        inputs.files(classpath)
+        description = "Tests that test classes aren't misnamed or misplaced"
         executable = new File(project.javaHome, 'bin/java')
-        onlyIf { project.sourceSets.test.output.classesDir.exists() }
+        if (false == checkForTestsInMain) {
+            /* This task is created by default for all subprojects with this
+             * setting and there is no point in running it if the files don't
+             * exist. */
+            onlyIf { project.sourceSets.test.output.classesDir.exists() }
+        }
+
         /*
          * We build the arguments in a funny afterEvaluate/doFirst closure so that we can wait for the classpath to be
          * ready for us. Strangely neither one on their own are good enough.
@@ -104,7 +111,14 @@ public class NamingConventionsTask extends LoggedExec {
                 if (':build-tools'.equals(project.path)) {
                     args('--self-test')
                 }
-                args('--', project.sourceSets.test.output.classesDir.absolutePath)
+                if (checkForTestsInMain) {
+                    args('--main')
+                    args('--')
+                    args(project.sourceSets.main.output.classesDir.absolutePath)
+                } else {
+                    args('--')
+                    args(project.sourceSets.test.output.classesDir.absolutePath)
+                }
             }
         }
         doLast { successMarker.setText("", 'UTF-8') }
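As a sketch of the new flag in use (the task name and wiring are assumptions, not part of this change):

```groovy
// Hypothetical second naming-conventions check that scans the main source set for
// misplaced test classes; the guard around the shared 'namingConventions'
// configuration above makes creating a second task instance safe.
task exampleNamingConventionsMain(type: org.elasticsearch.gradle.precommit.NamingConventionsTask) {
    checkForTestsInMain = true
}
```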
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/PrecommitTasks.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/PrecommitTasks.groovy
index f451beeceb826..f7b30e774e340 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/PrecommitTasks.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/PrecommitTasks.groovy
@@ -91,6 +91,7 @@ class PrecommitTasks {
         if (testForbidden != null) {
             testForbidden.configure {
                 signaturesURLs += getClass().getResource('/forbidden/es-test-signatures.txt')
+                signaturesURLs += getClass().getResource('/forbidden/http-signatures.txt')
             }
         }
         Task forbiddenApis = project.tasks.findByName('forbiddenApis')
@@ -139,6 +140,7 @@ class PrecommitTasks {
             configProperties = [
                 suppressions: checkstyleSuppressions
             ]
+            toolVersion = 7.5
         }
         for (String taskName : ['checkstyleMain', 'checkstyleTest']) {
             Task task = project.tasks.findByName(taskName)
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ThirdPartyAuditTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ThirdPartyAuditTask.groovy
index 018f9fde2f2c4..33ca6dccfa32e 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ThirdPartyAuditTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/precommit/ThirdPartyAuditTask.groovy
@@ -209,9 +209,11 @@ public class ThirdPartyAuditTask extends AntTask {
         try {
             ant.thirdPartyAudit(failOnUnsupportedJava: false,
                             failOnMissingClasses: false,
-                            signaturesFile: new File(getClass().getResource('/forbidden/third-party-audit.txt').toURI()),
                             classpath: classpath.asPath) {
                 fileset(dir: tmpDir)
+                signatures {
+                    string(value: getClass().getResourceAsStream('/forbidden/third-party-audit.txt').getText('UTF-8'))
+                }
             }
         } catch (BuildException ignore) {}
 
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/AntFixture.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/AntFixture.groovy
new file mode 100644
index 0000000000000..34c3046aa2b6b
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/AntFixture.groovy
@@ -0,0 +1,291 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.gradle.test
+
+import org.apache.tools.ant.taskdefs.condition.Os
+import org.elasticsearch.gradle.AntTask
+import org.elasticsearch.gradle.LoggedExec
+import org.gradle.api.GradleException
+import org.gradle.api.Task
+import org.gradle.api.tasks.Exec
+import org.gradle.api.tasks.Input
+
+/**
+ * A fixture for integration tests which runs in a separate process launched by Ant.
+ */
+public class AntFixture extends AntTask implements Fixture {
+
+    /** The path to the executable that starts the fixture. */
+    @Input
+    String executable
+
+    private final List arguments = new ArrayList<>()
+
+    @Input
+    public void args(Object... args) {
+        arguments.addAll(args)
+    }
+
+    /**
+     * Environment variables for the fixture process. The value can be any object, which
+     * will have toString() called at execution time.
+     */
+    private final Map environment = new HashMap<>()
+
+    @Input
+    public void env(String key, Object value) {
+        environment.put(key, value)
+    }
+
+    /** A flag to indicate whether the command should be executed from a shell. */
+    @Input
+    boolean useShell = false
+
+    /**
+     * A flag to indicate whether the fixture should be run in the foreground, or spawned.
+     * It is protected so subclasses can override it (e.g. RunTask).
+     */
+    protected boolean spawn = true
+
+    /**
+     * A closure to call before the fixture is considered ready. The closure is passed the fixture object,
+     * as well as a groovy AntBuilder, to enable running ant condition checks. The default wait
+     * condition is for http on the http port.
+     */
+    @Input
+    Closure waitCondition = { AntFixture fixture, AntBuilder ant ->
+        File tmpFile = new File(fixture.cwd, 'wait.success')
+        ant.get(src: "http://${fixture.addressAndPort}",
+                dest: tmpFile.toString(),
+                ignoreerrors: true, // do not fail on error, so logging information can be flushed
+                retries: 10)
+        return tmpFile.exists()
+    }
+
+    private final Task stopTask
+
+    public AntFixture() {
+        stopTask = createStopTask()
+        finalizedBy(stopTask)
+    }
+
+    @Override
+    public Task getStopTask() {
+        return stopTask
+    }
+
+    @Override
+    protected void runAnt(AntBuilder ant) {
+        project.delete(baseDir) // reset everything
+        cwd.mkdirs()
+        final String realExecutable
+        final List realArgs = new ArrayList<>()
+        final Map realEnv = environment
+        // We need to choose which executable we are using. In shell mode, or when we
+        // are spawning and thus using the wrapper script, the executable is the shell.
+        if (useShell || spawn) {
+            if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+                realExecutable = 'cmd'
+                realArgs.add('/C')
+                realArgs.add('"') // quote the entire command
+            } else {
+                realExecutable = 'sh'
+            }
+        } else {
+            realExecutable = executable
+            realArgs.addAll(arguments)
+        }
+        if (spawn) {
+            writeWrapperScript(executable)
+            realArgs.add(wrapperScript)
+            realArgs.addAll(arguments)
+        }
+        if (Os.isFamily(Os.FAMILY_WINDOWS) && (useShell || spawn)) {
+            realArgs.add('"')
+        }
+        commandString.eachLine { line -> logger.info(line) }
+
+        ant.exec(executable: realExecutable, spawn: spawn, dir: cwd, taskname: name) {
+            realEnv.each { key, value -> env(key: key, value: value) }
+            realArgs.each { arg(value: it) }
+        }
+
+        String failedProp = "failed${name}"
+        // first wait for resources, or the failure marker from the wrapper script
+        ant.waitfor(maxwait: '30', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond', timeoutproperty: failedProp) {
+            or {
+                resourceexists {
+                    file(file: failureMarker.toString())
+                }
+                and {
+                    resourceexists {
+                        file(file: pidFile.toString())
+                    }
+                    resourceexists {
+                        file(file: portsFile.toString())
+                    }
+                }
+            }
+        }
+
+        if (ant.project.getProperty(failedProp) || failureMarker.exists()) {
+            fail("Failed to start ${name}")
+        }
+
+        // the process is started (has a pid) and is bound to a network interface
+        // so now wait until the waitCondition has been met
+        // TODO: change this to a loop?
+        boolean success
+        try {
+            success = waitCondition(this, ant)
+        } catch (Exception e) {
+            String msg = "Wait condition caught exception for ${name}"
+            logger.error(msg, e)
+            fail(msg, e)
+        }
+        if (success == false) {
+            fail("Wait condition failed for ${name}")
+        }
+    }
+
+    /** Returns a debug string used to log information about how the fixture was run. */
+    protected String getCommandString() {
+        String commandString = "\n${name} configuration:\n"
+        commandString += "-----------------------------------------\n"
+        commandString += "  cwd: ${cwd}\n"
+        commandString += "  command: ${executable} ${arguments.join(' ')}\n"
+        commandString += '  environment:\n'
+        environment.each { k, v -> commandString += "    ${k}: ${v}\n" }
+        if (spawn) {
+            commandString += "\n  [${wrapperScript.name}]\n"
+            wrapperScript.eachLine('UTF-8', { line -> commandString += "    ${line}\n"})
+        }
+        return commandString
+    }
+
+    /**
+     * Writes a script to run the real executable, so that stdout/stderr can be captured.
+     * TODO: this could be removed if we used our own ProcessBuilder and pumped output from the process
+     */
+    private void writeWrapperScript(String executable) {
+        wrapperScript.parentFile.mkdirs()
+        String argsPasser = '"$@"'
+        String exitMarker = "; if [ \$? != 0 ]; then touch run.failed; fi"
+        if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+            argsPasser = '%*'
+            exitMarker = "\r\n if \"%errorlevel%\" neq \"0\" ( type nul >> run.failed )"
+        }
+        wrapperScript.setText("\"${executable}\" ${argsPasser} > run.log 2>&1 ${exitMarker}", 'UTF-8')
+    }
+
+    /** Fail the build with the given message, logging relevant info. */
+    private void fail(String msg, Exception... suppressed) {
+        if (logger.isInfoEnabled() == false) {
+            // We already log the command at info level. No need to do it twice.
+            commandString.eachLine { line -> logger.error(line) }
+        }
+        logger.error("${name} output:")
+        logger.error("-----------------------------------------")
+        logger.error("  failure marker exists: ${failureMarker.exists()}")
+        logger.error("  pid file exists: ${pidFile.exists()}")
+        logger.error("  ports file exists: ${portsFile.exists()}")
+        // also dump the log file for the startup script (which will include ES logging output to stdout)
+        if (runLog.exists()) {
+            logger.error("\n  [log]")
+            runLog.eachLine { line -> logger.error("    ${line}") }
+        }
+        logger.error("-----------------------------------------")
+        GradleException toThrow = new GradleException(msg)
+        for (Exception e : suppressed) {
+            toThrow.addSuppressed(e)
+        }
+        throw toThrow
+    }
+
+    /** Adds a task to kill the fixture process identified by the pid file. */
+    private Task createStopTask() {
+        final AntFixture fixture = this
+        final Object pid = "${ -> fixture.pid }"
+        Exec stop = project.tasks.create(name: "${name}#stop", type: LoggedExec)
+        stop.onlyIf { fixture.pidFile.exists() }
+        stop.doFirst {
+            logger.info("Shutting down ${fixture.name} with pid ${pid}")
+        }
+        if (Os.isFamily(Os.FAMILY_WINDOWS)) {
+            stop.executable = 'Taskkill'
+            stop.args('/PID', pid, '/F')
+        } else {
+            stop.executable = 'kill'
+            stop.args('-9', pid)
+        }
+        stop.doLast {
+            project.delete(fixture.pidFile)
+        }
+        return stop
+    }
+
+    /**
+     * A path relative to the build dir that all configuration and runtime files
+     * will live in for this fixture
+     */
+    protected File getBaseDir() {
+        return new File(project.buildDir, "fixtures/${name}")
+    }
+
+    /** Returns the working directory for the process. Defaults to "cwd" inside baseDir. */
+    protected File getCwd() {
+        return new File(baseDir, 'cwd')
+    }
+
+    /** Returns the file the process writes its pid to. Defaults to "pid" inside baseDir. */
+    protected File getPidFile() {
+        return new File(baseDir, 'pid')
+    }
+
+    /** Reads the pid file and returns the process' pid */
+    public int getPid() {
+        return Integer.parseInt(pidFile.getText('UTF-8').trim())
+    }
+
+    /** Returns the file the process writes its bound ports to. Defaults to "ports" inside baseDir. */
+    protected File getPortsFile() {
+        return new File(baseDir, 'ports')
+    }
+
+    /** Returns an address and port suitable for a URI to connect to this fixture over HTTP. */
+    public String getAddressAndPort() {
+        return portsFile.readLines("UTF-8").get(0)
+    }
+
+    /** Returns the script file that wraps the actual command when {@code spawn == true}. */
+    protected File getWrapperScript() {
+        return new File(cwd, Os.isFamily(Os.FAMILY_WINDOWS) ? 'run.bat' : 'run')
+    }
+
+    /** Returns a file that the wrapper script writes when the command failed. */
+    protected File getFailureMarker() {
+        return new File(cwd, 'run.failed')
+    }
+
+    /** Returns the file the wrapper script redirects the command's output to. */
+    protected File getRunLog() {
+        return new File(cwd, 'run.log')
+    }
+}
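To make the fixture lifecycle concrete, a hedged build-script sketch; the jar, arguments, and task name are placeholders, and the `integTestCluster` wiring assumes the cluster changes shown further below:

```groovy
import org.elasticsearch.gradle.test.AntFixture

// Hypothetical fixture that spawns a helper process before integration tests run.
// The default waitCondition polls http://<addressAndPort> until the fixture responds.
task exampleFixture(type: AntFixture) {
    executable = "${project.javaHome}/bin/java"   // placeholder launcher
    args '-jar', 'example-fixture.jar'            // placeholder arguments
    env 'EXAMPLE_SETTING', 'value'                // placeholder environment variable
}

integTestCluster {
    // ClusterConfiguration.dependsOn (added below) runs the fixture before the cluster
    // starts, and the fixture's stop task is wired in as a finalizer of the runner.
    dependsOn exampleFixture
}
```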
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy
index 57adaa2576dd1..ab618a0fdc7f7 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterConfiguration.groovy
@@ -46,11 +46,11 @@ class ClusterConfiguration {
     int transportPort = 0
 
     /**
-     * An override of the data directory. This may only be used with a single node.
-     * The value is lazily evaluated at runtime as a String path.
+     * An override of the data directory. The closure is passed the node number
+     * and returns the data directory to use for that node.
      */
     @Input
-    Object dataDir = null
+    Closure dataDir = null
 
     /** Optional override of the cluster name. */
     @Input
@@ -72,11 +72,17 @@ class ClusterConfiguration {
     boolean useMinimumMasterNodes = true
 
     @Input
-    String jvmArgs = "-ea" +
-        " " + "-Xms" + System.getProperty('tests.heap.size', '512m') +
+    String jvmArgs = "-Xms" + System.getProperty('tests.heap.size', '512m') +
         " " + "-Xmx" + System.getProperty('tests.heap.size', '512m') +
         " " + System.getProperty('tests.jvm.argline', '')
 
+    /**
+     * Should the shared environment be cleaned on cluster startup? Defaults
+     * to {@code true} so we run with a clean cluster, but tests that wish to
+     * preserve snapshots between clusters can set this to {@code false}.
+     */
+    @Input
+    boolean cleanShared = true
 
     /**
      * A closure to call which returns the unicast host to connect to for cluster formation.
@@ -90,7 +96,7 @@ class ClusterConfiguration {
         if (seedNode == node) {
             return null
         }
-        ant.waitfor(maxwait: '20', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond') {
+        ant.waitfor(maxwait: '40', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond') {
             resourceexists {
                 file(file: seedNode.transportPortsFile.toString())
             }
@@ -127,6 +133,8 @@ class ClusterConfiguration {
 
     Map settings = new HashMap<>()
 
+    Map keystoreSettings = new HashMap<>()
+
     // map from destination path, to source file
     Map extraConfigFiles = new HashMap<>()
 
@@ -136,6 +144,8 @@ class ClusterConfiguration {
 
     LinkedHashMap setupCommands = new LinkedHashMap<>()
 
+    List dependencies = new ArrayList<>()
+
     @Input
     void systemProperty(String property, String value) {
         systemProperties.put(property, value)
@@ -146,6 +156,11 @@ class ClusterConfiguration {
         settings.put(name, value)
     }
 
+    @Input
+    void keystoreSetting(String name, String value) {
+        keystoreSettings.put(name, value)
+    }
+
     @Input
     void plugin(String path) {
         Project pluginProject = project.project(path)
@@ -174,4 +189,10 @@ class ClusterConfiguration {
         }
         extraConfigFiles.put(path, sourceFile)
     }
+
+    /** Add dependencies that must run before the first task that sets up the cluster. */
+    @Input
+    void dependsOn(Object... deps) {
+        dependencies.addAll(deps)
+    }
 }
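Finally, a brief hedged example of the new cluster inputs as they might appear in an `integTestCluster` block; the setting names and paths are placeholders:

```groovy
// Hypothetical integTestCluster configuration exercising the new options above.
integTestCluster {
    keystoreSetting 'example.secure.setting', 'example-value'   // added via elasticsearch-keystore
    cleanShared = false                                          // keep the shared repo/snapshot dir between clusters
    dataDir = { int nodeNumber -> "${buildDir}/example-data/${nodeNumber}" }   // per-node data path override
}
```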
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy
index 756c05b07d523..4dbf3efe595f9 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/ClusterFormationTasks.groovy
@@ -38,6 +38,7 @@ import org.gradle.api.tasks.Copy
 import org.gradle.api.tasks.Delete
 import org.gradle.api.tasks.Exec
 
+import java.nio.charset.StandardCharsets
 import java.nio.file.Paths
 import java.util.concurrent.TimeUnit
 
@@ -51,22 +52,28 @@ class ClusterFormationTasks {
      *
      * Returns a list of NodeInfo objects for each node in the cluster.
      */
-    static List setup(Project project, Task task, ClusterConfiguration config) {
-        if (task.getEnabled() == false) {
-            // no need to add cluster formation tasks if the task won't run!
-            return
-        }
+    static List setup(Project project, String prefix, Task runner, ClusterConfiguration config) {
         File sharedDir = new File(project.buildDir, "cluster/shared")
-        // first we remove everything in the shared cluster directory to ensure there are no leftovers in repos or anything
-        // in theory this should not be necessary but repositories are only deleted in the cluster-state and not on-disk
-        // such that snapshots survive failures / test runs and there is no simple way today to fix that.
-        Task cleanup = project.tasks.create(name: "${task.name}#prepareCluster.cleanShared", type: Delete, dependsOn: task.dependsOn.collect()) {
-            delete sharedDir
-            doLast {
-                sharedDir.mkdirs()
-            }
-        }
-        List startTasks = [cleanup]
+        Object startDependencies = config.dependencies
+        /* First, if we want a clean environment, we remove everything in the
+         * shared cluster directory to ensure there are no leftovers in repos
+         * or anywhere else. In theory this should not be necessary, but
+         * repositories are only deleted in the cluster state and not on disk,
+         * so snapshots survive failures / test runs and there is no simple
+         * way today to fix that. */
+        if (config.cleanShared) {
+          Task cleanup = project.tasks.create(
+            name: "${prefix}#prepareCluster.cleanShared",
+            type: Delete,
+            dependsOn: startDependencies) {
+              delete sharedDir
+              doLast {
+                  sharedDir.mkdirs()
+              }
+          }
+          startDependencies = cleanup
+        }
+        List startTasks = []
         List nodes = []
         if (config.numNodes < config.numBwcNodes) {
             throw new GradleException("numNodes must be >= numBwcNodes [${config.numNodes} < ${config.numBwcNodes}]")
@@ -75,25 +82,25 @@ class ClusterFormationTasks {
             throw new GradleException("bwcVersion must not be null if numBwcNodes is > 0")
         }
         // this is our current version distribution configuration we use for all kinds of REST tests etc.
-        String distroConfigName = "${task.name}_elasticsearchDistro"
-        Configuration currentDistro = project.configurations.create(distroConfigName)
+        Configuration currentDistro = project.configurations.create("${prefix}_elasticsearchDistro")
+        Configuration bwcDistro = project.configurations.create("${prefix}_elasticsearchBwcDistro")
+        Configuration bwcPlugins = project.configurations.create("${prefix}_elasticsearchBwcPlugins")
         configureDistributionDependency(project, config.distribution, currentDistro, VersionProperties.elasticsearch)
-        if (config.bwcVersion != null && config.numBwcNodes > 0) {
+        if (config.numBwcNodes > 0) {
+            if (config.bwcVersion == null) {
+                throw new IllegalArgumentException("Must specify bwcVersion when numBwcNodes > 0")
+            }
             // if we have a cluster that has a BWC cluster we also need to configure a dependency on the BWC version
             // this version uses the same distribution etc. and only differs in the version we depend on.
             // from here on everything else works the same as if it's the current version, we fetch the BWC version
             // from mirrors using gradles built-in mechanism etc.
-            project.configurations {
-                elasticsearchBwcDistro
-                elasticsearchBwcPlugins
-            }
-            configureDistributionDependency(project, config.distribution, project.configurations.elasticsearchBwcDistro, config.bwcVersion)
+
+            configureDistributionDependency(project, config.distribution, bwcDistro, config.bwcVersion)
             for (Map.Entry entry : config.plugins.entrySet()) {
-                configureBwcPluginDependency("${task.name}_elasticsearchBwcPlugins", project, entry.getValue(),
-                        project.configurations.elasticsearchBwcPlugins, config.bwcVersion)
+                configureBwcPluginDependency("${prefix}_elasticsearchBwcPlugins", project, entry.getValue(), bwcPlugins, config.bwcVersion)
             }
-            project.configurations.elasticsearchBwcDistro.resolutionStrategy.cacheChangingModulesFor(0, TimeUnit.SECONDS)
-            project.configurations.elasticsearchBwcPlugins.resolutionStrategy.cacheChangingModulesFor(0, TimeUnit.SECONDS)
+            bwcDistro.resolutionStrategy.cacheChangingModulesFor(0, TimeUnit.SECONDS)
+            bwcPlugins.resolutionStrategy.cacheChangingModulesFor(0, TimeUnit.SECONDS)
         }
         for (int i = 0; i < config.numNodes; i++) {
             // we start N nodes and out of these N nodes there might be M bwc nodes.
@@ -102,15 +109,16 @@ class ClusterFormationTasks {
             Configuration distro = currentDistro
             if (i < config.numBwcNodes) {
                 elasticsearchVersion = config.bwcVersion
-                distro = project.configurations.elasticsearchBwcDistro
+                distro = bwcDistro
             }
-            NodeInfo node = new NodeInfo(config, i, project, task, elasticsearchVersion, sharedDir)
+            NodeInfo node = new NodeInfo(config, i, project, prefix, elasticsearchVersion, sharedDir)
             nodes.add(node)
-            startTasks.add(configureNode(project, task, cleanup, node, distro, nodes.get(0)))
+            Object dependsOn = startTasks.empty ? startDependencies : startTasks.get(0)
+            startTasks.add(configureNode(project, prefix, runner, dependsOn, node, config, distro, nodes.get(0)))
         }
 
-        Task wait = configureWaitTask("${task.name}#wait", project, nodes, startTasks)
-        task.dependsOn(wait)
+        Task wait = configureWaitTask("${prefix}#wait", project, nodes, startTasks)
+        runner.dependsOn(wait)
 
         return nodes
     }
@@ -150,59 +158,71 @@ class ClusterFormationTasks {
      *
      * @return a task which starts the node.
      */
-    static Task configureNode(Project project, Task task, Object dependsOn, NodeInfo node, Configuration configuration, NodeInfo seedNode) {
+    static Task configureNode(Project project, String prefix, Task runner, Object dependsOn, NodeInfo node, ClusterConfiguration config,
+                              Configuration distribution, NodeInfo seedNode) {
 
         // tasks are chained so their execution order is maintained
-        Task setup = project.tasks.create(name: taskName(task, node, 'clean'), type: Delete, dependsOn: dependsOn) {
+        Task setup = project.tasks.create(name: taskName(prefix, node, 'clean'), type: Delete, dependsOn: dependsOn) {
             delete node.homeDir
             delete node.cwd
             doLast {
                 node.cwd.mkdirs()
             }
         }
-        setup = configureCheckPreviousTask(taskName(task, node, 'checkPrevious'), project, setup, node)
-        setup = configureStopTask(taskName(task, node, 'stopPrevious'), project, setup, node)
-        setup = configureExtractTask(taskName(task, node, 'extract'), project, setup, node, configuration)
-        setup = configureWriteConfigTask(taskName(task, node, 'configure'), project, setup, node, seedNode)
+
+        setup = configureCheckPreviousTask(taskName(prefix, node, 'checkPrevious'), project, setup, node)
+        setup = configureStopTask(taskName(prefix, node, 'stopPrevious'), project, setup, node)
+        setup = configureExtractTask(taskName(prefix, node, 'extract'), project, setup, node, distribution)
+        setup = configureWriteConfigTask(taskName(prefix, node, 'configure'), project, setup, node, seedNode)
+        setup = configureCreateKeystoreTask(taskName(prefix, node, 'createKeystore'), project, setup, node)
+        setup = configureAddKeystoreSettingTasks(prefix, project, setup, node)
+
         if (node.config.plugins.isEmpty() == false) {
             if (node.nodeVersion == VersionProperties.elasticsearch) {
-                setup = configureCopyPluginsTask(taskName(task, node, 'copyPlugins'), project, setup, node)
+                setup = configureCopyPluginsTask(taskName(prefix, node, 'copyPlugins'), project, setup, node, prefix)
             } else {
-                setup = configureCopyBwcPluginsTask(taskName(task, node, 'copyBwcPlugins'), project, setup, node)
+                setup = configureCopyBwcPluginsTask(taskName(prefix, node, 'copyBwcPlugins'), project, setup, node, prefix)
             }
         }
 
         // install modules
         for (Project module : node.config.modules) {
             String actionName = pluginTaskName('install', module.name, 'Module')
-            setup = configureInstallModuleTask(taskName(task, node, actionName), project, setup, node, module)
+            setup = configureInstallModuleTask(taskName(prefix, node, actionName), project, setup, node, module)
         }
 
         // install plugins
         for (Map.Entry plugin : node.config.plugins.entrySet()) {
             String actionName = pluginTaskName('install', plugin.getKey(), 'Plugin')
-            setup = configureInstallPluginTask(taskName(task, node, actionName), project, setup, node, plugin.getValue())
+            setup = configureInstallPluginTask(taskName(prefix, node, actionName), project, setup, node, plugin.getValue(), prefix)
         }
 
         // sets up any extra config files that need to be copied over to the ES instance;
         // its run after plugins have been installed, as the extra config files may belong to plugins
-        setup = configureExtraConfigFilesTask(taskName(task, node, 'extraConfig'), project, setup, node)
+        setup = configureExtraConfigFilesTask(taskName(prefix, node, 'extraConfig'), project, setup, node)
 
         // extra setup commands
         for (Map.Entry command : node.config.setupCommands.entrySet()) {
             // the first argument is the actual script name, relative to home
             Object[] args = command.getValue().clone()
             args[0] = new File(node.homeDir, args[0].toString())
-            setup = configureExecTask(taskName(task, node, command.getKey()), project, setup, node, args)
+            setup = configureExecTask(taskName(prefix, node, command.getKey()), project, setup, node, args)
         }
 
-        Task start = configureStartTask(taskName(task, node, 'start'), project, setup, node)
+        Task start = configureStartTask(taskName(prefix, node, 'start'), project, setup, node)
 
         if (node.config.daemonize) {
-            Task stop = configureStopTask(taskName(task, node, 'stop'), project, [], node)
+            Task stop = configureStopTask(taskName(prefix, node, 'stop'), project, [], node)
             // if we are running in the background, make sure to stop the server when the task completes
-            task.finalizedBy(stop)
+            runner.finalizedBy(stop)
             start.finalizedBy(stop)
+            for (Object dependency : config.dependencies) {
+                if (dependency instanceof Fixture) {
+                    def depStop = ((Fixture)dependency).stopTask
+                    runner.finalizedBy(depStop)
+                    start.finalizedBy(depStop)
+                }
+            }
         }
         return start
     }
@@ -276,8 +296,7 @@ class ClusterFormationTasks {
                 'path.repo'                    : "${node.sharedDir}/repo",
                 'path.shared_data'             : "${node.sharedDir}/",
                 // Define a node attribute so we can test that it exists
-                'node.attr.testattr'           : 'test',
-                'repositories.url.allowed_urls': 'http://snapshot.test*'
+                'node.attr.testattr'           : 'test'
         ]
         // we set min master nodes to the total number of nodes in the cluster and
         // basically skip initial state recovery to allow the cluster to form using a realistic master election
@@ -307,6 +326,33 @@ class ClusterFormationTasks {
         }
     }
 
+    /** Adds a task to create the keystore. */
+    static Task configureCreateKeystoreTask(String name, Project project, Task setup, NodeInfo node) {
+        if (node.config.keystoreSettings.isEmpty()) {
+            return setup
+        } else {
+            File esKeystoreUtil = Paths.get(node.homeDir.toString(), "bin/" + "elasticsearch-keystore").toFile()
+            return configureExecTask(name, project, setup, node, esKeystoreUtil, 'create')
+        }
+    }
+
+    /** Adds tasks to add settings to the keystore */
+    static Task configureAddKeystoreSettingTasks(String parent, Project project, Task setup, NodeInfo node) {
+        Map kvs = node.config.keystoreSettings
+        File esKeystoreUtil = Paths.get(node.homeDir.toString(), "bin/" + "elasticsearch-keystore").toFile()
+        Task parentTask = setup
+        for (Map.Entry entry in kvs) {
+            String key = entry.getKey()
+            String name = taskName(parent, node, 'addToKeystore#' + key)
+            Task t = configureExecTask(name, project, parentTask, node, esKeystoreUtil, 'add', key, '-x')
+            t.doFirst {
+                standardInput = new ByteArrayInputStream(entry.getValue().getBytes(StandardCharsets.UTF_8))
+            }
+            parentTask = t
+        }
+        return parentTask
+    }
+
     static Task configureExtraConfigFilesTask(String name, Project project, Task setup, NodeInfo node) {
         if (node.config.extraConfigFiles.isEmpty()) {
             return setup
@@ -343,7 +389,7 @@ class ClusterFormationTasks {
      * For each plugin, if the plugin has rest spec apis in its tests, those api files are also copied
      * to the test resources for this project.
      */
-    static Task configureCopyPluginsTask(String name, Project project, Task setup, NodeInfo node) {
+    static Task configureCopyPluginsTask(String name, Project project, Task setup, NodeInfo node, String prefix) {
         Copy copyPlugins = project.tasks.create(name: name, type: Copy, dependsOn: setup)
 
         List pluginFiles = []
@@ -351,7 +397,7 @@ class ClusterFormationTasks {
 
             Project pluginProject = plugin.getValue()
             verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject)
-            String configurationName = "_plugin_${pluginProject.path}"
+            String configurationName = "_plugin_${prefix}_${pluginProject.path}"
             Configuration configuration = project.configurations.findByName(configurationName)
             if (configuration == null) {
                 configuration = project.configurations.create(configurationName)
@@ -381,25 +427,27 @@ class ClusterFormationTasks {
     }
 
     /** Configures task to copy a plugin based on a zip file resolved using dependencies for an older version */
-    static Task configureCopyBwcPluginsTask(String name, Project project, Task setup, NodeInfo node) {
+    static Task configureCopyBwcPluginsTask(String name, Project project, Task setup, NodeInfo node, String prefix) {
+        Configuration bwcPlugins = project.configurations.getByName("${prefix}_elasticsearchBwcPlugins")
         for (Map.Entry plugin : node.config.plugins.entrySet()) {
             Project pluginProject = plugin.getValue()
             verifyProjectHasBuildPlugin(name, node.nodeVersion, project, pluginProject)
-            String configurationName = "_plugin_bwc_${pluginProject.path}"
+            String configurationName = "_plugin_bwc_${prefix}_${pluginProject.path}"
             Configuration configuration = project.configurations.findByName(configurationName)
             if (configuration == null) {
                 configuration = project.configurations.create(configurationName)
             }
 
             final String depName = pluginProject.extensions.findByName('esplugin').name
-            Dependency dep = project.configurations.elasticsearchBwcPlugins.dependencies.find {
+
+            Dependency dep = bwcPlugins.dependencies.find {
                 it.name == depName
             }
             configuration.dependencies.add(dep)
         }
 
         Copy copyPlugins = project.tasks.create(name: name, type: Copy, dependsOn: setup) {
-            from project.configurations.elasticsearchBwcPlugins
+            from bwcPlugins
             into node.pluginsTmpDir
         }
         return copyPlugins
@@ -419,12 +467,12 @@ class ClusterFormationTasks {
         return installModule
     }
 
-    static Task configureInstallPluginTask(String name, Project project, Task setup, NodeInfo node, Project plugin) {
+    static Task configureInstallPluginTask(String name, Project project, Task setup, NodeInfo node, Project plugin, String prefix) {
         final FileCollection pluginZip;
         if (node.nodeVersion != VersionProperties.elasticsearch) {
-            pluginZip = project.configurations.getByName("_plugin_bwc_${plugin.path}")
+            pluginZip = project.configurations.getByName("_plugin_bwc_${prefix}_${plugin.path}")
         } else {
-            pluginZip = project.configurations.getByName("_plugin_${plugin.path}")
+            pluginZip = project.configurations.getByName("_plugin_${prefix}_${plugin.path}")
         }
         // delay reading the file location until execution time by wrapping in a closure within a GString
         Object file = "${-> new File(node.pluginsTmpDir, pluginZip.singleFile.getName()).toURI().toURL().toString()}"
@@ -540,7 +588,7 @@ class ClusterFormationTasks {
                 anyNodeFailed |= node.failedMarker.exists()
             }
             if (ant.properties.containsKey("failed${name}".toString()) || anyNodeFailed) {
-                waitFailed(nodes, logger, 'Failed to start elasticsearch')
+                waitFailed(project, nodes, logger, 'Failed to start elasticsearch')
             }
 
             // go through each node checking the wait condition
@@ -557,14 +605,14 @@ class ClusterFormationTasks {
                 }
 
                 if (success == false) {
-                    waitFailed(nodes, logger, 'Elasticsearch cluster failed to pass wait condition')
+                    waitFailed(project, nodes, logger, 'Elasticsearch cluster failed to pass wait condition')
                 }
             }
         }
         return wait
     }
 
-    static void waitFailed(List nodes, Logger logger, String msg) {
+    static void waitFailed(Project project, List nodes, Logger logger, String msg) {
         for (NodeInfo node : nodes) {
             if (logger.isInfoEnabled() == false) {
                 // We already log the command at info level. No need to do it twice.
@@ -584,6 +632,17 @@ class ClusterFormationTasks {
                 logger.error("|\n|  [log]")
                 node.startLog.eachLine { line -> logger.error("|    ${line}") }
             }
+            if (node.pidFile.exists() && node.failedMarker.exists() == false &&
+                (node.httpPortsFile.exists() == false || node.transportPortsFile.exists() == false)) {
+                logger.error("|\n|  [jstack]")
+                String pid = node.pidFile.getText('UTF-8')
+                ByteArrayOutputStream output = new ByteArrayOutputStream()
+                project.exec {
+                    commandLine = ["${project.javaHome}/bin/jstack", pid]
+                    standardOutput = output
+                }
+                output.toString('UTF-8').eachLine { line -> logger.error("|    ${line}") }
+            }
             logger.error("|-----------------------------------------")
         }
         throw new GradleException(msg)
@@ -608,11 +667,11 @@ class ClusterFormationTasks {
             standardOutput = new ByteArrayOutputStream()
             doLast {
                 String out = standardOutput.toString()
-                if (out.contains("${pid} org.elasticsearch.bootstrap.Elasticsearch") == false) {
+                if (out.contains("${ext.pid} org.elasticsearch.bootstrap.Elasticsearch") == false) {
                     logger.error('jps -l')
                     logger.error(out)
-                    logger.error("pid file: ${pidFile}")
-                    logger.error("pid: ${pid}")
+                    logger.error("pid file: ${node.pidFile}")
+                    logger.error("pid: ${ext.pid}")
                     throw new GradleException("jps -l did not report any process with org.elasticsearch.bootstrap.Elasticsearch\n" +
                             "Did you run gradle clean? Maybe an old pid file is still lying around.")
                 } else {
@@ -649,11 +708,11 @@ class ClusterFormationTasks {
     }
 
     /** Returns a unique task name for this task and node configuration */
-    static String taskName(Task parentTask, NodeInfo node, String action) {
+    static String taskName(String prefix, NodeInfo node, String action) {
         if (node.config.numNodes > 1) {
-            return "${parentTask.name}#node${node.nodeNum}.${action}"
+            return "${prefix}#node${node.nodeNum}.${action}"
         } else {
-            return "${parentTask.name}#${action}"
+            return "${prefix}#${action}"
         }
     }
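
For orientation, a minimal sketch of how a build script might exercise the keystore tasks added above (`configureCreateKeystoreTask` and `configureAddKeystoreSettingTasks`). The `keystoreSetting` helper on `ClusterConfiguration` is assumed here; it is part of this change but not shown in this excerpt:

```groovy
// Hedged sketch: keystoreSetting is assumed to populate node.config.keystoreSettings,
// which the tasks above turn into `elasticsearch-keystore create` followed by one
// `elasticsearch-keystore add <key> -x` exec task per secure setting, chained off setup.
integTestCluster {
    keystoreSetting 's3.client.default.access_key', 'myaccesskey'   // illustrative key/value
    keystoreSetting 's3.client.default.secret_key', 'mysecretkey'
}
```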
 
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy
index 46b81624ba3fa..498a1627b3598 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/Fixture.groovy
@@ -16,272 +16,15 @@
  * specific language governing permissions and limitations
  * under the License.
  */
-
 package org.elasticsearch.gradle.test
 
-import org.apache.tools.ant.taskdefs.condition.Os
-import org.elasticsearch.gradle.AntTask
-import org.elasticsearch.gradle.LoggedExec
-import org.gradle.api.GradleException
-import org.gradle.api.Task
-import org.gradle.api.tasks.Exec
-import org.gradle.api.tasks.Input
-
 /**
- * A fixture for integration tests which runs in a separate process.
+ * Any object that can produce an accompanying stop task, meant to tear down
+ * a previously instantiated service.
  */
-public class Fixture extends AntTask {
-
-    /** The path to the executable that starts the fixture. */
-    @Input
-    String executable
-
-    private final List arguments = new ArrayList<>()
-
-    @Input
-    public void args(Object... args) {
-        arguments.addAll(args)
-    }
-
-    /**
-     * Environment variables for the fixture process. The value can be any object, which
-     * will have toString() called at execution time.
-     */
-    private final Map environment = new HashMap<>()
-
-    @Input
-    public void env(String key, Object value) {
-        environment.put(key, value)
-    }
-
-    /** A flag to indicate whether the command should be executed from a shell. */
-    @Input
-    boolean useShell = false
-
-    /**
-     * A flag to indicate whether the fixture should be run in the foreground, or spawned.
-     * It is protected so subclasses can override (eg RunTask).
-     */
-    protected boolean spawn = true
-
-    /**
-     * A closure to call before the fixture is considered ready. The closure is passed the fixture object,
-     * as well as a groovy AntBuilder, to enable running ant condition checks. The default wait
-     * condition is for http on the http port.
-     */
-    @Input
-    Closure waitCondition = { Fixture fixture, AntBuilder ant ->
-        File tmpFile = new File(fixture.cwd, 'wait.success')
-        ant.get(src: "http://${fixture.addressAndPort}",
-                dest: tmpFile.toString(),
-                ignoreerrors: true, // do not fail on error, so logging information can be flushed
-                retries: 10)
-        return tmpFile.exists()
-    }
+public interface Fixture {
 
     /** A task which will stop this fixture. This should be used as a finalizedBy for any tasks that use the fixture. */
-    public final Task stopTask
-
-    public Fixture() {
-        stopTask = createStopTask()
-        finalizedBy(stopTask)
-    }
-
-    @Override
-    protected void runAnt(AntBuilder ant) {
-        project.delete(baseDir) // reset everything
-        cwd.mkdirs()
-        final String realExecutable
-        final List realArgs = new ArrayList<>()
-        final Map realEnv = environment
-        // We need to choose which executable we are using. In shell mode, or when we
-        // are spawning and thus using the wrapper script, the executable is the shell.
-        if (useShell || spawn) {
-            if (Os.isFamily(Os.FAMILY_WINDOWS)) {
-                realExecutable = 'cmd'
-                realArgs.add('/C')
-                realArgs.add('"') // quote the entire command
-            } else {
-                realExecutable = 'sh'
-            }
-        } else {
-            realExecutable = executable
-            realArgs.addAll(arguments)
-        }
-        if (spawn) {
-            writeWrapperScript(executable)
-            realArgs.add(wrapperScript)
-            realArgs.addAll(arguments)
-        }
-        if (Os.isFamily(Os.FAMILY_WINDOWS) && (useShell || spawn)) {
-            realArgs.add('"')
-        }
-        commandString.eachLine { line -> logger.info(line) }
-
-        ant.exec(executable: realExecutable, spawn: spawn, dir: cwd, taskname: name) {
-            realEnv.each { key, value -> env(key: key, value: value) }
-            realArgs.each { arg(value: it) }
-        }
-
-        String failedProp = "failed${name}"
-        // first wait for resources, or the failure marker from the wrapper script
-        ant.waitfor(maxwait: '30', maxwaitunit: 'second', checkevery: '500', checkeveryunit: 'millisecond', timeoutproperty: failedProp) {
-            or {
-                resourceexists {
-                    file(file: failureMarker.toString())
-                }
-                and {
-                    resourceexists {
-                        file(file: pidFile.toString())
-                    }
-                    resourceexists {
-                        file(file: portsFile.toString())
-                    }
-                }
-            }
-        }
-
-        if (ant.project.getProperty(failedProp) || failureMarker.exists()) {
-            fail("Failed to start ${name}")
-        }
-
-        // the process is started (has a pid) and is bound to a network interface
-        // so now wait until the waitCondition has been met
-        // TODO: change this to a loop?
-        boolean success
-        try {
-            success = waitCondition(this, ant) == false
-        } catch (Exception e) {
-            String msg = "Wait condition caught exception for ${name}"
-            logger.error(msg, e)
-            fail(msg, e)
-        }
-        if (success == false) {
-            fail("Wait condition failed for ${name}")
-        }
-    }
-
-    /** Returns a debug string used to log information about how the fixture was run. */
-    protected String getCommandString() {
-        String commandString = "\n${name} configuration:\n"
-        commandString += "-----------------------------------------\n"
-        commandString += "  cwd: ${cwd}\n"
-        commandString += "  command: ${executable} ${arguments.join(' ')}\n"
-        commandString += '  environment:\n'
-        environment.each { k, v -> commandString += "    ${k}: ${v}\n" }
-        if (spawn) {
-            commandString += "\n  [${wrapperScript.name}]\n"
-            wrapperScript.eachLine('UTF-8', { line -> commandString += "    ${line}\n"})
-        }
-        return commandString
-    }
-
-    /**
-     * Writes a script to run the real executable, so that stdout/stderr can be captured.
-     * TODO: this could be removed if we do use our own ProcessBuilder and pump output from the process
-     */
-    private void writeWrapperScript(String executable) {
-        wrapperScript.parentFile.mkdirs()
-        String argsPasser = '"$@"'
-        String exitMarker = "; if [ \$? != 0 ]; then touch run.failed; fi"
-        if (Os.isFamily(Os.FAMILY_WINDOWS)) {
-            argsPasser = '%*'
-            exitMarker = "\r\n if \"%errorlevel%\" neq \"0\" ( type nul >> run.failed )"
-        }
-        wrapperScript.setText("\"${executable}\" ${argsPasser} > run.log 2>&1 ${exitMarker}", 'UTF-8')
-    }
-
-    /** Fail the build with the given message, and logging relevant info*/
-    private void fail(String msg, Exception... suppressed) {
-        if (logger.isInfoEnabled() == false) {
-            // We already log the command at info level. No need to do it twice.
-            commandString.eachLine { line -> logger.error(line) }
-        }
-        logger.error("${name} output:")
-        logger.error("-----------------------------------------")
-        logger.error("  failure marker exists: ${failureMarker.exists()}")
-        logger.error("  pid file exists: ${pidFile.exists()}")
-        logger.error("  ports file exists: ${portsFile.exists()}")
-        // also dump the log file for the startup script (which will include ES logging output to stdout)
-        if (runLog.exists()) {
-            logger.error("\n  [log]")
-            runLog.eachLine { line -> logger.error("    ${line}") }
-        }
-        logger.error("-----------------------------------------")
-        GradleException toThrow = new GradleException(msg)
-        for (Exception e : suppressed) {
-            toThrow.addSuppressed(e)
-        }
-        throw toThrow
-    }
-
-    /** Adds a task to kill an elasticsearch node with the given pidfile */
-    private Task createStopTask() {
-        final Fixture fixture = this
-        final Object pid = "${ -> fixture.pid }"
-        Exec stop = project.tasks.create(name: "${name}#stop", type: LoggedExec)
-        stop.onlyIf { fixture.pidFile.exists() }
-        stop.doFirst {
-            logger.info("Shutting down ${fixture.name} with pid ${pid}")
-        }
-        if (Os.isFamily(Os.FAMILY_WINDOWS)) {
-            stop.executable = 'Taskkill'
-            stop.args('/PID', pid, '/F')
-        } else {
-            stop.executable = 'kill'
-            stop.args('-9', pid)
-        }
-        stop.doLast {
-            project.delete(fixture.pidFile)
-        }
-        return stop
-    }
-
-    /**
-     * A path relative to the build dir that all configuration and runtime files
-     * will live in for this fixture
-     */
-    protected File getBaseDir() {
-        return new File(project.buildDir, "fixtures/${name}")
-    }
-
-    /** Returns the working directory for the process. Defaults to "cwd" inside baseDir. */
-    protected File getCwd() {
-        return new File(baseDir, 'cwd')
-    }
-
-    /** Returns the file the process writes its pid to. Defaults to "pid" inside baseDir. */
-    protected File getPidFile() {
-        return new File(baseDir, 'pid')
-    }
-
-    /** Reads the pid file and returns the process' pid */
-    public int getPid() {
-        return Integer.parseInt(pidFile.getText('UTF-8').trim())
-    }
-
-    /** Returns the file the process writes its bound ports to. Defaults to "ports" inside baseDir. */
-    protected File getPortsFile() {
-        return new File(baseDir, 'ports')
-    }
-
-    /** Returns an address and port suitable for a uri to connect to this node over http */
-    public String getAddressAndPort() {
-        return portsFile.readLines("UTF-8").get(0)
-    }
-
-    /** Returns a file that wraps around the actual command when {@code spawn == true}. */
-    protected File getWrapperScript() {
-        return new File(cwd, Os.isFamily(Os.FAMILY_WINDOWS) ? 'run.bat' : 'run')
-    }
-
-    /** Returns a file that the wrapper script writes when the command failed. */
-    protected File getFailureMarker() {
-        return new File(cwd, 'run.failed')
-    }
+    public Object getStopTask()
 
-    /** Returns a file that the wrapper script writes when the command failed. */
-    protected File getRunLog() {
-        return new File(cwd, 'run.log')
-    }
 }
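
A minimal sketch of what implementing the slimmed-down `Fixture` interface now looks like; the class and task names here are illustrative only:

```groovy
import org.elasticsearch.gradle.test.Fixture
import org.gradle.api.DefaultTask
import org.gradle.api.Task

// Illustrative only: any task can now act as a fixture by exposing a stop task
// that tears down whatever the fixture started.
class ExampleFixture extends DefaultTask implements Fixture {

    private final Task stopTask

    ExampleFixture() {
        // create the accompanying stop task and make sure it always runs after this one
        stopTask = project.tasks.create(name: "${name}#stop") {
            doLast { logger.lifecycle('tearing down the example fixture') }
        }
        finalizedBy(stopTask)
    }

    @Override
    Object getStopTask() {
        return stopTask
    }
}
```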
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/MessyTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/MessyTestPlugin.groovy
index 1cca2c5aa49c6..1c0aec1bc00f3 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/MessyTestPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/MessyTestPlugin.groovy
@@ -48,7 +48,7 @@ class MessyTestPlugin extends StandaloneTestPlugin {
     }
 
     private static addPluginResources(Project project, Project pluginProject) {
-        String outputDir = "generated-resources/${pluginProject.name}"
+        String outputDir = "${project.buildDir}/generated-resources/${pluginProject.name}"
         String taskName = ClusterFormationTasks.pluginTaskName("copy", pluginProject.name, "Metadata")
         Copy copyPluginMetadata = project.tasks.create(taskName, Copy.class)
         copyPluginMetadata.into(outputDir)
@@ -57,7 +57,7 @@ class MessyTestPlugin extends StandaloneTestPlugin {
         project.sourceSets.test.output.dir(outputDir, builtBy: taskName)
 
         // add each generated dir to the test classpath in IDEs
-        //project.eclipse.classpath.sourceSets = [project.sourceSets.test]
         project.idea.module.singleEntryLibraries= ['TEST': [project.file(outputDir)]]
+        // Eclipse doesn't need this because it gets the entire module as a dependency
     }
 }
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy
index a9473cc28d280..46542708420f1 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/NodeInfo.groovy
@@ -21,7 +21,6 @@ package org.elasticsearch.gradle.test
 import org.apache.tools.ant.taskdefs.condition.Os
 import org.gradle.api.InvalidUserDataException
 import org.gradle.api.Project
-import org.gradle.api.Task
 
 /**
  * A container for the files and configuration associated with a single node in a test cluster.
@@ -96,26 +95,23 @@ class NodeInfo {
     /** the version of elasticsearch that this node runs */
     String nodeVersion
 
-    /** Creates a node to run as part of a cluster for the given task */
-    NodeInfo(ClusterConfiguration config, int nodeNum, Project project, Task task, String nodeVersion, File sharedDir) {
+    /** Holds node configuration for part of a test cluster. */
+    NodeInfo(ClusterConfiguration config, int nodeNum, Project project, String prefix, String nodeVersion, File sharedDir) {
         this.config = config
         this.nodeNum = nodeNum
         this.sharedDir = sharedDir
         if (config.clusterName != null) {
             clusterName = config.clusterName
         } else {
-            clusterName = "${task.path.replace(':', '_').substring(1)}"
+            clusterName = project.path.replace(':', '_').substring(1) + '_' + prefix
         }
-        baseDir = new File(project.buildDir, "cluster/${task.name} node${nodeNum}")
+        baseDir = new File(project.buildDir, "cluster/${prefix} node${nodeNum}")
         pidFile = new File(baseDir, 'es.pid')
         this.nodeVersion = nodeVersion
         homeDir = homeDir(baseDir, config.distribution, nodeVersion)
         confDir = confDir(baseDir, config.distribution, nodeVersion)
         if (config.dataDir != null) {
-            if (config.numNodes != 1) {
-                throw new IllegalArgumentException("Cannot set data dir for integ test with more than one node")
-            }
-            dataDir = config.dataDir
+            dataDir = "${config.dataDir(nodeNum)}"
         } else {
             dataDir = new File(homeDir, "data")
         }
@@ -151,6 +147,9 @@ class NodeInfo {
         args.addAll("-E", "node.portsfile=true")
         String collectedSystemProperties = config.systemProperties.collect { key, value -> "-D${key}=${value}" }.join(" ")
         String esJavaOpts = config.jvmArgs.isEmpty() ? collectedSystemProperties : collectedSystemProperties + " " + config.jvmArgs
+        if (Boolean.parseBoolean(System.getProperty('tests.asserts', 'true'))) {
+            esJavaOpts += " -ea -esa"
+        }
         env.put('ES_JAVA_OPTS', esJavaOpts)
         for (Map.Entry property : System.properties.entrySet()) {
             if (property.key.startsWith('tests.es.')) {
@@ -159,7 +158,10 @@ class NodeInfo {
             }
         }
         env.put('ES_JVM_OPTIONS', new File(confDir, 'jvm.options'))
-        args.addAll("-E", "path.conf=${confDir}", "-E", "path.data=${-> dataDir.toString()}")
+        args.addAll("-E", "path.conf=${confDir}")
+        if (!System.properties.containsKey("tests.es.path.data")) {
+            args.addAll("-E", "path.data=${-> dataDir.toString()}")
+        }
         if (Os.isFamily(Os.FAMILY_WINDOWS)) {
             args.add('"') // end the entire command, quoted
         }
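
The `dataDir` handling above now calls `config.dataDir(nodeNum)`, which lifts the old one-node restriction on custom data directories. A hedged sketch of what a cluster configuration might look like, assuming `dataDir` is a closure invoked with the node number:

```groovy
// Assumption: ClusterConfiguration.dataDir is now a closure taking the node number,
// so each node in a multi-node test cluster can get its own data directory.
integTestCluster {
    numNodes = 2
    dataDir = { int nodeNumber -> "${project.buildDir}/custom-data/node${nodeNumber}" }
}
```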
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy
index 51bccb4fe7580..6494e500f33ab 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestIntegTestTask.groovy
@@ -20,19 +20,28 @@ package org.elasticsearch.gradle.test
 
 import com.carrotsearch.gradle.junit4.RandomizedTestingTask
 import org.elasticsearch.gradle.BuildPlugin
+import org.gradle.api.DefaultTask
 import org.gradle.api.Task
+import org.gradle.api.execution.TaskExecutionAdapter
 import org.gradle.api.internal.tasks.options.Option
 import org.gradle.api.plugins.JavaBasePlugin
 import org.gradle.api.tasks.Input
-import org.gradle.util.ConfigureUtil
+import org.gradle.api.tasks.TaskState
+
+import java.nio.charset.StandardCharsets
+import java.nio.file.Files
+import java.util.stream.Stream
 
 /**
- * Runs integration tests, but first starts an ES cluster,
- * and passes the ES cluster info as parameters to the tests.
+ * A wrapper task around setting up a cluster and running rest tests.
  */
-public class RestIntegTestTask extends RandomizedTestingTask {
+public class RestIntegTestTask extends DefaultTask {
+
+    protected ClusterConfiguration clusterConfig
+
+    protected RandomizedTestingTask runner
 
-    ClusterConfiguration clusterConfig
+    protected Task clusterInit
 
     /** Info about nodes in the integ test cluster. Note this is *not* available until runtime. */
     List nodes
@@ -42,37 +51,62 @@ public class RestIntegTestTask extends RandomizedTestingTask {
     boolean includePackaged = false
 
     public RestIntegTestTask() {
-        description = 'Runs rest tests against an elasticsearch cluster.'
-        group = JavaBasePlugin.VERIFICATION_GROUP
-        dependsOn(project.testClasses)
-        classpath = project.sourceSets.test.runtimeClasspath
-        testClassesDir = project.sourceSets.test.output.classesDir
-        clusterConfig = new ClusterConfiguration(project)
+        runner = project.tasks.create("${name}Runner", RandomizedTestingTask.class)
+        super.dependsOn(runner)
+        clusterInit = project.tasks.create(name: "${name}Cluster#init", dependsOn: project.testClasses)
+        runner.dependsOn(clusterInit)
+        runner.classpath = project.sourceSets.test.runtimeClasspath
+        runner.testClassesDir = project.sourceSets.test.output.classesDir
+        clusterConfig = project.extensions.create("${name}Cluster", ClusterConfiguration.class, project)
 
         // start with the common test configuration
-        configure(BuildPlugin.commonTestConfig(project))
+        runner.configure(BuildPlugin.commonTestConfig(project))
         // override/add more for rest tests
-        parallelism = '1'
-        include('**/*IT.class')
-        systemProperty('tests.rest.load_packaged', 'false')
+        runner.parallelism = '1'
+        runner.include('**/*IT.class')
+        runner.systemProperty('tests.rest.load_packaged', 'false')
         // we pass all nodes to the rest cluster to allow the clients to round-robin between them
         // this is more realistic than just talking to a single node
-        systemProperty('tests.rest.cluster', "${-> nodes.collect{it.httpUri()}.join(",")}")
-        systemProperty('tests.config.dir', "${-> nodes[0].confDir}")
+        runner.systemProperty('tests.rest.cluster', "${-> nodes.collect{it.httpUri()}.join(",")}")
+        runner.systemProperty('tests.config.dir', "${-> nodes[0].confDir}")
         // TODO: our "client" qa tests currently use the rest-test plugin. instead they should have their own plugin
         // that sets up the test cluster and passes this transport uri instead of http uri. Until then, we pass
         // both as separate sysprops
-        systemProperty('tests.cluster', "${-> nodes[0].transportUri()}")
+        runner.systemProperty('tests.cluster', "${-> nodes[0].transportUri()}")
+
+        // dump errors and warnings from cluster log on failure
+        TaskExecutionAdapter logDumpListener = new TaskExecutionAdapter() {
+            @Override
+            void afterExecute(Task task, TaskState state) {
+                if (state.failure != null) {
+                    for (NodeInfo nodeInfo : nodes) {
+                        printLogExcerpt(nodeInfo)
+                    }
+                }
+            }
+        }
+        runner.doFirst {
+            project.gradle.addListener(logDumpListener)
+        }
+        runner.doLast {
+            project.gradle.removeListener(logDumpListener)
+        }
 
         // copy the rest spec/tests into the test resources
         RestSpecHack.configureDependencies(project)
         project.afterEvaluate {
-            dependsOn(RestSpecHack.configureTask(project, includePackaged))
+            runner.dependsOn(RestSpecHack.configureTask(project, includePackaged))
         }
         // this must run after all projects have been configured, so we know any project
         // references can be accessed as a fully configured
         project.gradle.projectsEvaluated {
-            nodes = ClusterFormationTasks.setup(project, this, clusterConfig)
+            if (enabled == false) {
+                runner.enabled = false
+                clusterInit.enabled = false
+                return // no need to add cluster formation tasks if the task won't run!
+            }
+            nodes = ClusterFormationTasks.setup(project, "${name}Cluster", runner, clusterConfig)
+            super.dependsOn(runner.finalizedBy)
         }
     }
 
@@ -84,25 +118,16 @@ public class RestIntegTestTask extends RandomizedTestingTask {
         clusterConfig.debug = enabled;
     }
 
-    @Input
-    public void cluster(Closure closure) {
-        ConfigureUtil.configure(closure, clusterConfig)
-    }
-
-    public ClusterConfiguration getCluster() {
-        return clusterConfig
-    }
-
     public List getNodes() {
         return nodes
     }
 
     @Override
     public Task dependsOn(Object... dependencies) {
-        super.dependsOn(dependencies)
+        runner.dependsOn(dependencies)
         for (Object dependency : dependencies) {
             if (dependency instanceof Fixture) {
-                finalizedBy(((Fixture)dependency).stopTask)
+                runner.finalizedBy(((Fixture)dependency).getStopTask())
             }
         }
         return this
@@ -110,11 +135,54 @@ public class RestIntegTestTask extends RandomizedTestingTask {
 
     @Override
     public void setDependsOn(Iterable dependencies) {
-        super.setDependsOn(dependencies)
+        runner.setDependsOn(dependencies)
         for (Object dependency : dependencies) {
             if (dependency instanceof Fixture) {
-                finalizedBy(((Fixture)dependency).stopTask)
+                runner.finalizedBy(((Fixture)dependency).getStopTask())
             }
         }
     }
+
+    @Override
+    public Task mustRunAfter(Object... tasks) {
+        clusterInit.mustRunAfter(tasks)
+    }
+
+    /** Print out an excerpt of the log from the given node. */
+    protected static void printLogExcerpt(NodeInfo nodeInfo) {
+        File logFile = new File(nodeInfo.homeDir, "logs/${nodeInfo.clusterName}.log")
+        println("\nCluster ${nodeInfo.clusterName} - node ${nodeInfo.nodeNum} log excerpt:")
+        println("(full log at ${logFile})")
+        println('-----------------------------------------')
+        Stream stream = Files.lines(logFile.toPath(), StandardCharsets.UTF_8)
+        try {
+            boolean inStartup = true
+            boolean inExcerpt = false
+            int linesSkipped = 0
+            for (String line : stream) {
+                if (line.startsWith("[")) {
+                    inExcerpt = false // clear with the next log message
+                }
+                if (line =~ /(\[WARN\])|(\[ERROR\])/) {
+                    inExcerpt = true // show warnings and errors
+                }
+                if (inStartup || inExcerpt) {
+                    if (linesSkipped != 0) {
+                        println("... SKIPPED ${linesSkipped} LINES ...")
+                    }
+                    println(line)
+                    linesSkipped = 0
+                } else {
+                    ++linesSkipped
+                }
+                if (line =~ /recovered \[\d+\] indices into cluster_state/) {
+                    inStartup = false
+                }
+            }
+        } finally {
+            stream.close()
+        }
+        println('=========================================')
+
+    }
 }
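
With the task split above, a `RestIntegTestTask` named `integTest` now creates an `integTestCluster` project extension and an `integTestRunner` `RandomizedTestingTask`, replacing the old `integTest.cluster { ... }` block. A sketch of what a consuming build script might look like; the property values are illustrative:

```groovy
// Cluster shape is configured on the extension, not the task itself.
integTestCluster {
    distribution = 'zip'
    numNodes = 2
    setting 'node.attr.testattr', 'test'   // illustrative setting (assumed ClusterConfiguration.setting helper)
}

// Test-runner options go on the generated runner task.
integTestRunner {
    systemProperty 'tests.rest.suite', 'my_suite'   // illustrative system property
}
```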
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy
index 176b02cf9b0de..da1462412812a 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RestTestPlugin.groovy
@@ -22,6 +22,7 @@ import org.elasticsearch.gradle.BuildPlugin
 import org.gradle.api.InvalidUserDataException
 import org.gradle.api.Plugin
 import org.gradle.api.Project
+import org.gradle.api.plugins.JavaBasePlugin
 
 /**
  * Adds support for starting an Elasticsearch cluster before running integration
@@ -39,11 +40,13 @@ public class RestTestPlugin implements Plugin {
         if (false == REQUIRED_PLUGINS.any {project.pluginManager.hasPlugin(it)}) {
             throw new InvalidUserDataException('elasticsearch.rest-test '
                 + 'requires either elasticsearch.build or '
-                + 'elasticsearch.standalone-test')
+                + 'elasticsearch.standalone-rest-test')
         }
 
         RestIntegTestTask integTest = project.tasks.create('integTest', RestIntegTestTask.class)
-        integTest.cluster.distribution = 'zip' // rest tests should run with the real zip
+        integTest.description = 'Runs rest tests against an elasticsearch cluster.'
+        integTest.group = JavaBasePlugin.VERIFICATION_GROUP
+        integTest.clusterConfig.distribution = 'zip' // rest tests should run with the real zip
         integTest.mustRunAfter(project.precommit)
         project.check.dependsOn(integTest)
     }
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy
index a71dc59dbf914..a88152d7865ff 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/RunTask.groovy
@@ -18,7 +18,7 @@ public class RunTask extends DefaultTask {
         clusterConfig.daemonize = false
         clusterConfig.distribution = 'zip'
         project.afterEvaluate {
-            ClusterFormationTasks.setup(project, this, clusterConfig)
+            ClusterFormationTasks.setup(project, name, this, clusterConfig)
         }
     }
 
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy
index 6e01767101755..c48dc890ab080 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/StandaloneRestTestPlugin.groovy
@@ -40,9 +40,9 @@ public class StandaloneRestTestPlugin implements Plugin {
     @Override
     public void apply(Project project) {
         if (project.pluginManager.hasPlugin('elasticsearch.build')) {
-            throw new InvalidUserDataException('elasticsearch.standalone-test, '
-                + 'elasticsearch.standalone-test, and elasticsearch.build are '
-                + 'mutually exclusive')
+            throw new InvalidUserDataException('elasticsearch.standalone-test '
+                + 'elasticsearch.standalone-rest-test, and elasticsearch.build '
+                + 'are mutually exclusive')
         }
         project.pluginManager.apply(JavaBasePlugin)
         project.pluginManager.apply(RandomizedTestingPlugin)
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/TestWithDependenciesPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/TestWithDependenciesPlugin.groovy
new file mode 100644
index 0000000000000..7e370fd69e2d6
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/TestWithDependenciesPlugin.groovy
@@ -0,0 +1,66 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.gradle.test
+
+import org.elasticsearch.gradle.plugin.PluginBuildPlugin
+import org.gradle.api.Plugin
+import org.gradle.api.Project
+import org.gradle.api.artifacts.Dependency
+import org.gradle.api.artifacts.ProjectDependency
+import org.gradle.api.tasks.Copy
+
+/**
+ * A plugin to run tests that depend on other plugins or modules.
+ *
+ * This plugin will add the plugin-metadata and properties files for each
+ * dependency to the test source set.
+ */
+class TestWithDependenciesPlugin implements Plugin {
+
+    @Override
+    void apply(Project project) {
+        if (project.isEclipse) {
+            /* The changes this plugin makes both break and aren't needed by
+             * Eclipse. This is because Eclipse flattens main and test
+             * dependencies into a single dependency. Because Eclipse is
+             * "special".... */
+            return
+        }
+
+        project.configurations.testCompile.dependencies.all { Dependency dep ->
+            // this closure is run every time a compile dependency is added
+            if (dep instanceof ProjectDependency && dep.dependencyProject.plugins.hasPlugin(PluginBuildPlugin)) {
+                project.gradle.projectsEvaluated {
+                    addPluginResources(project, dep.dependencyProject)
+                }
+            }
+        }
+    }
+
+    private static addPluginResources(Project project, Project pluginProject) {
+        String outputDir = "${project.buildDir}/generated-resources/${pluginProject.name}"
+        String taskName = ClusterFormationTasks.pluginTaskName("copy", pluginProject.name, "Metadata")
+        Copy copyPluginMetadata = project.tasks.create(taskName, Copy.class)
+        copyPluginMetadata.into(outputDir)
+        copyPluginMetadata.from(pluginProject.tasks.pluginProperties)
+        copyPluginMetadata.from(pluginProject.file('src/main/plugin-metadata'))
+        project.sourceSets.test.output.dir(outputDir, builtBy: taskName)
+    }
+}
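
A hedged sketch of how a module might consume the new plugin; the plugin id and project path used here are assumptions, since the plugin registration is not shown in this excerpt:

```groovy
// Assumed plugin id; the real id comes from the plugin's properties file, not shown here.
apply plugin: 'elasticsearch.test-with-dependencies'

dependencies {
    // A project dependency on another ES plugin: TestWithDependenciesPlugin copies that
    // project's plugin properties and plugin-metadata into this project's test resources.
    testCompile project(':plugins:analysis-icu')   // illustrative project path
}
```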
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy
new file mode 100644
index 0000000000000..fa08a8f9c6667
--- /dev/null
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/test/VagrantFixture.groovy
@@ -0,0 +1,54 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.elasticsearch.gradle.test
+
+import org.elasticsearch.gradle.vagrant.VagrantCommandTask
+import org.gradle.api.Task
+
+/**
+ * A fixture for integration tests which runs in a virtual machine launched by Vagrant.
+ */
+class VagrantFixture extends VagrantCommandTask implements Fixture {
+
+    private VagrantCommandTask stopTask
+
+    public VagrantFixture() {
+        this.stopTask = project.tasks.create(name: "${name}#stop", type: VagrantCommandTask) {
+            command 'halt'
+        }
+        finalizedBy this.stopTask
+    }
+
+    @Override
+    void setBoxName(String boxName) {
+        super.setBoxName(boxName)
+        this.stopTask.setBoxName(boxName)
+    }
+
+    @Override
+    void setEnvironmentVars(Map environmentVars) {
+        super.setEnvironmentVars(environmentVars)
+        this.stopTask.setEnvironmentVars(environmentVars)
+    }
+
+    @Override
+    public Task getStopTask() {
+        return this.stopTask
+    }
+}
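
A sketch of wiring the new `VagrantFixture` into an integration test; the box name, arguments, and environment are illustrative:

```groovy
// Illustrative: brings a Vagrant box up before the integ tests run. Because
// RestIntegTestTask finalizes its runner with each Fixture's stop task, the
// generated "#stop" task halts the box once the tests finish.
task exampleFixture(type: org.elasticsearch.gradle.test.VagrantFixture) {
    command 'up'
    args '--provision', '--provider', 'virtualbox'
    boxName 'example-box'
    environmentVars ['VAGRANT_CWD': "${project.rootDir}"]
}

integTest {
    dependsOn exampleFixture
}
```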
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy
index 65b90c4d9a0cd..110f2fc7e8461 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/BatsOverVagrantTask.groovy
@@ -27,12 +27,15 @@ import org.gradle.api.tasks.Input
 public class BatsOverVagrantTask extends VagrantCommandTask {
 
     @Input
-    String command
+    String remoteCommand
 
     BatsOverVagrantTask() {
-        project.afterEvaluate {
-            args 'ssh', boxName, '--command', command
-        }
+        command = 'ssh'
+    }
+
+    void setRemoteCommand(String remoteCommand) {
+        this.remoteCommand = Objects.requireNonNull(remoteCommand)
+        setArgs(['--command', remoteCommand])
     }
 
     @Override
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy
index 3f980c57a49a6..e15759a1fe588 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/TapLoggerOutputStream.groovy
@@ -19,11 +19,9 @@
 package org.elasticsearch.gradle.vagrant
 
 import com.carrotsearch.gradle.junit4.LoggingOutputStream
-import groovy.transform.PackageScope
 import org.gradle.api.GradleScriptException
 import org.gradle.api.logging.Logger
-import org.gradle.logging.ProgressLogger
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLogger
 
 import java.util.regex.Matcher
 
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy
index ecba08d7d4cb9..aab120e8d049a 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantCommandTask.groovy
@@ -21,9 +21,15 @@ package org.elasticsearch.gradle.vagrant
 import org.apache.commons.io.output.TeeOutputStream
 import org.elasticsearch.gradle.LoggedExec
 import org.gradle.api.tasks.Input
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.api.tasks.Optional
+import org.gradle.api.tasks.TaskAction
+import org.gradle.internal.logging.progress.ProgressLoggerFactory
 
 import javax.inject.Inject
+import java.util.concurrent.CountDownLatch
+import java.util.concurrent.locks.Lock
+import java.util.concurrent.locks.ReadWriteLock
+import java.util.concurrent.locks.ReentrantLock
 
 /**
  * Runs a vagrant command. Pretty much like Exec task but with a nicer output
@@ -31,6 +37,12 @@ import javax.inject.Inject
  */
 public class VagrantCommandTask extends LoggedExec {
 
+    @Input
+    String command
+
+    @Input @Optional
+    String subcommand
+
     @Input
     String boxName
 
@@ -40,15 +52,36 @@ public class VagrantCommandTask extends LoggedExec {
     public VagrantCommandTask() {
         executable = 'vagrant'
 
+        // We use afterEvaluate here to slot in logic that captures configuration and modifies
+        // the command line right before execution. The actual work happens in doFirst rather
+        // than directly in afterEvaluate because doing everything at configuration time would
+        // restrict how subclasses can extend this task: doFirst runs at execution time, where
+        // a subclass can still override or extend the logic.
         project.afterEvaluate {
-            // It'd be nice if --machine-readable were, well, nice
-            standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream())
-            if (environmentVars != null) {
-                environment environmentVars
+            doFirst {
+                if (environmentVars != null) {
+                    environment environmentVars
+                }
+
+                // Build our command line for vagrant
+                def vagrantCommand = [executable, command]
+                if (subcommand != null) {
+                    vagrantCommand = vagrantCommand + subcommand
+                }
+                commandLine([*vagrantCommand, boxName, *args])
+
+                // It'd be nice if --machine-readable were, well, nice
+                standardOutput = new TeeOutputStream(standardOutput, createLoggerOutputStream())
             }
         }
     }
 
+    @Inject
+    ProgressLoggerFactory getProgressLoggerFactory() {
+        throw new UnsupportedOperationException()
+    }
+
     protected OutputStream createLoggerOutputStream() {
         return new VagrantLoggerOutputStream(
             command: commandLine.join(' '),
@@ -57,9 +90,4 @@ public class VagrantCommandTask extends LoggedExec {
               stuff starts with ==> $box */
             squashedPrefix: "==> $boxName: ")
     }
-
-    @Inject
-    ProgressLoggerFactory getProgressLoggerFactory() {
-        throw new UnsupportedOperationException();
-    }
 }
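
For reference, a minimal sketch of declaring one of these tasks with the new `command`/`subcommand` split, the same pattern the test plugin below uses for `vagrant box update`; the box name and environment are illustrative:

```groovy
// Equivalent of `vagrant box update <box>`: the command line is assembled in doFirst
// as [executable, command, subcommand, boxName, *args].
task updateExampleBox(type: org.elasticsearch.gradle.vagrant.VagrantCommandTask) {
    command 'box'
    subcommand 'update'
    boxName 'ubuntu-1604'                               // one of the boxes listed in VagrantTestPlugin
    environmentVars ['VAGRANT_CWD': "${project.rootDir}"]
}
```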
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy
index 331a638b5cade..e899c0171298b 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantLoggerOutputStream.groovy
@@ -19,9 +19,7 @@
 package org.elasticsearch.gradle.vagrant
 
 import com.carrotsearch.gradle.junit4.LoggingOutputStream
-import org.gradle.api.logging.Logger
-import org.gradle.logging.ProgressLogger
-import org.gradle.logging.ProgressLoggerFactory
+import org.gradle.internal.logging.progress.ProgressLogger
 
 /**
  * Adapts an OutputStream being written to by vagrant into a ProcessLogger. It
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy
index f16913d5be64a..e6e7fca62f97e 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantPropertiesExtension.groovy
@@ -25,12 +25,6 @@ class VagrantPropertiesExtension {
     @Input
     List boxes
 
-    @Input
-    Long testSeed
-
-    @Input
-    String formattedTestSeed
-
     @Input
     String upgradeFromVersion
 
diff --git a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy
index a5bb054a8b646..c8d77ea2fbfe5 100644
--- a/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy
+++ b/buildSrc/src/main/groovy/org/elasticsearch/gradle/vagrant/VagrantTestPlugin.groovy
@@ -1,14 +1,15 @@
 package org.elasticsearch.gradle.vagrant
 
+import com.carrotsearch.gradle.junit4.RandomizedTestingPlugin
 import org.elasticsearch.gradle.FileContentsTask
-import org.gradle.BuildAdapter
-import org.gradle.BuildResult
 import org.gradle.api.*
 import org.gradle.api.artifacts.dsl.RepositoryHandler
+import org.gradle.api.execution.TaskExecutionAdapter
 import org.gradle.api.internal.artifacts.dependencies.DefaultProjectDependency
 import org.gradle.api.tasks.Copy
 import org.gradle.api.tasks.Delete
 import org.gradle.api.tasks.Exec
+import org.gradle.api.tasks.TaskState
 
 class VagrantTestPlugin implements Plugin {
 
@@ -17,12 +18,11 @@ class VagrantTestPlugin implements Plugin {
             'centos-6',
             'centos-7',
             'debian-8',
-            'fedora-24',
+            'fedora-25',
             'oel-6',
             'oel-7',
-            'opensuse-13',
+            'opensuse-42',
             'sles-12',
-            'ubuntu-1204',
             'ubuntu-1404',
             'ubuntu-1604'
     ]
@@ -41,6 +41,7 @@ class VagrantTestPlugin implements Plugin {
 
     private static final BATS = 'bats'
     private static final String BATS_TEST_COMMAND ="cd \$BATS_ARCHIVES && sudo bats --tap \$BATS_TESTS/*.$BATS"
+    private static final String PLATFORM_TEST_COMMAND ="rm -rf ~/elasticsearch && rsync -r /elasticsearch/ ~/elasticsearch && cd ~/elasticsearch && \$GRADLE_HOME/bin/gradle test integTest"
 
     @Override
     void apply(Project project) {
@@ -82,29 +83,6 @@ class VagrantTestPlugin implements Plugin {
         }
     }
 
-    private static Set listVersions(Project project) {
-        Node xml
-        new URL('https://repo1.maven.org/maven2/org/elasticsearch/elasticsearch/maven-metadata.xml').openStream().withStream { s ->
-            xml = new XmlParser().parse(s)
-        }
-        Set versions = new TreeSet<>(xml.versioning.versions.version.collect { it.text() }.findAll { it ==~ /[5]\.\d\.\d/ })
-        if (versions.isEmpty() == false) {
-            return versions;
-        }
-
-        // If no version is found, we run the tests with the current version
-        return Collections.singleton(project.version);
-    }
-
-    private static File getVersionsFile(Project project) {
-        File versions = new File(project.projectDir, 'versions');
-        if (versions.exists() == false) {
-            // Use the elasticsearch's versions file from project :qa:vagrant
-            versions = project.project(":qa:vagrant").file('versions')
-        }
-        return versions
-    }
-
     private static void configureBatsRepositories(Project project) {
         RepositoryHandler repos = project.repositories
 
@@ -123,33 +101,13 @@ class VagrantTestPlugin implements Plugin {
     private static void createBatsConfiguration(Project project) {
         project.configurations.create(BATS)
 
-        Long seed
-        String formattedSeed = null
-        String[] upgradeFromVersions
-
-        String maybeTestsSeed = System.getProperty("tests.seed", null);
-        if (maybeTestsSeed != null) {
-            List seeds = maybeTestsSeed.tokenize(':')
-            if (seeds.size() != 0) {
-                String masterSeed = seeds.get(0)
-                seed = new BigInteger(masterSeed, 16).longValue()
-                formattedSeed = maybeTestsSeed
-            }
-        }
-        if (formattedSeed == null) {
-            seed = new Random().nextLong()
-            formattedSeed = String.format("%016X", seed)
-        }
-
-        String maybeUpdradeFromVersions = System.getProperty("tests.packaging.upgrade.from.versions", null)
-        if (maybeUpdradeFromVersions != null) {
-            upgradeFromVersions = maybeUpdradeFromVersions.split(",")
-        } else {
-            upgradeFromVersions = getVersionsFile(project)
+        String upgradeFromVersion = System.getProperty("tests.packaging.upgradeVersion");
+        if (upgradeFromVersion == null) {
+            String firstPartOfSeed = project.rootProject.testSeed.tokenize(':').get(0)
+            final long seed = Long.parseUnsignedLong(firstPartOfSeed, 16)
+            upgradeFromVersion = project.indexCompatVersions[new Random(seed).nextInt(project.indexCompatVersions.size())]
         }
 
-        String upgradeFromVersion = upgradeFromVersions[new Random(seed).nextInt(upgradeFromVersions.length)]
-
         DISTRIBUTION_ARCHIVES.each {
             // Adds a dependency for the current version
             project.dependencies.add(BATS, project.dependencies.project(path: ":distribution:${it}", configuration: 'archives'))
@@ -160,10 +118,7 @@ class VagrantTestPlugin implements Plugin {
             project.dependencies.add(BATS, "org.elasticsearch.distribution.${it}:elasticsearch:${upgradeFromVersion}@${it}")
         }
 
-        project.extensions.esvagrant.testSeed = seed
-        project.extensions.esvagrant.formattedTestSeed = formattedSeed
         project.extensions.esvagrant.upgradeFromVersion = upgradeFromVersion
-        project.extensions.esvagrant.upgradeFromVersions = upgradeFromVersions
     }
 
     private static void createCleanTask(Project project) {
@@ -193,7 +148,6 @@ class VagrantTestPlugin implements Plugin {
 
         Task createBatsDirsTask = project.tasks.create('createBatsDirs')
         createBatsDirsTask.outputs.dir batsDir
-        createBatsDirsTask.dependsOn project.tasks.vagrantVerifyVersions
         createBatsDirsTask.doLast {
             batsDir.mkdirs()
         }
@@ -223,7 +177,7 @@ class VagrantTestPlugin implements Plugin {
         // Now we iterate over dependencies of the bats configuration. When a project dependency is found,
         // we bring back its own archives, test files or test utils.
         project.afterEvaluate {
-            project.configurations.bats.dependencies.findAll {it.configuration == BATS }.each { d ->
+            project.configurations.bats.dependencies.findAll {it.targetConfiguration == BATS }.each { d ->
                 if (d instanceof DefaultProjectDependency) {
                     DefaultProjectDependency externalBatsDependency = (DefaultProjectDependency) d
                     Project externalBatsProject = externalBatsDependency.dependencyProject
@@ -254,51 +208,9 @@ class VagrantTestPlugin implements Plugin {
             contents project.extensions.esvagrant.upgradeFromVersion
         }
 
-        Task vagrantSetUpTask = project.tasks.create('vagrantSetUp')
+        Task vagrantSetUpTask = project.tasks.create('setupBats')
         vagrantSetUpTask.dependsOn 'vagrantCheckVersion'
         vagrantSetUpTask.dependsOn copyBatsTests, copyBatsUtils, copyBatsArchives, createVersionFile, createUpgradeFromFile
-        vagrantSetUpTask.doFirst {
-            project.gradle.addBuildListener new BuildAdapter() {
-                @Override
-                void buildFinished(BuildResult result) {
-                    if (result.failure) {
-                        println "Reproduce with: gradle packagingTest "
-                        +"-Pvagrant.boxes=${project.extensions.esvagrant.boxes} "
-                        + "-Dtests.seed=${project.extensions.esvagrant.formattedSeed} "
-                        + "-Dtests.packaging.upgrade.from.versions=${project.extensions.esvagrant.upgradeFromVersions.join(",")}"
-                    }
-                }
-            }
-        }
-    }
-
-    private static void createUpdateVersionsTask(Project project) {
-        project.tasks.create('vagrantUpdateVersions') {
-            description 'Update file containing options for the\n    "starting" version in the "upgrade from" packaging tests.'
-            group 'Verification'
-            doLast {
-                File versions = getVersionsFile(project)
-                versions.text = listVersions(project).join('\n') + '\n'
-            }
-        }
-    }
-
-    private static void createVerifyVersionsTask(Project project) {
-        project.tasks.create('vagrantVerifyVersions') {
-            description 'Update file containing options for the\n    "starting" version in the "upgrade from" packaging tests.'
-            group 'Verification'
-            doLast {
-                String maybeUpdateFromVersions = System.getProperty("tests.packaging.upgrade.from.versions", null)
-                if (maybeUpdateFromVersions == null) {
-                    Set versions = listVersions(project)
-                    Set actualVersions = new TreeSet<>(project.extensions.esvagrant.upgradeFromVersions)
-                    if (!versions.equals(actualVersions)) {
-                        throw new GradleException("out-of-date versions " + actualVersions +
-                                ", expected " + versions + "; run gradle vagrantUpdateVersions")
-                    }
-                }
-            }
-        }
     }
 
     private static void createCheckVagrantVersionTask(Project project) {
@@ -350,16 +262,26 @@ class VagrantTestPlugin implements Plugin {
         }
     }
 
+    private static void createPlatformTestTask(Project project) {
+        project.tasks.create('platformTest') {
+            group 'Verification'
+            description "Test unit and integ tests on different platforms using vagrant.\n" +
+                    "    Specify the vagrant boxes to test using the gradle property 'vagrant.boxes'.\n" +
+                    "    'all' can be used to test all available boxes. The available boxes are: \n" +
+                    "    ${BOXES}"
+            dependsOn 'vagrantCheckVersion'
+        }
+    }
+
     private static void createVagrantTasks(Project project) {
         createCleanTask(project)
         createStopTask(project)
         createSmokeTestTask(project)
-        createUpdateVersionsTask(project)
-        createVerifyVersionsTask(project)
         createCheckVagrantVersionTask(project)
         createCheckVirtualBoxVersionTask(project)
         createPrepareVagrantTestEnvTask(project)
         createPackagingTestTask(project)
+        createPlatformTestTask(project)
     }
 
     private static void createVagrantBoxesTasks(Project project) {
@@ -377,12 +299,15 @@ class VagrantTestPlugin implements Plugin {
         assert project.tasks.virtualboxCheckVersion != null
         Task virtualboxCheckVersion = project.tasks.virtualboxCheckVersion
 
-        assert project.tasks.vagrantSetUp != null
-        Task vagrantSetUp = project.tasks.vagrantSetUp
+        assert project.tasks.setupBats != null
+        Task setupBats = project.tasks.setupBats
 
         assert project.tasks.packagingTest != null
         Task packagingTest = project.tasks.packagingTest
 
+        assert project.tasks.platformTest != null
+        Task platformTest = project.tasks.platformTest
+
         /*
          * We always use the main project.rootDir as Vagrant's current working directory (VAGRANT_CWD)
          * so that boxes are not duplicated for every Gradle project that uses this VagrantTestPlugin.
@@ -399,24 +324,23 @@ class VagrantTestPlugin implements Plugin {
 
             // always add a halt task for all boxes, so clean makes sure they are all shutdown
             Task halt = project.tasks.create("vagrant${boxTask}#halt", VagrantCommandTask) {
+                command 'halt'
                 boxName box
                 environmentVars vagrantEnvVars
-                args 'halt', box
             }
             stop.dependsOn(halt)
-            if (project.extensions.esvagrant.boxes.contains(box) == false) {
-                // we only need a halt task if this box was not specified
-                continue;
-            }
 
             Task update = project.tasks.create("vagrant${boxTask}#update", VagrantCommandTask) {
+                command 'box'
+                subcommand 'update'
                 boxName box
                 environmentVars vagrantEnvVars
-                args 'box', 'update', box
-                dependsOn vagrantCheckVersion, virtualboxCheckVersion, vagrantSetUp
+                dependsOn vagrantCheckVersion, virtualboxCheckVersion
             }
+            update.mustRunAfter(setupBats)
 
             Task up = project.tasks.create("vagrant${boxTask}#up", VagrantCommandTask) {
+                command 'up'
                 boxName box
                 environmentVars vagrantEnvVars
                /* It's important that we try to reprovision the box even if it already
@@ -429,7 +353,7 @@ class VagrantTestPlugin implements Plugin<Project> {
                  vagrant's default but it's possible to change that default and folks do.
                   But the boxes that we use are unlikely to work properly with other
                   virtualization providers. Thus the lock. */
-                args 'up', box, '--provision', '--provider', 'virtualbox'
+                args '--provision', '--provider', 'virtualbox'
                 /* It'd be possible to check if the box is already up here and output
                   SKIPPED but that would require running vagrant status which is slow! */
                 dependsOn update
@@ -444,14 +368,59 @@ class VagrantTestPlugin implements Plugin {
             }
             vagrantSmokeTest.dependsOn(smoke)
 
-            Task packaging = project.tasks.create("vagrant${boxTask}#packagingtest", BatsOverVagrantTask) {
+            Task packaging = project.tasks.create("vagrant${boxTask}#packagingTest", BatsOverVagrantTask) {
+                remoteCommand BATS_TEST_COMMAND
+                boxName box
+                environmentVars vagrantEnvVars
+                dependsOn up, setupBats
+                finalizedBy halt
+            }
+
+            TaskExecutionAdapter packagingReproListener = new TaskExecutionAdapter() {
+                @Override
+                void afterExecute(Task task, TaskState state) {
+                    if (state.failure != null) {
+                        println "REPRODUCE WITH: gradle ${packaging.path} " +
+                            "-Dtests.seed=${project.testSeed} "
+                    }
+                }
+            }
+            packaging.doFirst {
+                project.gradle.addListener(packagingReproListener)
+            }
+            packaging.doLast {
+                project.gradle.removeListener(packagingReproListener)
+            }
+            if (project.extensions.esvagrant.boxes.contains(box)) {
+                packagingTest.dependsOn(packaging)
+            }
+
+            Task platform = project.tasks.create("vagrant${boxTask}#platformTest", VagrantCommandTask) {
+                command 'ssh'
                 boxName box
                 environmentVars vagrantEnvVars
                 dependsOn up
                 finalizedBy halt
-                command BATS_TEST_COMMAND
+                args '--command', PLATFORM_TEST_COMMAND + " -Dtests.seed=${-> project.testSeed}"
+            }
+            TaskExecutionAdapter platformReproListener = new TaskExecutionAdapter() {
+                @Override
+                void afterExecute(Task task, TaskState state) {
+                    if (state.failure != null) {
+                        println "REPRODUCE WITH: gradle ${platform.path} " +
+                            "-Dtests.seed=${project.testSeed} "
+                    }
+                }
+            }
+            platform.doFirst {
+                project.gradle.addListener(platformReproListener)
+            }
+            platform.doLast {
+                project.gradle.removeListener(platformReproListener)
+            }
+            if (project.extensions.esvagrant.boxes.contains(box)) {
+                platformTest.dependsOn(platform)
             }
-            packagingTest.dependsOn(packaging)
         }
     }
 }
diff --git a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java
index cbfa31d1aaf5b..9bd14675d34a4 100644
--- a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java
+++ b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheck.java
@@ -28,6 +28,7 @@
 import java.nio.file.Paths;
 import java.nio.file.attribute.BasicFileAttributes;
 import java.util.HashSet;
+import java.util.Objects;
 import java.util.Set;
 
 /**
@@ -49,6 +50,7 @@ public static void main(String[] args) throws IOException {
         Path rootPath = null;
         boolean skipIntegTestsInDisguise = false;
         boolean selfTest = false;
+        boolean checkMainClasses = false;
         for (int i = 0; i < args.length; i++) {
             String arg = args[i];
             switch (arg) {
@@ -64,6 +66,9 @@ public static void main(String[] args) throws IOException {
                 case "--self-test":
                     selfTest = true;
                     break;
+                case "--main":
+                    checkMainClasses = true;
+                    break;
                 case "--":
                     rootPath = Paths.get(args[++i]);
                     break;
@@ -73,28 +78,43 @@ public static void main(String[] args) throws IOException {
         }
 
         NamingConventionsCheck check = new NamingConventionsCheck(testClass, integTestClass);
-        check.check(rootPath, skipIntegTestsInDisguise);
+        if (checkMainClasses) {
+            check.checkMain(rootPath);
+        } else {
+            check.checkTests(rootPath, skipIntegTestsInDisguise);
+        }
 
         if (selfTest) {
-            assertViolation("WrongName", check.missingSuffix);
-            assertViolation("WrongNameTheSecond", check.missingSuffix);
-            assertViolation("DummyAbstractTests", check.notRunnable);
-            assertViolation("DummyInterfaceTests", check.notRunnable);
-            assertViolation("InnerTests", check.innerClasses);
-            assertViolation("NotImplementingTests", check.notImplementing);
-            assertViolation("PlainUnit", check.pureUnitTest);
+            if (checkMainClasses) {
+                assertViolation(NamingConventionsCheckInMainTests.class.getName(), check.testsInMain);
+                assertViolation(NamingConventionsCheckInMainIT.class.getName(), check.testsInMain);
+            } else {
+                assertViolation("WrongName", check.missingSuffix);
+                assertViolation("WrongNameTheSecond", check.missingSuffix);
+                assertViolation("DummyAbstractTests", check.notRunnable);
+                assertViolation("DummyInterfaceTests", check.notRunnable);
+                assertViolation("InnerTests", check.innerClasses);
+                assertViolation("NotImplementingTests", check.notImplementing);
+                assertViolation("PlainUnit", check.pureUnitTest);
+            }
         }
 
         // Now we should have no violations
-        assertNoViolations("Not all subclasses of " + check.testClass.getSimpleName()
-                + " match the naming convention. Concrete classes must end with [Tests]", check.missingSuffix);
+        assertNoViolations(
+                "Not all subclasses of " + check.testClass.getSimpleName()
+                    + " match the naming convention. Concrete classes must end with [Tests]",
+                check.missingSuffix);
         assertNoViolations("Classes ending with [Tests] are abstract or interfaces", check.notRunnable);
         assertNoViolations("Found inner classes that are tests, which are excluded from the test runner", check.innerClasses);
         assertNoViolations("Pure Unit-Test found must subclass [" + check.testClass.getSimpleName() + "]", check.pureUnitTest);
         assertNoViolations("Classes ending with [Tests] must subclass [" + check.testClass.getSimpleName() + "]", check.notImplementing);
+        assertNoViolations(
+                "Classes ending with [Tests] or [IT] or extending [" + check.testClass.getSimpleName() + "] must be in src/test/java",
+                check.testsInMain);
         if (skipIntegTestsInDisguise == false) {
-            assertNoViolations("Subclasses of " + check.integTestClass.getSimpleName() +
-                    " should end with IT as they are integration tests", check.integTestsInDisguise);
+            assertNoViolations(
+                    "Subclasses of " + check.integTestClass.getSimpleName() + " should end with IT as they are integration tests",
+                    check.integTestsInDisguise);
         }
     }
 
@@ -104,84 +124,76 @@ public static void main(String[] args) throws IOException {
     private final Set<Class<?>> integTestsInDisguise = new HashSet<>();
     private final Set<Class<?>> notRunnable = new HashSet<>();
     private final Set<Class<?>> innerClasses = new HashSet<>();
+    private final Set<Class<?>> testsInMain = new HashSet<>();
 
     private final Class<?> testClass;
     private final Class<?> integTestClass;
 
     public NamingConventionsCheck(Class<?> testClass, Class<?> integTestClass) {
-        this.testClass = testClass;
+        this.testClass = Objects.requireNonNull(testClass, "--test-class is required");
         this.integTestClass = integTestClass;
     }
 
-    public void check(Path rootPath, boolean skipTestsInDisguised) throws IOException {
-        Files.walkFileTree(rootPath, new FileVisitor<Path>() {
-            /**
-             * The package name of the directory we are currently visiting. Kept as a string rather than something fancy because we load
-             * just about every class and doing so requires building a string out of it anyway. At least this way we don't need to build the
-             * first part of the string over and over and over again.
-             */
-            private String packageName;
-
+    public void checkTests(Path rootPath, boolean skipTestsInDisguised) throws IOException {
+        Files.walkFileTree(rootPath, new TestClassVisitor() {
             @Override
-            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
-                // First we visit the root directory
-                if (packageName == null) {
-                    // And it package is empty string regardless of the directory name
-                    packageName = "";
-                } else {
-                    packageName += dir.getFileName() + ".";
+            protected void visitTestClass(Class<?> clazz) {
+                if (skipTestsInDisguised == false && integTestClass.isAssignableFrom(clazz)) {
+                    integTestsInDisguise.add(clazz);
+                }
+                if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {
+                    notRunnable.add(clazz);
+                } else if (isTestCase(clazz) == false) {
+                    notImplementing.add(clazz);
+                } else if (Modifier.isStatic(clazz.getModifiers())) {
+                    innerClasses.add(clazz);
                 }
-                return FileVisitResult.CONTINUE;
             }
 
             @Override
-            public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
-                // Go up one package by jumping back to the second to last '.'
-                packageName = packageName.substring(0, 1 + packageName.lastIndexOf('.', packageName.length() - 2));
-                return FileVisitResult.CONTINUE;
+            protected void visitIntegrationTestClass(Class<?> clazz) {
+                if (isTestCase(clazz) == false) {
+                    notImplementing.add(clazz);
+                }
             }
 
             @Override
-            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
-                String filename = file.getFileName().toString();
-                if (filename.endsWith(".class")) {
-                    String className = filename.substring(0, filename.length() - ".class".length());
-                    Class<?> clazz = loadClassWithoutInitializing(packageName + className);
-                    if (clazz.getName().endsWith("Tests")) {
-                        if (skipTestsInDisguised == false && integTestClass.isAssignableFrom(clazz)) {
-                            integTestsInDisguise.add(clazz);
-                        }
-                        if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {
-                            notRunnable.add(clazz);
-                        } else if (isTestCase(clazz) == false) {
-                            notImplementing.add(clazz);
-                        } else if (Modifier.isStatic(clazz.getModifiers())) {
-                            innerClasses.add(clazz);
-                        }
-                    } else if (clazz.getName().endsWith("IT")) {
-                        if (isTestCase(clazz) == false) {
-                            notImplementing.add(clazz);
-                        }
-                    } else if (Modifier.isAbstract(clazz.getModifiers()) == false && Modifier.isInterface(clazz.getModifiers()) == false) {
-                        if (isTestCase(clazz)) {
-                            missingSuffix.add(clazz);
-                        } else if (junit.framework.Test.class.isAssignableFrom(clazz)) {
-                            pureUnitTest.add(clazz);
-                        }
-                    }
+            protected void visitOtherClass(Class<?> clazz) {
+                if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {
+                    return;
+                }
+                if (isTestCase(clazz)) {
+                    missingSuffix.add(clazz);
+                } else if (junit.framework.Test.class.isAssignableFrom(clazz)) {
+                    pureUnitTest.add(clazz);
                 }
-                return FileVisitResult.CONTINUE;
+            }
+        });
+    }
+
+    public void checkMain(Path rootPath) throws IOException {
+        Files.walkFileTree(rootPath, new TestClassVisitor() {
+            @Override
+            protected void visitTestClass(Class<?> clazz) {
+                testsInMain.add(clazz);
             }
 
-            private boolean isTestCase(Class<?> clazz) {
-                return testClass.isAssignableFrom(clazz);
+            @Override
+            protected void visitIntegrationTestClass(Class<?> clazz) {
+                testsInMain.add(clazz);
             }
 
             @Override
-            public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
-                throw exc;
+            protected void visitOtherClass(Class<?> clazz) {
+                if (Modifier.isAbstract(clazz.getModifiers()) || Modifier.isInterface(clazz.getModifiers())) {
+                    return;
+                }
+                if (isTestCase(clazz)) {
+                    testsInMain.add(clazz);
+                }
             }
         });
+
     }
 
     /**
@@ -203,7 +215,7 @@ private static void assertNoViolations(String message, Set<Class<?>> set) {
      * similar enough.
      */
     private static void assertViolation(String className, Set<Class<?>> set) {
-        className = "org.elasticsearch.test.NamingConventionsCheckBadClasses$" + className;
+        className = className.startsWith("org") ? className : "org.elasticsearch.test.NamingConventionsCheckBadClasses$" + className;
         if (false == set.remove(loadClassWithoutInitializing(className))) {
             System.err.println("Error in NamingConventionsCheck! Expected [" + className + "] to be a violation but wasn't.");
             System.exit(1);
@@ -229,4 +241,74 @@ static Class<?> loadClassWithoutInitializing(String name) {
             throw new RuntimeException(e);
         }
     }
+
+    abstract class TestClassVisitor implements FileVisitor<Path> {
+        /**
+         * The package name of the directory we are currently visiting. Kept as a string rather than something fancy because we load
+         * just about every class and doing so requires building a string out of it anyway. At least this way we don't need to build the
+         * first part of the string over and over and over again.
+         */
+        private String packageName;
+
+        /**
+         * Visit classes named like a test.
+         */
+        protected abstract void visitTestClass(Class<?> clazz);
+        /**
+         * Visit classes named like an integration test.
+         */
+        protected abstract void visitIntegrationTestClass(Class<?> clazz);
+        /**
+         * Visit classes not named like a test at all.
+         */
+        protected abstract void visitOtherClass(Class<?> clazz);
+
+        @Override
+        public final FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
+            // First we visit the root directory
+            if (packageName == null) {
+                // And its package is the empty string regardless of the directory name
+                packageName = "";
+            } else {
+                packageName += dir.getFileName() + ".";
+            }
+            return FileVisitResult.CONTINUE;
+        }
+
+        @Override
+        public final FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
+            // Go up one package by jumping back to the second to last '.'
+            packageName = packageName.substring(0, 1 + packageName.lastIndexOf('.', packageName.length() - 2));
+            return FileVisitResult.CONTINUE;
+        }
+
+        @Override
+        public final FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
+            String filename = file.getFileName().toString();
+            if (filename.endsWith(".class")) {
+                String className = filename.substring(0, filename.length() - ".class".length());
+                Class<?> clazz = loadClassWithoutInitializing(packageName + className);
+                if (clazz.getName().endsWith("Tests")) {
+                    visitTestClass(clazz);
+                } else if (clazz.getName().endsWith("IT")) {
+                    visitIntegrationTestClass(clazz);
+                } else {
+                    visitOtherClass(clazz);
+                }
+            }
+            return FileVisitResult.CONTINUE;
+        }
+
+        /**
+         * Is this class a test case?
+         */
+        protected boolean isTestCase(Class<?> clazz) {
+            return testClass.isAssignableFrom(clazz);
+        }
+
+        @Override
+        public final FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
+            throw exc;
+        }
+    }
 }
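
As a quick orientation for reviewers, the refactored checker now has two entry points: checkTests for test output directories and checkMain for main output directories. Below is a minimal, hedged sketch of driving it directly; the base classes and class-file paths are placeholders for whatever the Gradle task actually supplies via --test-class and its companion flags.

import java.nio.file.Paths;

public class NamingConventionsCheckDriver {
    public static void main(String[] args) throws Exception {
        // Illustrative base classes only; the build passes the real ones in.
        Class<?> testClass = Class.forName("org.apache.lucene.util.LuceneTestCase");
        Class<?> integTestClass = Class.forName("org.elasticsearch.test.ESIntegTestCase");
        NamingConventionsCheck check = new NamingConventionsCheck(testClass, integTestClass);
        // Test output directory: enforce the *Tests / *IT naming and inheritance rules.
        check.checkTests(Paths.get("build/classes/test"), false);
        // Main output directory: flag any test-like class that lives under src/main/java.
        check.checkMain(Paths.get("build/classes/main"));
    }
}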
diff --git a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainIT.java b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainIT.java
new file mode 100644
index 0000000000000..46adc7f065b16
--- /dev/null
+++ b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainIT.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.test;
+
+/**
+ * This class should fail the naming conventions self test.
+ */
+public class NamingConventionsCheckInMainIT {
+}
diff --git a/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainTests.java b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainTests.java
new file mode 100644
index 0000000000000..27c0b41eb3f6a
--- /dev/null
+++ b/buildSrc/src/main/java/org/elasticsearch/test/NamingConventionsCheckInMainTests.java
@@ -0,0 +1,26 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.test;
+
+/**
+ * This class should fail the naming conventions self test.
+ */
+public class NamingConventionsCheckInMainTests {
+}
diff --git a/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.test-with-dependencies.properties b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.test-with-dependencies.properties
new file mode 100644
index 0000000000000..bcb374a85c618
--- /dev/null
+++ b/buildSrc/src/main/resources/META-INF/gradle-plugins/elasticsearch.test-with-dependencies.properties
@@ -0,0 +1,20 @@
+#
+# Licensed to Elasticsearch under one or more contributor
+# license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright
+# ownership. Elasticsearch licenses this file to you under
+# the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+
+implementation-class=org.elasticsearch.gradle.test.TestWithDependenciesPlugin
diff --git a/buildSrc/src/main/resources/checkstyle_suppressions.xml b/buildSrc/src/main/resources/checkstyle_suppressions.xml
index c8251702484c2..678155c656170 100644
--- a/buildSrc/src/main/resources/checkstyle_suppressions.xml
+++ b/buildSrc/src/main/resources/checkstyle_suppressions.xml
@@ -1,7 +1,7 @@
+        "-//Puppy Crawl//DTD Suppressions 1.1//EN"
+        "http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
 [The remaining hunks of this file add and remove <suppress files="..." checks="..."/> entries; their attribute values are not preserved in this copy of the patch.]
\ No newline at end of file
diff --git a/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs b/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs
index 9bee5e587b03f..48c93f444ba2a 100644
--- a/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs
+++ b/buildSrc/src/main/resources/eclipse.settings/org.eclipse.jdt.core.prefs
@@ -1,6 +1,5 @@
 eclipse.preferences.version=1
 
-# previous configuration from maven build
 # this is merged with gradle's generated properties during 'gradle eclipse'
 
 # NOTE: null pointer analysis etc is not enabled currently, it seems very unstable
diff --git a/buildSrc/src/main/resources/forbidden/es-all-signatures.txt b/buildSrc/src/main/resources/forbidden/es-all-signatures.txt
index 37f03f4c91c28..f1d271d602ce1 100644
--- a/buildSrc/src/main/resources/forbidden/es-all-signatures.txt
+++ b/buildSrc/src/main/resources/forbidden/es-all-signatures.txt
@@ -26,13 +26,25 @@ java.util.concurrent.ThreadLocalRandom
 
 java.security.MessageDigest#clone() @ use org.elasticsearch.common.hash.MessageDigests
 
-@defaultMessage this should not have been added to lucene in the first place
-org.apache.lucene.index.IndexReader#getCombinedCoreAndDeletesKey()
-
-@defaultMessage Soon to be removed
-org.apache.lucene.document.FieldType#numericType()
-
 @defaultMessage Don't use MethodHandles in slow ways, don't be lenient in tests.
 java.lang.invoke.MethodHandle#invoke(java.lang.Object[])
 java.lang.invoke.MethodHandle#invokeWithArguments(java.lang.Object[])
 java.lang.invoke.MethodHandle#invokeWithArguments(java.util.List)
+
+@defaultMessage Don't open socket connections
+java.net.URL#openStream()
+java.net.URLConnection#connect()
+java.net.URLConnection#getInputStream()
+java.net.Socket#connect(java.net.SocketAddress)
+java.net.Socket#connect(java.net.SocketAddress, int)
+java.nio.channels.SocketChannel#open(java.net.SocketAddress)
+java.nio.channels.SocketChannel#connect(java.net.SocketAddress)
+
+# This method is misleading, and uses lenient boolean parsing under the hood. If you intend to parse
+# a system property as a boolean, use
+# org.elasticsearch.common.Booleans#parseBoolean(java.lang.String) on the result of
+# java.lang.System#getProperty(java.lang.String) instead. If you were not intending to parse
+# a system property as a boolean, but instead parse a string to a boolean, use
+# org.elasticsearch.common.Booleans#parseBoolean(java.lang.String) directly on the string.
+@defaultMessage use org.elasticsearch.common.Booleans#parseBoolean(java.lang.String)
+java.lang.Boolean#getBoolean(java.lang.String)
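
For readers unfamiliar with why Boolean.getBoolean is banned here, the replacement pattern the comment describes looks roughly like the sketch below; the property name is made up for illustration.

import org.elasticsearch.common.Booleans;

public class StrictBooleanProperty {
    public static void main(String[] args) {
        // Read the raw system property ourselves ("es.example.flag" is a made-up name) ...
        String raw = System.getProperty("es.example.flag", "false");
        // ... and parse it strictly. Unlike Boolean.getBoolean, which quietly treats anything
        // other than "true" as false, Booleans.parseBoolean rejects unexpected values.
        boolean enabled = Booleans.parseBoolean(raw);
        System.out.println("flag enabled: " + enabled);
    }
}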
diff --git a/buildSrc/src/main/resources/forbidden/es-core-signatures.txt b/buildSrc/src/main/resources/forbidden/es-core-signatures.txt
index 059be403a672f..6507f05be5cd3 100644
--- a/buildSrc/src/main/resources/forbidden/es-core-signatures.txt
+++ b/buildSrc/src/main/resources/forbidden/es-core-signatures.txt
@@ -36,16 +36,6 @@ org.apache.lucene.index.IndexReader#decRef()
 org.apache.lucene.index.IndexReader#incRef()
 org.apache.lucene.index.IndexReader#tryIncRef()
 
-@defaultMessage Close listeners can only installed via ElasticsearchDirectoryReader#addReaderCloseListener
-org.apache.lucene.index.IndexReader#addReaderClosedListener(org.apache.lucene.index.IndexReader$ReaderClosedListener)
-org.apache.lucene.index.IndexReader#removeReaderClosedListener(org.apache.lucene.index.IndexReader$ReaderClosedListener)
-
-@defaultMessage Pass the precision step from the mappings explicitly instead
-org.apache.lucene.search.LegacyNumericRangeQuery#newDoubleRange(java.lang.String,java.lang.Double,java.lang.Double,boolean,boolean)
-org.apache.lucene.search.LegacyNumericRangeQuery#newFloatRange(java.lang.String,java.lang.Float,java.lang.Float,boolean,boolean)
-org.apache.lucene.search.LegacyNumericRangeQuery#newIntRange(java.lang.String,java.lang.Integer,java.lang.Integer,boolean,boolean)
-org.apache.lucene.search.LegacyNumericRangeQuery#newLongRange(java.lang.String,java.lang.Long,java.lang.Long,boolean,boolean)
-
 @defaultMessage Only use wait / notify when really needed try to use concurrency primitives, latches or callbacks instead.
 java.lang.Object#wait()
 java.lang.Object#wait(long)
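
The retained rule above steers contributors toward latches and callbacks; a minimal, self-contained illustration of latch-based waiting (plain JDK, nothing Elasticsearch-specific) is:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchInsteadOfWaitNotify {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            // ... do the actual work here ...
            done.countDown(); // signal completion exactly once
        });
        worker.start();
        // Wait with a timeout instead of Object.wait()/notify() on a shared monitor.
        if (done.await(30, TimeUnit.SECONDS) == false) {
            throw new AssertionError("worker did not finish in time");
        }
    }
}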
diff --git a/buildSrc/src/main/resources/forbidden/http-signatures.txt b/buildSrc/src/main/resources/forbidden/http-signatures.txt
new file mode 100644
index 0000000000000..dcf20bbb09387
--- /dev/null
+++ b/buildSrc/src/main/resources/forbidden/http-signatures.txt
@@ -0,0 +1,45 @@
+# Licensed to Elasticsearch under one or more contributor
+# license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright
+# ownership. Elasticsearch licenses this file to you under
+# the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance  with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on
+# an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
+# either express or implied. See the License for the specific
+# language governing permissions and limitations under the License.
+
+@defaultMessage Explicitly specify the ContentType of HTTP entities when creating
+org.apache.http.entity.StringEntity#<init>(java.lang.String)
+org.apache.http.entity.StringEntity#<init>(java.lang.String,java.lang.String)
+org.apache.http.entity.StringEntity#<init>(java.lang.String,java.nio.charset.Charset)
+org.apache.http.entity.ByteArrayEntity#<init>(byte[])
+org.apache.http.entity.ByteArrayEntity#<init>(byte[],int,int)
+org.apache.http.entity.FileEntity#<init>(java.io.File)
+org.apache.http.entity.InputStreamEntity#<init>(java.io.InputStream)
+org.apache.http.entity.InputStreamEntity#<init>(java.io.InputStream,long)
+org.apache.http.nio.entity.NByteArrayEntity#<init>(byte[])
+org.apache.http.nio.entity.NByteArrayEntity#<init>(byte[],int,int)
+org.apache.http.nio.entity.NFileEntity#<init>(java.io.File)
+org.apache.http.nio.entity.NStringEntity#<init>(java.lang.String)
+org.apache.http.nio.entity.NStringEntity#<init>(java.lang.String,java.lang.String)
+
+@defaultMessage Use non-deprecated constructors
+org.apache.http.nio.entity.NFileEntity#<init>(java.io.File,java.lang.String)
+org.apache.http.nio.entity.NFileEntity#<init>(java.io.File,java.lang.String,boolean)
+org.apache.http.entity.FileEntity#<init>(java.io.File,java.lang.String)
+org.apache.http.entity.StringEntity#<init>(java.lang.String,java.lang.String,java.lang.String)
+
+@defaultMessage BasicEntity is easy to mess up and forget to set content type
+org.apache.http.entity.BasicHttpEntity#<init>()
+
+@defaultMessage EntityTemplate is easy to mess up and forget to set content type
+org.apache.http.entity.EntityTemplate#<init>(org.apache.http.entity.ContentProducer)
+
+@defaultMessage SerializableEntity uses java serialization and makes it easy to forget to set content type
+org.apache.http.entity.SerializableEntity#<init>(java.io.Serializable)
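
To show the pattern these signatures push callers toward, here is a small hedged example of constructing entities with an explicit ContentType; the JSON body is invented for illustration.

import org.apache.http.HttpEntity;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.StringEntity;
import org.apache.http.nio.entity.NStringEntity;

public class ExplicitContentTypeEntities {
    public static void main(String[] args) {
        String json = "{\"query\":{\"match_all\":{}}}"; // example payload only
        // Blocking client entity: the ContentType carries both media type and charset.
        HttpEntity entity = new StringEntity(json, ContentType.APPLICATION_JSON);
        // Async (nio) client entity: the same rule applies.
        HttpEntity nioEntity = new NStringEntity(json, ContentType.APPLICATION_JSON);
        System.out.println(entity.getContentType() + " / " + nioEntity.getContentType());
    }
}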
diff --git a/buildSrc/src/main/resources/plugin-descriptor.properties b/buildSrc/src/main/resources/plugin-descriptor.properties
index ebde46d326ba9..67c6ee39968cd 100644
--- a/buildSrc/src/main/resources/plugin-descriptor.properties
+++ b/buildSrc/src/main/resources/plugin-descriptor.properties
@@ -30,11 +30,15 @@ name=${name}
 # 'classname': the name of the class to load, fully-qualified.
 classname=${classname}
 #
-# 'java.version' version of java the code is built against
+# 'java.version': version of java the code is built against
 # use the system property java.specification.version
 # version string must be a sequence of nonnegative decimal integers
 # separated by "."'s and may have leading zeros
 java.version=${javaVersion}
 #
-# 'elasticsearch.version' version of elasticsearch compiled against
+# 'elasticsearch.version': version of elasticsearch compiled against
 elasticsearch.version=${elasticsearchVersion}
+### optional elements for plugins:
+#
+# 'has.native.controller': whether or not the plugin has a native controller
+has.native.controller=${hasNativeController}
diff --git a/buildSrc/version.properties b/buildSrc/version.properties
index 15d2f32096221..e7243b9dad9ee 100644
--- a/buildSrc/version.properties
+++ b/buildSrc/version.properties
@@ -1,25 +1,31 @@
-elasticsearch     = 6.0.0-alpha1
-lucene            = 6.4.0-snapshot-084f7a0
+# When updating elasticsearch, please update 'rest' version in core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy
+elasticsearch     = 6.0.0-alpha3
+lucene            = 7.0.0-snapshot-a0aef2f
 
 # optional dependencies
 spatial4j         = 0.6
 jts               = 1.13
-jackson           = 2.8.1
+jackson           = 2.8.6
 snakeyaml         = 1.15
 # When updating log4j, please update also docs/java-api/index.asciidoc
-log4j             = 2.7
+log4j             = 2.8.2
 slf4j             = 1.6.2
-jna               = 4.2.2
+jna               = 4.4.0
 
 # test dependencies
-randomizedrunner  = 2.4.0
-junit             = 4.11
+randomizedrunner  = 2.5.0
+junit             = 4.12
 httpclient        = 4.5.2
+# When updating httpcore, please also update core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy
 httpcore          = 4.4.5
+# When updating httpasyncclient, please also update core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy
+httpasyncclient   = 4.1.2
 commonslogging    = 1.1.3
 commonscodec      = 1.10
 hamcrest          = 1.3
 securemock        = 1.2
+# When updating mocksocket, please also update core/src/main/resources/org/elasticsearch/bootstrap/test-framework.policy
 mocksocket        = 1.1
+
 # benchmark dependencies
 jmh               = 1.17.3
diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java
index 214a75d12cc01..e9cde26e6c870 100644
--- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java
+++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/ops/bulk/BulkBenchmarkTask.java
@@ -95,7 +95,7 @@ private static final class LoadGenerator {
         private final BlockingQueue<List<String>> bulkQueue;
         private final int bulkSize;
 
-        public LoadGenerator(Path bulkDataFile, BlockingQueue<List<String>> bulkQueue, int bulkSize) {
+        LoadGenerator(Path bulkDataFile, BlockingQueue<List<String>> bulkQueue, int bulkSize) {
             this.bulkDataFile = bulkDataFile;
             this.bulkQueue = bulkQueue;
             this.bulkSize = bulkSize;
@@ -143,7 +143,7 @@ private static final class BulkIndexer implements Runnable {
         private final BulkRequestExecutor bulkRequestExecutor;
         private final SampleRecorder sampleRecorder;
 
-        public BulkIndexer(BlockingQueue<List<String>> bulkData, int warmupIterations, int measurementIterations,
+        BulkIndexer(BlockingQueue<List<String>> bulkData, int warmupIterations, int measurementIterations,
                            SampleRecorder sampleRecorder, BulkRequestExecutor bulkRequestExecutor) {
             this.bulkData = bulkData;
             this.warmupIterations = warmupIterations;
diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java
index b342d93fba5a1..9210526e7c81c 100644
--- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java
+++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/rest/RestClientBenchmark.java
@@ -73,7 +73,7 @@ private static final class RestBulkRequestExecutor implements BulkRequestExecuto
         private final RestClient client;
         private final String actionMetaData;
 
-        public RestBulkRequestExecutor(RestClient client, String index, String type) {
+        RestBulkRequestExecutor(RestClient client, String index, String type) {
             this.client = client;
             this.actionMetaData = String.format(Locale.ROOT, "{ \"index\" : { \"_index\" : \"%s\", \"_type\" : \"%s\" } }%n", index, type);
         }
diff --git a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java
index 6d6e5ade8275a..d2aee2251a67b 100644
--- a/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java
+++ b/client/benchmark/src/main/java/org/elasticsearch/client/benchmark/transport/TransportClientBenchmark.java
@@ -28,6 +28,7 @@
 import org.elasticsearch.client.transport.TransportClient;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.transport.TransportAddress;
+import org.elasticsearch.common.xcontent.XContentType;
 import org.elasticsearch.index.query.QueryBuilders;
 import org.elasticsearch.plugin.noop.NoopPlugin;
 import org.elasticsearch.plugin.noop.action.bulk.NoopBulkAction;
@@ -70,7 +71,7 @@ private static final class TransportBulkRequestExecutor implements BulkRequestEx
         private final String indexName;
         private final String typeName;
 
-        public TransportBulkRequestExecutor(TransportClient client, String indexName, String typeName) {
+        TransportBulkRequestExecutor(TransportClient client, String indexName, String typeName) {
             this.client = client;
             this.indexName = indexName;
             this.typeName = typeName;
@@ -80,7 +81,7 @@ public TransportBulkRequestExecutor(TransportClient client, String indexName, St
         public boolean bulkIndex(List<String> bulkData) {
             NoopBulkRequestBuilder builder = NoopBulkAction.INSTANCE.newRequestBuilder(client);
             for (String bulkItem : bulkData) {
-                builder.add(new IndexRequest(indexName, typeName).source(bulkItem.getBytes(StandardCharsets.UTF_8)));
+                builder.add(new IndexRequest(indexName, typeName).source(bulkItem.getBytes(StandardCharsets.UTF_8), XContentType.JSON));
             }
             BulkResponse bulkResponse;
             try {
diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java
index ac45f20dc2587..e8ed27715c10a 100644
--- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java
+++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/NoopPlugin.java
@@ -23,15 +23,23 @@
 import org.elasticsearch.plugin.noop.action.bulk.TransportNoopBulkAction;
 import org.elasticsearch.action.ActionRequest;
 import org.elasticsearch.action.ActionResponse;
+import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver;
+import org.elasticsearch.cluster.node.DiscoveryNodes;
+import org.elasticsearch.common.settings.ClusterSettings;
+import org.elasticsearch.common.settings.IndexScopedSettings;
+import org.elasticsearch.common.settings.Settings;
+import org.elasticsearch.common.settings.SettingsFilter;
 import org.elasticsearch.plugin.noop.action.search.NoopSearchAction;
 import org.elasticsearch.plugin.noop.action.search.RestNoopSearchAction;
 import org.elasticsearch.plugin.noop.action.search.TransportNoopSearchAction;
 import org.elasticsearch.plugins.ActionPlugin;
 import org.elasticsearch.plugins.Plugin;
+import org.elasticsearch.rest.RestController;
 import org.elasticsearch.rest.RestHandler;
 
 import java.util.Arrays;
 import java.util.List;
+import java.util.function.Supplier;
 
 public class NoopPlugin extends Plugin implements ActionPlugin {
     @Override
@@ -43,7 +51,11 @@ public class NoopPlugin extends Plugin implements ActionPlugin {
     }
 
     @Override
-    public List<Class<? extends RestHandler>> getRestHandlers() {
-        return Arrays.asList(RestNoopBulkAction.class, RestNoopSearchAction.class);
+    public List<RestHandler> getRestHandlers(Settings settings, RestController restController, ClusterSettings clusterSettings,
+            IndexScopedSettings indexScopedSettings, SettingsFilter settingsFilter, IndexNameExpressionResolver indexNameExpressionResolver,
+            Supplier<DiscoveryNodes> nodesInCluster) {
+        return Arrays.asList(
+                new RestNoopBulkAction(settings, restController),
+                new RestNoopSearchAction(settings, restController));
     }
 }
diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java
index ceaf9f8cc9d17..1034e722e8789 100644
--- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java
+++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/NoopBulkRequestBuilder.java
@@ -33,6 +33,7 @@
 import org.elasticsearch.client.ElasticsearchClient;
 import org.elasticsearch.common.Nullable;
 import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.common.xcontent.XContentType;
 
 public class NoopBulkRequestBuilder extends ActionRequestBuilder<BulkRequest, BulkResponse, NoopBulkRequestBuilder>
         implements WriteRequestBuilder<NoopBulkRequestBuilder> {
@@ -95,17 +96,17 @@ public NoopBulkRequestBuilder add(UpdateRequestBuilder request) {
     /**
      * Adds a framed data in binary format
      */
-    public NoopBulkRequestBuilder add(byte[] data, int from, int length) throws Exception {
-        request.add(data, from, length, null, null);
+    public NoopBulkRequestBuilder add(byte[] data, int from, int length, XContentType xContentType) throws Exception {
+        request.add(data, from, length, null, null, xContentType);
         return this;
     }
 
     /**
      * Adds a framed data in binary format
      */
-    public NoopBulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType)
-        throws Exception {
-        request.add(data, from, length, defaultIndex, defaultType);
+    public NoopBulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType,
+                                      XContentType xContentType) throws Exception {
+        request.add(data, from, length, defaultIndex, defaultType, xContentType);
         return this;
     }
 
diff --git a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java
index 06082ed7d294c..ca5f32205674c 100644
--- a/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java
+++ b/client/client-benchmark-noop-api-plugin/src/main/java/org/elasticsearch/plugin/noop/action/bulk/RestNoopBulkAction.java
@@ -18,8 +18,8 @@
  */
 package org.elasticsearch.plugin.noop.action.bulk;
 
-import org.elasticsearch.action.DocWriteResponse;
 import org.elasticsearch.action.DocWriteRequest;
+import org.elasticsearch.action.DocWriteResponse;
 import org.elasticsearch.action.bulk.BulkItemResponse;
 import org.elasticsearch.action.bulk.BulkRequest;
 import org.elasticsearch.action.bulk.BulkShardRequest;
@@ -28,7 +28,6 @@
 import org.elasticsearch.client.Requests;
 import org.elasticsearch.client.node.NodeClient;
 import org.elasticsearch.common.Strings;
-import org.elasticsearch.common.inject.Inject;
 import org.elasticsearch.common.settings.Settings;
 import org.elasticsearch.common.xcontent.XContentBuilder;
 import org.elasticsearch.index.shard.ShardId;
@@ -47,7 +46,6 @@
 import static org.elasticsearch.rest.RestStatus.OK;
 
 public class RestNoopBulkAction extends BaseRestHandler {
-    @Inject
     public RestNoopBulkAction(Settings settings, RestController controller) {
         super(settings);
 
@@ -59,6 +57,11 @@ public RestNoopBulkAction(Settings settings, RestController controller) {
         controller.registerHandler(PUT, "/{index}/{type}/_noop_bulk", this);
     }
 
+    @Override
+    public String getName() {
+        return "noop_bulk_action";
+    }
+
     @Override
     public RestChannelConsumer prepareRequest(final RestRequest request, final NodeClient client) throws IOException {
         BulkRequest bulkRequest = Requests.bulkRequest();
@@ -75,7 +78,8 @@ public RestChannelConsumer prepareRequest(final RestRequest request, final NodeC
         }
         bulkRequest.timeout(request.paramAsTime("timeout", BulkShardRequest.DEFAULT_TIMEOUT));
         bulkRequest.setRefreshPolicy(request.param("refresh"));
-        bulkRequest.add(request.content(), defaultIndex, defaultType, defaultRouting, defaultFields, null, defaultPipeline, null, true);
+        bulkRequest.add(request.requiredContent(), defaultIndex, defaultType, defaultRouting, defaultFields,
+            null, defaultPipeline, null, true, request.getXContentType());
 
         // short circuit the call to the transport layer
         return channel -> {
@@ -91,7 +95,7 @@ private static class BulkRestBuilderListener extends RestBuilderListener listener) {
         listener.onResponse(new SearchResponse(new InternalSearchResponse(
-            new InternalSearchHits(
-                new InternalSearchHit[0], 0L, 0.0f),
+            new SearchHits(
+                new SearchHit[0], 0L, 0.0f),
             new InternalAggregations(Collections.emptyList()),
             new Suggest(Collections.emptyList()),
-            new SearchProfileShardResults(Collections.emptyMap()), false, false), "", 1, 1, 0, new ShardSearchFailure[0]));
+            new SearchProfileShardResults(Collections.emptyMap()), false, false, 1), "", 1, 1, 0, new ShardSearchFailure[0]));
     }
 }
diff --git a/client/rest-high-level/build.gradle b/client/rest-high-level/build.gradle
index 162e8608d4431..9203b8978fd05 100644
--- a/client/rest-high-level/build.gradle
+++ b/client/rest-high-level/build.gradle
@@ -1,3 +1,5 @@
+import org.elasticsearch.gradle.precommit.PrecommitTasks
+
 /*
  * Licensed to Elasticsearch under one or more contributor
  * license agreements. See the NOTICE file distributed with
@@ -24,6 +26,8 @@ group = 'org.elasticsearch.client'
 dependencies {
   compile "org.elasticsearch:elasticsearch:${version}"
   compile "org.elasticsearch.client:rest:${version}"
+  compile "org.elasticsearch.plugin:parent-join-client:${version}"
+  compile "org.elasticsearch.plugin:aggs-matrix-stats-client:${version}"
 
   testCompile "org.elasticsearch.client:test:${version}"
   testCompile "org.elasticsearch.test:framework:${version}"
@@ -39,3 +43,9 @@ dependencyLicenses {
     it.group.startsWith('org.elasticsearch') == false
   }
 }
+
+forbiddenApisMain {
+  // core does not depend on the httpclient for compile so we add the signatures here. We don't add them for test as they are already
+  // specified
+  signaturesURLs += [PrecommitTasks.getResource('/forbidden/http-signatures.txt')]
+}
\ No newline at end of file
diff --git a/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java b/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java
new file mode 100644
index 0000000000000..9e881cf7b9add
--- /dev/null
+++ b/client/rest-high-level/src/main/java/org/elasticsearch/client/Request.java
@@ -0,0 +1,548 @@
+/*
+ * Licensed to Elasticsearch under one or more contributor
+ * license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright
+ * ownership. Elasticsearch licenses this file to you under
+ * the Apache License, Version 2.0 (the "License"); you may
+ * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.elasticsearch.client;
+
+import org.apache.http.HttpEntity;
+import org.apache.http.client.methods.HttpDelete;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpHead;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.client.methods.HttpPut;
+import org.apache.http.entity.ByteArrayEntity;
+import org.apache.http.entity.ContentType;
+import org.apache.lucene.util.BytesRef;
+import org.elasticsearch.action.DocWriteRequest;
+import org.elasticsearch.action.bulk.BulkRequest;
+import org.elasticsearch.action.delete.DeleteRequest;
+import org.elasticsearch.action.get.GetRequest;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.action.search.ClearScrollRequest;
+import org.elasticsearch.action.search.SearchRequest;
+import org.elasticsearch.action.search.SearchScrollRequest;
+import org.elasticsearch.action.support.ActiveShardCount;
+import org.elasticsearch.action.support.IndicesOptions;
+import org.elasticsearch.action.support.WriteRequest;
+import org.elasticsearch.action.update.UpdateRequest;
+import org.elasticsearch.common.Nullable;
+import org.elasticsearch.common.Strings;
+import org.elasticsearch.common.bytes.BytesReference;
+import org.elasticsearch.common.lucene.uid.Versions;
+import org.elasticsearch.common.unit.TimeValue;
+import org.elasticsearch.common.xcontent.NamedXContentRegistry;
+import org.elasticsearch.common.xcontent.ToXContent;
+import org.elasticsearch.common.xcontent.XContentBuilder;
+import org.elasticsearch.common.xcontent.XContentHelper;
+import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.index.VersionType;
+import org.elasticsearch.rest.action.search.RestSearchAction;
+import org.elasticsearch.search.fetch.subphase.FetchSourceContext;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.Locale;
+import java.util.Map;
+import java.util.StringJoiner;
+
+final class Request {
+
+    static final XContentType REQUEST_BODY_CONTENT_TYPE = XContentType.JSON;
+
+    final String method;
+    final String endpoint;
+    final Map<String, String> params;
+    final HttpEntity entity;
+
+    Request(String method, String endpoint, Map<String, String> params, HttpEntity entity) {
+        this.method = method;
+        this.endpoint = endpoint;
+        this.params = params;
+        this.entity = entity;
+    }
+
+    @Override
+    public String toString() {
+        return "Request{" +
+                "method='" + method + '\'' +
+                ", endpoint='" + endpoint + '\'' +
+                ", params=" + params +
+                ", hasBody=" + (entity != null) +
+                '}';
+    }
+
+    static Request delete(DeleteRequest deleteRequest) {
+        String endpoint = endpoint(deleteRequest.index(), deleteRequest.type(), deleteRequest.id());
+
+        Params parameters = Params.builder();
+        parameters.withRouting(deleteRequest.routing());
+        parameters.withParent(deleteRequest.parent());
+        parameters.withTimeout(deleteRequest.timeout());
+        parameters.withVersion(deleteRequest.version());
+        parameters.withVersionType(deleteRequest.versionType());
+        parameters.withRefreshPolicy(deleteRequest.getRefreshPolicy());
+        parameters.withWaitForActiveShards(deleteRequest.waitForActiveShards());
+
+        return new Request(HttpDelete.METHOD_NAME, endpoint, parameters.getParams(), null);
+    }
+
+    static Request info() {
+        return new Request(HttpGet.METHOD_NAME, "/", Collections.emptyMap(), null);
+    }
+
+    static Request bulk(BulkRequest bulkRequest) throws IOException {
+        Params parameters = Params.builder();
+        parameters.withTimeout(bulkRequest.timeout());
+        parameters.withRefreshPolicy(bulkRequest.getRefreshPolicy());
+
+        // The Bulk API only supports newline-delimited JSON or Smile. Before executing
+        // the bulk, we need to check that all requests have the same content-type
+        // and that this content-type is supported by the Bulk API.
+        XContentType bulkContentType = null;
+        for (int i = 0; i < bulkRequest.numberOfActions(); i++) {
+            DocWriteRequest request = bulkRequest.requests().get(i);
+
+            DocWriteRequest.OpType opType = request.opType();
+            if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) {
+                bulkContentType = enforceSameContentType((IndexRequest) request, bulkContentType);
+
+            } else if (opType == DocWriteRequest.OpType.UPDATE) {
+                UpdateRequest updateRequest = (UpdateRequest) request;
+                if (updateRequest.doc() != null) {
+                    bulkContentType = enforceSameContentType(updateRequest.doc(), bulkContentType);
+                }
+                if (updateRequest.upsertRequest() != null) {
+                    bulkContentType = enforceSameContentType(updateRequest.upsertRequest(), bulkContentType);
+                }
+            }
+        }
+
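+        // no content-type could be determined from the actions (e.g. a bulk made of delete requests only): default to JSON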
+        if (bulkContentType == null) {
+            bulkContentType = XContentType.JSON;
+        }
+
+        byte separator = bulkContentType.xContent().streamSeparator();
+        ContentType requestContentType = ContentType.create(bulkContentType.mediaType());
+
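+        // each action is written as a metadata line, followed by a source line for index/create/update requests,
+        // every line being terminated by the content type's stream separator (a newline for JSON)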
+        ByteArrayOutputStream content = new ByteArrayOutputStream();
+        for (DocWriteRequest request : bulkRequest.requests()) {
+            DocWriteRequest.OpType opType = request.opType();
+
+            try (XContentBuilder metadata = XContentBuilder.builder(bulkContentType.xContent())) {
+                metadata.startObject();
+                {
+                    metadata.startObject(opType.getLowercase());
+                    if (Strings.hasLength(request.index())) {
+                        metadata.field("_index", request.index());
+                    }
+                    if (Strings.hasLength(request.type())) {
+                        metadata.field("_type", request.type());
+                    }
+                    if (Strings.hasLength(request.id())) {
+                        metadata.field("_id", request.id());
+                    }
+                    if (Strings.hasLength(request.routing())) {
+                        metadata.field("_routing", request.routing());
+                    }
+                    if (Strings.hasLength(request.parent())) {
+                        metadata.field("_parent", request.parent());
+                    }
+                    if (request.version() != Versions.MATCH_ANY) {
+                        metadata.field("_version", request.version());
+                    }
+
+                    VersionType versionType = request.versionType();
+                    if (versionType != VersionType.INTERNAL) {
+                        if (versionType == VersionType.EXTERNAL) {
+                            metadata.field("_version_type", "external");
+                        } else if (versionType == VersionType.EXTERNAL_GTE) {
+                            metadata.field("_version_type", "external_gte");
+                        } else if (versionType == VersionType.FORCE) {
+                            metadata.field("_version_type", "force");
+                        }
+                    }
+
+                    if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) {
+                        IndexRequest indexRequest = (IndexRequest) request;
+                        if (Strings.hasLength(indexRequest.getPipeline())) {
+                            metadata.field("pipeline", indexRequest.getPipeline());
+                        }
+                    } else if (opType == DocWriteRequest.OpType.UPDATE) {
+                        UpdateRequest updateRequest = (UpdateRequest) request;
+                        if (updateRequest.retryOnConflict() > 0) {
+                            metadata.field("_retry_on_conflict", updateRequest.retryOnConflict());
+                        }
+                        if (updateRequest.fetchSource() != null) {
+                            metadata.field("_source", updateRequest.fetchSource());
+                        }
+                    }
+                    metadata.endObject();
+                }
+                metadata.endObject();
+
+                BytesRef metadataSource = metadata.bytes().toBytesRef();
+                content.write(metadataSource.bytes, metadataSource.offset, metadataSource.length);
+                content.write(separator);
+            }
+
+            BytesRef source = null;
+            if (opType == DocWriteRequest.OpType.INDEX || opType == DocWriteRequest.OpType.CREATE) {
+                IndexRequest indexRequest = (IndexRequest) request;
+                BytesReference indexSource = indexRequest.source();
+                XContentType indexXContentType = indexRequest.getContentType();
+
+                try (XContentParser parser = XContentHelper.createParser(NamedXContentRegistry.EMPTY, indexSource, indexXContentType)) {
+                    try (XContentBuilder builder = XContentBuilder.builder(bulkContentType.xContent())) {
+                        builder.copyCurrentStructure(parser);
+                        source = builder.bytes().toBytesRef();
+                    }
+                }
+            } else if (opType == DocWriteRequest.OpType.UPDATE) {
+                source = XContentHelper.toXContent((UpdateRequest) request, bulkContentType, false).toBytesRef();
+            }
+
+            if (source != null) {
+                content.write(source.bytes, source.offset, source.length);
+                content.write(separator);
+            }
+        }
+
+        HttpEntity entity = new ByteArrayEntity(content.toByteArray(), 0, content.size(), requestContentType);
+        return new Request(HttpPost.METHOD_NAME, "/_bulk", parameters.getParams(), entity);
+    }
+
+    static Request exists(GetRequest getRequest) {
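+        // the exists request reuses the endpoint and parameters of the corresponding get request, but issues a HEAD request with no body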
+        Request request = get(getRequest);
+        return new Request(HttpHead.METHOD_NAME, request.endpoint, request.params, null);
+    }
+
+    static Request get(GetRequest getRequest) {
+        String endpoint = endpoint(getRequest.index(), getRequest.type(), getRequest.id());
+
+        Params parameters = Params.builder();
+        parameters.withPreference(getRequest.preference());
+        parameters.withRouting(getRequest.routing());
+        parameters.withParent(getRequest.parent());
+        parameters.withRefresh(getRequest.refresh());
+        parameters.withRealtime(getRequest.realtime());
+        parameters.withStoredFields(getRequest.storedFields());
+        parameters.withVersion(getRequest.version());
+        parameters.withVersionType(getRequest.versionType());
+        parameters.withFetchSourceContext(getRequest.fetchSourceContext());
+
+        return new Request(HttpGet.METHOD_NAME, endpoint, parameters.getParams(), null);
+    }
+
+    static Request index(IndexRequest indexRequest) {
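+        // use PUT when the request provides a document id, POST otherwise so that Elasticsearch generates one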
+        String method = Strings.hasLength(indexRequest.id()) ? HttpPut.METHOD_NAME : HttpPost.METHOD_NAME;
+
+        boolean isCreate = (indexRequest.opType() == DocWriteRequest.OpType.CREATE);
+        String endpoint = endpoint(indexRequest.index(), indexRequest.type(), indexRequest.id(), isCreate ? "_create" : null);
+
+        Params parameters = Params.builder();
+        parameters.withRouting(indexRequest.routing());
+        parameters.withParent(indexRequest.parent());
+        parameters.withTimeout(indexRequest.timeout());
+        parameters.withVersion(indexRequest.version());
+        parameters.withVersionType(indexRequest.versionType());
+        parameters.withPipeline(indexRequest.getPipeline());
+        parameters.withRefreshPolicy(indexRequest.getRefreshPolicy());
+        parameters.withWaitForActiveShards(indexRequest.waitForActiveShards());
+
+        BytesRef source = indexRequest.source().toBytesRef();
+        ContentType contentType = ContentType.create(indexRequest.getContentType().mediaType());
+        HttpEntity entity = new ByteArrayEntity(source.bytes, source.offset, source.length, contentType);
+
+        return new Request(method, endpoint, parameters.getParams(), entity);
+    }
+
+    static Request ping() {
+        return new Request(HttpHead.METHOD_NAME, "/", Collections.emptyMap(), null);
+    }
+
+    static Request update(UpdateRequest updateRequest) throws IOException {
+        String endpoint = endpoint(updateRequest.index(), updateRequest.type(), updateRequest.id(), "_update");
+
+        Params parameters = Params.builder();
+        parameters.withRouting(updateRequest.routing());
+        parameters.withParent(updateRequest.parent());
+        parameters.withTimeout(updateRequest.timeout());
+        parameters.withRefreshPolicy(updateRequest.getRefreshPolicy());
+        parameters.withWaitForActiveShards(updateRequest.waitForActiveShards());
+        parameters.withDocAsUpsert(updateRequest.docAsUpsert());
+        parameters.withFetchSourceContext(updateRequest.fetchSource());
+        parameters.withRetryOnConflict(updateRequest.retryOnConflict());
+        parameters.withVersion(updateRequest.version());
+        parameters.withVersionType(updateRequest.versionType());
+
+        // The Java API allows update requests with different content types
+        // set for the partial document and the upsert document. This client
+        // only accepts update requests that have the same content types set
+        // for both doc and upsert.
+        XContentType xContentType = null;
+        if (updateRequest.doc() != null) {
+            xContentType = updateRequest.doc().getContentType();
+        }
+        if (updateRequest.upsertRequest() != null) {
+            XContentType upsertContentType = updateRequest.upsertRequest().getContentType();
+            if ((xContentType != null) && (xContentType != upsertContentType)) {
+                throw new IllegalStateException("Update request cannot have different content types for doc [" + xContentType + "]" +
+                        " and upsert [" + upsertContentType + "] documents");
+            } else {
+                xContentType = upsertContentType;
+            }
+        }
+        if (xContentType == null) {
+            xContentType = Requests.INDEX_CONTENT_TYPE;
+        }
+
+        HttpEntity entity = createEntity(updateRequest, xContentType);
+        return new Request(HttpPost.METHOD_NAME, endpoint, parameters.getParams(), entity);
+    }
+
+    static Request search(SearchRequest searchRequest) throws IOException {
+        String endpoint = endpoint(searchRequest.indices(), searchRequest.types(), "_search");
+        Params params = Params.builder();
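+        // always request typed keys so that aggregation and suggestion names in the response are prefixed with their type,
+        // which is what the response parsing code relies on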
+        params.putParam(RestSearchAction.TYPED_KEYS_PARAM, "true");
+        params.withRouting(searchRequest.routing());
+        params.withPreference(searchRequest.preference());
+        params.withIndicesOptions(searchRequest.indicesOptions());
+        params.putParam("search_type", searchRequest.searchType().name().toLowerCase(Locale.ROOT));
+        if (searchRequest.requestCache() != null) {
+            params.putParam("request_cache", Boolean.toString(searchRequest.requestCache()));
+        }
+        params.putParam("batched_reduce_size", Integer.toString(searchRequest.getBatchedReduceSize()));
+        if (searchRequest.scroll() != null) {
+            params.putParam("scroll", searchRequest.scroll().keepAlive());
+        }
+        HttpEntity entity = null;
+        if (searchRequest.source() != null) {
+            entity = createEntity(searchRequest.source(), REQUEST_BODY_CONTENT_TYPE);
+        }
+        return new Request(HttpGet.METHOD_NAME, endpoint, params.getParams(), entity);
+    }
+
+    static Request searchScroll(SearchScrollRequest searchScrollRequest) throws IOException {
+        HttpEntity entity = createEntity(searchScrollRequest, REQUEST_BODY_CONTENT_TYPE);
+        return new Request("GET", "/_search/scroll", Collections.emptyMap(), entity);
+    }
+
+    static Request clearScroll(ClearScrollRequest clearScrollRequest) throws IOException {
+        HttpEntity entity = createEntity(clearScrollRequest, REQUEST_BODY_CONTENT_TYPE);
+        return new Request("DELETE", "/_search/scroll", Collections.emptyMap(), entity);
+    }
+
+    private static HttpEntity createEntity(ToXContent toXContent, XContentType xContentType) throws IOException {
+        BytesRef source = XContentHelper.toXContent(toXContent, xContentType, false).toBytesRef();
+        return new ByteArrayEntity(source.bytes, source.offset, source.length, ContentType.create(xContentType.mediaType()));
+    }
+
+    static String endpoint(String[] indices, String[] types, String endpoint) {
+        return endpoint(String.join(",", indices), String.join(",", types), endpoint);
+    }
+
+    /**
+     * Utility method to build a request's endpoint.
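+     * Joins the non-empty parts with slashes, e.g. {@code endpoint("index", "type", "id")} yields {@code "/index/type/id"}.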
+     */
+    static String endpoint(String... parts) {
+        StringJoiner joiner = new StringJoiner("/", "/", "");
+        for (String part : parts) {
+            if (Strings.hasLength(part)) {
+                joiner.add(part);
+            }
+        }
+        return joiner.toString();
+    }
+
+    /**
+     * Utility class to build a request's parameters map and centralize all parameter names.
+     */
+    static class Params {
+        private final Map<String, String> params = new HashMap<>();
+
+        private Params() {
+        }
+
+        Params putParam(String key, String value) {
+            if (Strings.hasLength(value)) {
+                if (params.putIfAbsent(key, value) != null) {
+                    throw new IllegalArgumentException("Request parameter [" + key + "] is already registered");
+                }
+            }
+            return this;
+        }
+
+        Params putParam(String key, TimeValue value) {
+            if (value != null) {
+                return putParam(key, value.getStringRep());
+            }
+            return this;
+        }
+
+        Params withDocAsUpsert(boolean docAsUpsert) {
+            if (docAsUpsert) {
+                return putParam("doc_as_upsert", Boolean.TRUE.toString());
+            }
+            return this;
+        }
+
+        Params withFetchSourceContext(FetchSourceContext fetchSourceContext) {
+            if (fetchSourceContext != null) {
+                if (fetchSourceContext.fetchSource() == false) {
+                    putParam("_source", Boolean.FALSE.toString());
+                }
+                if (fetchSourceContext.includes() != null && fetchSourceContext.includes().length > 0) {
+                    putParam("_source_include", String.join(",", fetchSourceContext.includes()));
+                }
+                if (fetchSourceContext.excludes() != null && fetchSourceContext.excludes().length > 0) {
+                    putParam("_source_exclude", String.join(",", fetchSourceContext.excludes()));
+                }
+            }
+            return this;
+        }
+
+        Params withParent(String parent) {
+            return putParam("parent", parent);
+        }
+
+        Params withPipeline(String pipeline) {
+            return putParam("pipeline", pipeline);
+        }
+
+        Params withPreference(String preference) {
+            return putParam("preference", preference);
+        }
+
+        Params withRealtime(boolean realtime) {
+            if (realtime == false) {
+                return putParam("realtime", Boolean.FALSE.toString());
+            }
+            return this;
+        }
+
+        Params withRefresh(boolean refresh) {
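+            // a boolean refresh flag (e.g. from get requests) maps onto the IMMEDIATE refresh policy, serialized as refresh=true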
+            if (refresh) {
+                return withRefreshPolicy(WriteRequest.RefreshPolicy.IMMEDIATE);
+            }
+            return this;
+        }
+
+        Params withRefreshPolicy(WriteRequest.RefreshPolicy refreshPolicy) {
+            if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) {
+                return putParam("refresh", refreshPolicy.getValue());
+            }
+            return this;
+        }
+
+        Params withRetryOnConflict(int retryOnConflict) {
+            if (retryOnConflict > 0) {
+                return putParam("retry_on_conflict", String.valueOf(retryOnConflict));
+            }
+            return this;
+        }
+
+        Params withRouting(String routing) {
+            return putParam("routing", routing);
+        }
+
+        Params withStoredFields(String[] storedFields) {
+            if (storedFields != null && storedFields.length > 0) {
+                return putParam("stored_fields", String.join(",", storedFields));
+            }
+            return this;
+        }
+
+        Params withTimeout(TimeValue timeout) {
+            return putParam("timeout", timeout);
+        }
+
+        Params withVersion(long version) {
+            if (version != Versions.MATCH_ANY) {
+                return putParam("version", Long.toString(version));
+            }
+            return this;
+        }
+
+        Params withVersionType(VersionType versionType) {
+            if (versionType != VersionType.INTERNAL) {
+                return putParam("version_type", versionType.name().toLowerCase(Locale.ROOT));
+            }
+            return this;
+        }
+
+        Params withWaitForActiveShards(ActiveShardCount activeShardCount) {
+            if (activeShardCount != null && activeShardCount != ActiveShardCount.DEFAULT) {
+                return putParam("wait_for_active_shards", activeShardCount.toString().toLowerCase(Locale.ROOT));
+            }
+            return this;
+        }
+
+        Params withIndicesOptions(IndicesOptions indicesOptions) {
+            putParam("ignore_unavailable", Boolean.toString(indicesOptions.ignoreUnavailable()));
+            putParam("allow_no_indices", Boolean.toString(indicesOptions.allowNoIndices()));
+            String expandWildcards;
+            if (indicesOptions.expandWildcardsOpen() == false && indicesOptions.expandWildcardsClosed() == false) {
+                expandWildcards = "none";
+            } else {
+                StringJoiner joiner  = new StringJoiner(",");
+                if (indicesOptions.expandWildcardsOpen()) {
+                    joiner.add("open");
+                }
+                if (indicesOptions.expandWildcardsClosed()) {
+                    joiner.add("closed");
+                }
+                expandWildcards = joiner.toString();
+            }
+            putParam("expand_wildcards", expandWildcards);
+            return this;
+        }
+
+        Map<String, String> getParams() {
+            return Collections.unmodifiableMap(params);
+        }
+
+        static Params builder() {
+            return new Params();
+        }
+    }
+
+    /**
+     * Ensure that the {@link IndexRequest}'s content type is supported by the Bulk API and that it conforms
+     * to the current {@link BulkRequest}'s content type (if it is known at the time this method is called).
+     *
+     * @return the {@link IndexRequest}'s content type
+     */
+    static XContentType enforceSameContentType(IndexRequest indexRequest, @Nullable XContentType xContentType) {
+        XContentType requestContentType = indexRequest.getContentType();
+        if (requestContentType != XContentType.JSON && requestContentType != XContentType.SMILE) {
+            throw new IllegalArgumentException("Unsupported content-type found for request with content-type [" + requestContentType
+                    + "], only JSON and SMILE are supported");
+        }
+        if (xContentType == null) {
+            return requestContentType;
+        }
+        if (requestContentType != xContentType) {
+            throw new IllegalArgumentException("Mismatching content-type found for request with content-type [" + requestContentType
+                    + "], previous requests have content-type [" + xContentType + "]");
+        }
+        return xContentType;
+    }
+}
diff --git a/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java b/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java
index 58ecc5f9c2d96..a354bdfb7ba5a 100644
--- a/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java
+++ b/client/rest-high-level/src/main/java/org/elasticsearch/client/RestHighLevelClient.java
@@ -19,35 +19,564 @@
 
 package org.elasticsearch.client;
 
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
 import org.apache.http.Header;
+import org.apache.http.HttpEntity;
+import org.elasticsearch.ElasticsearchException;
+import org.elasticsearch.ElasticsearchStatusException;
+import org.elasticsearch.action.ActionListener;
+import org.elasticsearch.action.ActionRequest;
+import org.elasticsearch.action.ActionRequestValidationException;
+import org.elasticsearch.action.bulk.BulkRequest;
+import org.elasticsearch.action.bulk.BulkResponse;
+import org.elasticsearch.action.delete.DeleteRequest;
+import org.elasticsearch.action.delete.DeleteResponse;
+import org.elasticsearch.action.get.GetRequest;
+import org.elasticsearch.action.get.GetResponse;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.action.index.IndexResponse;
+import org.elasticsearch.action.main.MainRequest;
+import org.elasticsearch.action.main.MainResponse;
+import org.elasticsearch.action.search.ClearScrollRequest;
+import org.elasticsearch.action.search.ClearScrollResponse;
+import org.elasticsearch.action.search.SearchRequest;
+import org.elasticsearch.action.search.SearchResponse;
+import org.elasticsearch.action.search.SearchScrollRequest;
+import org.elasticsearch.action.update.UpdateRequest;
+import org.elasticsearch.action.update.UpdateResponse;
+import org.elasticsearch.common.CheckedFunction;
+import org.elasticsearch.common.ParseField;
+import org.elasticsearch.common.xcontent.ContextParser;
+import org.elasticsearch.common.xcontent.NamedXContentRegistry;
+import org.elasticsearch.common.xcontent.XContentParser;
+import org.elasticsearch.common.xcontent.XContentType;
+import org.elasticsearch.join.aggregations.ChildrenAggregationBuilder;
+import org.elasticsearch.join.aggregations.ParsedChildren;
+import org.elasticsearch.rest.BytesRestResponse;
+import org.elasticsearch.rest.RestStatus;
+import org.elasticsearch.search.aggregations.Aggregation;
+import org.elasticsearch.search.aggregations.bucket.adjacency.AdjacencyMatrixAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.adjacency.ParsedAdjacencyMatrix;
+import org.elasticsearch.search.aggregations.bucket.filter.FilterAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.filter.ParsedFilter;
+import org.elasticsearch.search.aggregations.bucket.filters.FiltersAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.filters.ParsedFilters;
+import org.elasticsearch.search.aggregations.bucket.geogrid.GeoGridAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.geogrid.ParsedGeoHashGrid;
+import org.elasticsearch.search.aggregations.bucket.global.GlobalAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.global.ParsedGlobal;
+import org.elasticsearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.histogram.HistogramAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.histogram.ParsedDateHistogram;
+import org.elasticsearch.search.aggregations.bucket.histogram.ParsedHistogram;
+import org.elasticsearch.search.aggregations.bucket.missing.MissingAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.missing.ParsedMissing;
+import org.elasticsearch.search.aggregations.bucket.nested.NestedAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.nested.ParsedNested;
+import org.elasticsearch.search.aggregations.bucket.nested.ParsedReverseNested;
+import org.elasticsearch.search.aggregations.bucket.nested.ReverseNestedAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.range.ParsedRange;
+import org.elasticsearch.search.aggregations.bucket.range.RangeAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.range.date.DateRangeAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.range.date.ParsedDateRange;
+import org.elasticsearch.search.aggregations.bucket.range.geodistance.GeoDistanceAggregationBuilder;
+import org.elasticsearch.search.aggregations.bucket.range.geodistance.ParsedGeoDistance;
+import org.elasticsearch.search.aggregations.bucket.sampler.InternalSampler;
+import org.elasticsearch.search.aggregations.bucket.sampler.ParsedSampler;
+import org.elasticsearch.search.aggregations.bucket.significant.ParsedSignificantLongTerms;
+import org.elasticsearch.search.aggregations.bucket.significant.ParsedSignificantStringTerms;
+import org.elasticsearch.search.aggregations.bucket.significant.SignificantLongTerms;
+import org.elasticsearch.search.aggregations.bucket.significant.SignificantStringTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.DoubleTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.LongTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.ParsedDoubleTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.ParsedLongTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.ParsedStringTerms;
+import org.elasticsearch.search.aggregations.bucket.terms.StringTerms;
+import org.elasticsearch.search.aggregations.matrix.stats.MatrixStatsAggregationBuilder;
+import org.elasticsearch.search.aggregations.matrix.stats.ParsedMatrixStats;
+import org.elasticsearch.search.aggregations.metrics.avg.AvgAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.avg.ParsedAvg;
+import org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.cardinality.ParsedCardinality;
+import org.elasticsearch.search.aggregations.metrics.geobounds.GeoBoundsAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.geobounds.ParsedGeoBounds;
+import org.elasticsearch.search.aggregations.metrics.geocentroid.GeoCentroidAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.geocentroid.ParsedGeoCentroid;
+import org.elasticsearch.search.aggregations.metrics.max.MaxAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.max.ParsedMax;
+import org.elasticsearch.search.aggregations.metrics.min.MinAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.min.ParsedMin;
+import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.InternalHDRPercentileRanks;
+import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.InternalHDRPercentiles;
+import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.ParsedHDRPercentileRanks;
+import org.elasticsearch.search.aggregations.metrics.percentiles.hdr.ParsedHDRPercentiles;
+import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.InternalTDigestPercentileRanks;
+import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.InternalTDigestPercentiles;
+import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.ParsedTDigestPercentileRanks;
+import org.elasticsearch.search.aggregations.metrics.percentiles.tdigest.ParsedTDigestPercentiles;
+import org.elasticsearch.search.aggregations.metrics.scripted.ParsedScriptedMetric;
+import org.elasticsearch.search.aggregations.metrics.scripted.ScriptedMetricAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.stats.ParsedStats;
+import org.elasticsearch.search.aggregations.metrics.stats.StatsAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.stats.extended.ExtendedStatsAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.stats.extended.ParsedExtendedStats;
+import org.elasticsearch.search.aggregations.metrics.sum.ParsedSum;
+import org.elasticsearch.search.aggregations.metrics.sum.SumAggregationBuilder;
+import org.elasticsearch.search.aggregations.metrics.valuecount.ParsedValueCount;
+import org.elasticsearch.search.aggregations.metrics.valuecount.ValueCountAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.InternalSimpleValue;
+import org.elasticsearch.search.aggregations.pipeline.ParsedSimpleValue;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.InternalBucketMetricValue;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.ParsedBucketMetricValue;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile.ParsedPercentilesBucket;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.percentile.PercentilesBucketPipelineAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.ParsedStatsBucket;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.StatsBucketPipelineAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended.ExtendedStatsBucketPipelineAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.bucketmetrics.stats.extended.ParsedExtendedStatsBucket;
+import org.elasticsearch.search.aggregations.pipeline.derivative.DerivativePipelineAggregationBuilder;
+import org.elasticsearch.search.aggregations.pipeline.derivative.ParsedDerivative;
+import org.elasticsearch.search.suggest.Suggest;
+import org.elasticsearch.search.suggest.completion.CompletionSuggestion;
+import org.elasticsearch.search.suggest.phrase.PhraseSuggestion;
+import org.elasticsearch.search.suggest.term.TermSuggestion;
 
 import java.io.IOException;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
 import java.util.Objects;
+import java.util.Set;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+
+import static java.util.Collections.emptySet;
+import static java.util.Collections.singleton;
+import static java.util.stream.Collectors.toList;
 
 /**
  * High level REST client that wraps an instance of the low level {@link RestClient} and allows to build requests and read responses.
  * The provided {@link RestClient} is externally built and closed.
+ * Can be sub-classed to expose additional client methods that make use of endpoints added to Elasticsearch through plugins, or to
+ * add support for custom response sections, again added to Elasticsearch through plugins.
  */
-public final class RestHighLevelClient {
-
-    private static final Log logger = LogFactory.getLog(RestHighLevelClient.class);
+public class RestHighLevelClient {
 
     private final RestClient client;
+    private final NamedXContentRegistry registry;
+
+    /**
+     * Creates a {@link RestHighLevelClient} given the low level {@link RestClient} that it should use to perform requests.
+     */
+    public RestHighLevelClient(RestClient restClient) {
+        this(restClient, Collections.emptyList());
+    }
+
+    /**
+     * Creates a {@link RestHighLevelClient} given the low level {@link RestClient} that it should use to perform requests and
+     * a list of entries that allow parsing of custom response sections added to Elasticsearch through plugins.
+     */
+    protected RestHighLevelClient(RestClient restClient, List<NamedXContentRegistry.Entry> namedXContentEntries) {
+        this.client = Objects.requireNonNull(restClient);
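+        // merge the default aggregation and suggestion parsers with any entries contributed for plugin-specific response sections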
+        this.registry = new NamedXContentRegistry(Stream.of(getDefaultNamedXContents().stream(), namedXContentEntries.stream())
+                .flatMap(Function.identity()).collect(toList()));
+    }
+
+    /**
+     * Executes a bulk request using the Bulk API
+     *
+     * See Bulk API on elastic.co
+     */
+    public BulkResponse bulk(BulkRequest bulkRequest, Header... headers) throws IOException {
+        return performRequestAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, emptySet(), headers);
+    }
+
+    /**
+     * Asynchronously executes a bulk request using the Bulk API
+     *
+     * See Bulk API on elastic.co
+     */
+    public void bulkAsync(BulkRequest bulkRequest, ActionListener<BulkResponse> listener, Header... headers) {
+        performRequestAsyncAndParseEntity(bulkRequest, Request::bulk, BulkResponse::fromXContent, listener, emptySet(), headers);
+    }
+
+    /**
+     * Pings the remote Elasticsearch cluster and returns true if the ping succeeded, false otherwise
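+     * The ping is sent as a HEAD request to the root endpoint and is considered successful when a 200 status code is returned.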
+     */
+    public boolean ping(Header... headers) throws IOException {
+        return performRequest(new MainRequest(), (request) -> Request.ping(), RestHighLevelClient::convertExistsResponse,
+                emptySet(), headers);
+    }
+
+    /**
+     * Get the cluster info that is otherwise provided when sending an HTTP request to port 9200
+     */
+    public MainResponse info(Header... headers) throws IOException {
+        return performRequestAndParseEntity(new MainRequest(), (request) -> Request.info(), MainResponse::fromXContent, emptySet(),
+                headers);
+    }
+
+    /**
+     * Retrieves a document by id using the Get API
+     *
+     * See Get API on elastic.co
+     */
+    public GetResponse get(GetRequest getRequest, Header... headers) throws IOException {
+        return performRequestAndParseEntity(getRequest, Request::get, GetResponse::fromXContent, singleton(404), headers);
+    }
+
+    /**
+     * Asynchronously retrieves a document by id using the Get API
+     *
+     * See Get API on elastic.co
+     */
+    public void getAsync(GetRequest getRequest, ActionListener<GetResponse> listener, Header... headers) {
+        performRequestAsyncAndParseEntity(getRequest, Request::get, GetResponse::fromXContent, listener, singleton(404), headers);
+    }
+
+    /**
+     * Checks for the existence of a document. Returns true if it exists, false otherwise
+     *
+     * See Get API on elastic.co
+     */
+    public boolean exists(GetRequest getRequest, Header... headers) throws IOException {
+        return performRequest(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, emptySet(), headers);
+    }
+
+    /**
+     * Asynchronously checks for the existence of a document. Returns true if it exists, false otherwise
+     *
+     * See Get API on elastic.co
+     */
+    public void existsAsync(GetRequest getRequest, ActionListener<Boolean> listener, Header... headers) {
+        performRequestAsync(getRequest, Request::exists, RestHighLevelClient::convertExistsResponse, listener, emptySet(), headers);
+    }
+
+    /**
+     * Index a document using the Index API
+     *
+     * See Index API on elastic.co
+     */
+    public IndexResponse index(IndexRequest indexRequest, Header... headers) throws IOException {
+        return performRequestAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, emptySet(), headers);
+    }
+
+    /**
+     * Asynchronously index a document using the Index API
+     *
+     * See Index API on elastic.co
+     */
+    public void indexAsync(IndexRequest indexRequest, ActionListener<IndexResponse> listener, Header... headers) {
+        performRequestAsyncAndParseEntity(indexRequest, Request::index, IndexResponse::fromXContent, listener, emptySet(), headers);
+    }
+
+    /**
+     * Updates a document using the Update API
+     * 

+ * See Update API on elastic.co + */ + public UpdateResponse update(UpdateRequest updateRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, emptySet(), headers); + } + + /** + * Asynchronously updates a document using the Update API + *

+ * See Update API on elastic.co + */ + public void updateAsync(UpdateRequest updateRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(updateRequest, Request::update, UpdateResponse::fromXContent, listener, emptySet(), headers); + } + + /** + * Deletes a document by id using the Delete api + * + * See Delete API on elastic.co + */ + public DeleteResponse delete(DeleteRequest deleteRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, Collections.singleton(404), + headers); + } + + /** + * Asynchronously deletes a document by id using the Delete api + * + * See Delete API on elastic.co + */ + public void deleteAsync(DeleteRequest deleteRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(deleteRequest, Request::delete, DeleteResponse::fromXContent, listener, + Collections.singleton(404), headers); + } + + /** + * Executes a search using the Search api + * + * See Search API on elastic.co + */ + public SearchResponse search(SearchRequest searchRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, emptySet(), headers); + } - public RestHighLevelClient(RestClient client) { - this.client = Objects.requireNonNull(client); + /** + * Asynchronously executes a search using the Search api + * + * See Search API on elastic.co + */ + public void searchAsync(SearchRequest searchRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(searchRequest, Request::search, SearchResponse::fromXContent, listener, emptySet(), headers); } - public boolean ping(Header... headers) { + /** + * Executes a search using the Search Scroll api + * + * See Search Scroll + * API on elastic.co + */ + public SearchResponse searchScroll(SearchScrollRequest searchScrollRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent, emptySet(), headers); + } + + /** + * Asynchronously executes a search using the Search Scroll api + * + * See Search Scroll + * API on elastic.co + */ + public void searchScrollAsync(SearchScrollRequest searchScrollRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(searchScrollRequest, Request::searchScroll, SearchResponse::fromXContent, + listener, emptySet(), headers); + } + + /** + * Clears one or more scroll ids using the Clear Scroll api + * + * See + * Clear Scroll API on elastic.co + */ + public ClearScrollResponse clearScroll(ClearScrollRequest clearScrollRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent, + emptySet(), headers); + } + + /** + * Asynchronously clears one or more scroll ids using the Clear Scroll api + * + * See + * Clear Scroll API on elastic.co + */ + public void clearScrollAsync(ClearScrollRequest clearScrollRequest, ActionListener listener, Header... headers) { + performRequestAsyncAndParseEntity(clearScrollRequest, Request::clearScroll, ClearScrollResponse::fromXContent, + listener, emptySet(), headers); + } + + protected Resp performRequestAndParseEntity(Req request, + CheckedFunction requestConverter, + CheckedFunction entityParser, + Set ignores, Header... 
headers) throws IOException { + return performRequest(request, requestConverter, (response) -> parseEntity(response.getEntity(), entityParser), ignores, headers); + } + + protected Resp performRequest(Req request, + CheckedFunction requestConverter, + CheckedFunction responseConverter, + Set ignores, Header... headers) throws IOException { + ActionRequestValidationException validationException = request.validate(); + if (validationException != null) { + throw validationException; + } + Request req = requestConverter.apply(request); + Response response; + try { + response = client.performRequest(req.method, req.endpoint, req.params, req.entity, headers); + } catch (ResponseException e) { + if (ignores.contains(e.getResponse().getStatusLine().getStatusCode())) { + try { + return responseConverter.apply(e.getResponse()); + } catch (Exception innerException) { + throw parseResponseException(e); + } + } + throw parseResponseException(e); + } + + try { + return responseConverter.apply(response); + } catch(Exception e) { + throw new IOException("Unable to parse response body for " + response, e); + } + } + + protected void performRequestAsyncAndParseEntity(Req request, + CheckedFunction requestConverter, + CheckedFunction entityParser, + ActionListener listener, Set ignores, Header... headers) { + performRequestAsync(request, requestConverter, (response) -> parseEntity(response.getEntity(), entityParser), + listener, ignores, headers); + } + + protected void performRequestAsync(Req request, + CheckedFunction requestConverter, + CheckedFunction responseConverter, + ActionListener listener, Set ignores, Header... headers) { + ActionRequestValidationException validationException = request.validate(); + if (validationException != null) { + listener.onFailure(validationException); + return; + } + Request req; try { - client.performRequest("HEAD", "/", headers); - return true; - } catch(IOException exception) { - return false; + req = requestConverter.apply(request); + } catch (Exception e) { + listener.onFailure(e); + return; + } + + ResponseListener responseListener = wrapResponseListener(responseConverter, listener, ignores); + client.performRequestAsync(req.method, req.endpoint, req.params, req.entity, responseListener, headers); + } + + ResponseListener wrapResponseListener(CheckedFunction responseConverter, + ActionListener actionListener, Set ignores) { + return new ResponseListener() { + @Override + public void onSuccess(Response response) { + try { + actionListener.onResponse(responseConverter.apply(response)); + } catch(Exception e) { + IOException ioe = new IOException("Unable to parse response body for " + response, e); + onFailure(ioe); + } + } + + @Override + public void onFailure(Exception exception) { + if (exception instanceof ResponseException) { + ResponseException responseException = (ResponseException) exception; + Response response = responseException.getResponse(); + if (ignores.contains(response.getStatusLine().getStatusCode())) { + try { + actionListener.onResponse(responseConverter.apply(response)); + } catch (Exception innerException) { + //the exception is ignored as we now try to parse the response as an error. + //this covers cases like get where 404 can either be a valid document not found response, + //or an error for which parsing is completely different. We try to consider the 404 response as a valid one + //first. If parsing of the response breaks, we fall back to parsing it as an error. 
+ actionListener.onFailure(parseResponseException(responseException)); + } + } else { + actionListener.onFailure(parseResponseException(responseException)); + } + } else { + actionListener.onFailure(exception); + } + } + }; + } + + /** + * Converts a {@link ResponseException} obtained from the low level REST client into an {@link ElasticsearchException}. + * If a response body was returned, tries to parse it as an error returned from Elasticsearch. + * If no response body was returned or anything goes wrong while parsing the error, returns a new {@link ElasticsearchStatusException} + * that wraps the original {@link ResponseException}. The potential exception obtained while parsing is added to the returned + * exception as a suppressed exception. This method is guaranteed to not throw any exception eventually thrown while parsing. + */ + ElasticsearchStatusException parseResponseException(ResponseException responseException) { + Response response = responseException.getResponse(); + HttpEntity entity = response.getEntity(); + ElasticsearchStatusException elasticsearchException; + if (entity == null) { + elasticsearchException = new ElasticsearchStatusException( + responseException.getMessage(), RestStatus.fromCode(response.getStatusLine().getStatusCode()), responseException); + } else { + try { + elasticsearchException = parseEntity(entity, BytesRestResponse::errorFromXContent); + elasticsearchException.addSuppressed(responseException); + } catch (Exception e) { + RestStatus restStatus = RestStatus.fromCode(response.getStatusLine().getStatusCode()); + elasticsearchException = new ElasticsearchStatusException("Unable to parse response body", restStatus, responseException); + elasticsearchException.addSuppressed(e); + } } + return elasticsearchException; } + Resp parseEntity( + HttpEntity entity, CheckedFunction entityParser) throws IOException { + if (entity == null) { + throw new IllegalStateException("Response body expected but not returned"); + } + if (entity.getContentType() == null) { + throw new IllegalStateException("Elasticsearch didn't return the [Content-Type] header, unable to parse response body"); + } + XContentType xContentType = XContentType.fromMediaTypeOrFormat(entity.getContentType().getValue()); + if (xContentType == null) { + throw new IllegalStateException("Unsupported Content-Type: " + entity.getContentType().getValue()); + } + try (XContentParser parser = xContentType.xContent().createParser(registry, entity.getContent())) { + return entityParser.apply(parser); + } + } + + static boolean convertExistsResponse(Response response) { + return response.getStatusLine().getStatusCode() == 200; + } + static List getDefaultNamedXContents() { + Map> map = new HashMap<>(); + map.put(CardinalityAggregationBuilder.NAME, (p, c) -> ParsedCardinality.fromXContent(p, (String) c)); + map.put(InternalHDRPercentiles.NAME, (p, c) -> ParsedHDRPercentiles.fromXContent(p, (String) c)); + map.put(InternalHDRPercentileRanks.NAME, (p, c) -> ParsedHDRPercentileRanks.fromXContent(p, (String) c)); + map.put(InternalTDigestPercentiles.NAME, (p, c) -> ParsedTDigestPercentiles.fromXContent(p, (String) c)); + map.put(InternalTDigestPercentileRanks.NAME, (p, c) -> ParsedTDigestPercentileRanks.fromXContent(p, (String) c)); + map.put(PercentilesBucketPipelineAggregationBuilder.NAME, (p, c) -> ParsedPercentilesBucket.fromXContent(p, (String) c)); + map.put(MinAggregationBuilder.NAME, (p, c) -> ParsedMin.fromXContent(p, (String) c)); + map.put(MaxAggregationBuilder.NAME, (p, c) -> ParsedMax.fromXContent(p, 
(String) c)); + map.put(SumAggregationBuilder.NAME, (p, c) -> ParsedSum.fromXContent(p, (String) c)); + map.put(AvgAggregationBuilder.NAME, (p, c) -> ParsedAvg.fromXContent(p, (String) c)); + map.put(ValueCountAggregationBuilder.NAME, (p, c) -> ParsedValueCount.fromXContent(p, (String) c)); + map.put(InternalSimpleValue.NAME, (p, c) -> ParsedSimpleValue.fromXContent(p, (String) c)); + map.put(DerivativePipelineAggregationBuilder.NAME, (p, c) -> ParsedDerivative.fromXContent(p, (String) c)); + map.put(InternalBucketMetricValue.NAME, (p, c) -> ParsedBucketMetricValue.fromXContent(p, (String) c)); + map.put(StatsAggregationBuilder.NAME, (p, c) -> ParsedStats.fromXContent(p, (String) c)); + map.put(StatsBucketPipelineAggregationBuilder.NAME, (p, c) -> ParsedStatsBucket.fromXContent(p, (String) c)); + map.put(ExtendedStatsAggregationBuilder.NAME, (p, c) -> ParsedExtendedStats.fromXContent(p, (String) c)); + map.put(ExtendedStatsBucketPipelineAggregationBuilder.NAME, + (p, c) -> ParsedExtendedStatsBucket.fromXContent(p, (String) c)); + map.put(GeoBoundsAggregationBuilder.NAME, (p, c) -> ParsedGeoBounds.fromXContent(p, (String) c)); + map.put(GeoCentroidAggregationBuilder.NAME, (p, c) -> ParsedGeoCentroid.fromXContent(p, (String) c)); + map.put(HistogramAggregationBuilder.NAME, (p, c) -> ParsedHistogram.fromXContent(p, (String) c)); + map.put(DateHistogramAggregationBuilder.NAME, (p, c) -> ParsedDateHistogram.fromXContent(p, (String) c)); + map.put(StringTerms.NAME, (p, c) -> ParsedStringTerms.fromXContent(p, (String) c)); + map.put(LongTerms.NAME, (p, c) -> ParsedLongTerms.fromXContent(p, (String) c)); + map.put(DoubleTerms.NAME, (p, c) -> ParsedDoubleTerms.fromXContent(p, (String) c)); + map.put(MissingAggregationBuilder.NAME, (p, c) -> ParsedMissing.fromXContent(p, (String) c)); + map.put(NestedAggregationBuilder.NAME, (p, c) -> ParsedNested.fromXContent(p, (String) c)); + map.put(ReverseNestedAggregationBuilder.NAME, (p, c) -> ParsedReverseNested.fromXContent(p, (String) c)); + map.put(GlobalAggregationBuilder.NAME, (p, c) -> ParsedGlobal.fromXContent(p, (String) c)); + map.put(FilterAggregationBuilder.NAME, (p, c) -> ParsedFilter.fromXContent(p, (String) c)); + map.put(InternalSampler.PARSER_NAME, (p, c) -> ParsedSampler.fromXContent(p, (String) c)); + map.put(GeoGridAggregationBuilder.NAME, (p, c) -> ParsedGeoHashGrid.fromXContent(p, (String) c)); + map.put(RangeAggregationBuilder.NAME, (p, c) -> ParsedRange.fromXContent(p, (String) c)); + map.put(DateRangeAggregationBuilder.NAME, (p, c) -> ParsedDateRange.fromXContent(p, (String) c)); + map.put(GeoDistanceAggregationBuilder.NAME, (p, c) -> ParsedGeoDistance.fromXContent(p, (String) c)); + map.put(FiltersAggregationBuilder.NAME, (p, c) -> ParsedFilters.fromXContent(p, (String) c)); + map.put(AdjacencyMatrixAggregationBuilder.NAME, (p, c) -> ParsedAdjacencyMatrix.fromXContent(p, (String) c)); + map.put(SignificantLongTerms.NAME, (p, c) -> ParsedSignificantLongTerms.fromXContent(p, (String) c)); + map.put(SignificantStringTerms.NAME, (p, c) -> ParsedSignificantStringTerms.fromXContent(p, (String) c)); + map.put(ScriptedMetricAggregationBuilder.NAME, (p, c) -> ParsedScriptedMetric.fromXContent(p, (String) c)); + map.put(ChildrenAggregationBuilder.NAME, (p, c) -> ParsedChildren.fromXContent(p, (String) c)); + map.put(MatrixStatsAggregationBuilder.NAME, (p, c) -> ParsedMatrixStats.fromXContent(p, (String) c)); + List entries = map.entrySet().stream() + .map(entry -> new NamedXContentRegistry.Entry(Aggregation.class, new 
ParseField(entry.getKey()), entry.getValue())) + .collect(Collectors.toList()); + entries.add(new NamedXContentRegistry.Entry(Suggest.Suggestion.class, new ParseField(TermSuggestion.NAME), + (parser, context) -> TermSuggestion.fromXContent(parser, (String)context))); + entries.add(new NamedXContentRegistry.Entry(Suggest.Suggestion.class, new ParseField(PhraseSuggestion.NAME), + (parser, context) -> PhraseSuggestion.fromXContent(parser, (String)context))); + entries.add(new NamedXContentRegistry.Entry(Suggest.Suggestion.class, new ParseField(CompletionSuggestion.NAME), + (parser, context) -> CompletionSuggestion.fromXContent(parser, (String)context))); + return entries; + } } diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java new file mode 100644 index 0000000000000..b078a983357fc --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/CrudIT.java @@ -0,0 +1,705 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ElasticsearchStatusException; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.DocWriteResponse; +import org.elasticsearch.action.bulk.BulkItemResponse; +import org.elasticsearch.action.bulk.BulkProcessor; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.bulk.BulkResponse; +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.delete.DeleteResponse; +import org.elasticsearch.action.get.GetRequest; +import org.elasticsearch.action.get.GetResponse; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.index.IndexResponse; +import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.action.update.UpdateResponse; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.ByteSizeUnit; +import org.elasticsearch.common.unit.ByteSizeValue; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.get.GetResult; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptType; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import org.elasticsearch.threadpool.ThreadPool; + +import java.io.IOException; +import java.util.Collections; +import java.util.Map; +import java.util.concurrent.atomic.AtomicReference; + +import static java.util.Collections.singletonMap; + +public class CrudIT extends ESRestHighLevelClientTestCase { + + public void testDelete() throws IOException { + { + // Testing deletion + String docId = "id"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar"))); + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId); + if (randomBoolean()) { + deleteRequest.version(1L); + } + DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + assertEquals("index", deleteResponse.getIndex()); + assertEquals("type", deleteResponse.getType()); + assertEquals(docId, deleteResponse.getId()); + assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult()); + } + { + // Testing non existing document + String docId = "does_not_exist"; + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId); + DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + assertEquals("index", deleteResponse.getIndex()); + assertEquals("type", deleteResponse.getType()); + assertEquals(docId, deleteResponse.getId()); + assertEquals(DocWriteResponse.Result.NOT_FOUND, deleteResponse.getResult()); + } + { + // Testing version conflict + String docId = "version_conflict"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar"))); + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).version(2); + ElasticsearchException exception = expectThrows(ElasticsearchException.class, + () -> execute(deleteRequest, highLevelClient()::delete, 
highLevelClient()::deleteAsync)); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][" + docId + "]: " + + "version conflict, current version [1] is different than the one provided [2]]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + { + // Testing version type + String docId = "version_type"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar")) + .versionType(VersionType.EXTERNAL).version(12)); + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).versionType(VersionType.EXTERNAL).version(13); + DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + assertEquals("index", deleteResponse.getIndex()); + assertEquals("type", deleteResponse.getType()); + assertEquals(docId, deleteResponse.getId()); + assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult()); + } + { + // Testing version type with a wrong version + String docId = "wrong_version"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar")) + .versionType(VersionType.EXTERNAL).version(12)); + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).versionType(VersionType.EXTERNAL).version(10); + execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + }); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][" + + docId + "]: version conflict, current version [12] is higher or equal to the one provided [10]]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + { + // Testing routing + String docId = "routing"; + highLevelClient().index(new IndexRequest("index", "type", docId).source(Collections.singletonMap("foo", "bar")).routing("foo")); + DeleteRequest deleteRequest = new DeleteRequest("index", "type", docId).routing("foo"); + DeleteResponse deleteResponse = execute(deleteRequest, highLevelClient()::delete, highLevelClient()::deleteAsync); + assertEquals("index", deleteResponse.getIndex()); + assertEquals("type", deleteResponse.getType()); + assertEquals(docId, deleteResponse.getId()); + assertEquals(DocWriteResponse.Result.DELETED, deleteResponse.getResult()); + } + } + + public void testExists() throws IOException { + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync)); + } + String document = "{\"field1\":\"value1\",\"field2\":\"value2\"}"; + StringEntity stringEntity = new StringEntity(document, ContentType.APPLICATION_JSON); + Response response = client().performRequest("PUT", "/index/type/id", Collections.singletonMap("refresh", "wait_for"), stringEntity); + assertEquals(201, response.getStatusLine().getStatusCode()); + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + assertTrue(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync)); + } + { + GetRequest getRequest = new GetRequest("index", "type", "does_not_exist"); + assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync)); + } + { + 
GetRequest getRequest = new GetRequest("index", "type", "does_not_exist").version(1); + assertFalse(execute(getRequest, highLevelClient()::exists, highLevelClient()::existsAsync)); + } + } + + public void testGet() throws IOException { + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + ElasticsearchException exception = expectThrows(ElasticsearchException.class, + () -> execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync)); + assertEquals(RestStatus.NOT_FOUND, exception.status()); + assertEquals("Elasticsearch exception [type=index_not_found_exception, reason=no such index]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + + String document = "{\"field1\":\"value1\",\"field2\":\"value2\"}"; + StringEntity stringEntity = new StringEntity(document, ContentType.APPLICATION_JSON); + Response response = client().performRequest("PUT", "/index/type/id", Collections.singletonMap("refresh", "wait_for"), stringEntity); + assertEquals(201, response.getStatusLine().getStatusCode()); + { + GetRequest getRequest = new GetRequest("index", "type", "id").version(2); + ElasticsearchException exception = expectThrows(ElasticsearchException.class, + () -> execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync)); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, " + "reason=[type][id]: " + + "version conflict, current version [1] is different than the one provided [2]]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + if (randomBoolean()) { + getRequest.version(1L); + } + GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync); + assertEquals("index", getResponse.getIndex()); + assertEquals("type", getResponse.getType()); + assertEquals("id", getResponse.getId()); + assertTrue(getResponse.isExists()); + assertFalse(getResponse.isSourceEmpty()); + assertEquals(1L, getResponse.getVersion()); + assertEquals(document, getResponse.getSourceAsString()); + } + { + GetRequest getRequest = new GetRequest("index", "type", "does_not_exist"); + GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync); + assertEquals("index", getResponse.getIndex()); + assertEquals("type", getResponse.getType()); + assertEquals("does_not_exist", getResponse.getId()); + assertFalse(getResponse.isExists()); + assertEquals(-1, getResponse.getVersion()); + assertTrue(getResponse.isSourceEmpty()); + assertNull(getResponse.getSourceAsString()); + } + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + getRequest.fetchSourceContext(new FetchSourceContext(false, Strings.EMPTY_ARRAY, Strings.EMPTY_ARRAY)); + GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync); + assertEquals("index", getResponse.getIndex()); + assertEquals("type", getResponse.getType()); + assertEquals("id", getResponse.getId()); + assertTrue(getResponse.isExists()); + assertTrue(getResponse.isSourceEmpty()); + assertEquals(1L, getResponse.getVersion()); + assertNull(getResponse.getSourceAsString()); + } + { + GetRequest getRequest = new GetRequest("index", "type", "id"); + if (randomBoolean()) { + getRequest.fetchSourceContext(new FetchSourceContext(true, new String[]{"field1"}, Strings.EMPTY_ARRAY)); + } else { + 
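+                // Excluding field2 is equivalent to including only field1, so both branches satisfy the assertions below.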
getRequest.fetchSourceContext(new FetchSourceContext(true, Strings.EMPTY_ARRAY, new String[]{"field2"})); + } + GetResponse getResponse = execute(getRequest, highLevelClient()::get, highLevelClient()::getAsync); + assertEquals("index", getResponse.getIndex()); + assertEquals("type", getResponse.getType()); + assertEquals("id", getResponse.getId()); + assertTrue(getResponse.isExists()); + assertFalse(getResponse.isSourceEmpty()); + assertEquals(1L, getResponse.getVersion()); + Map sourceAsMap = getResponse.getSourceAsMap(); + assertEquals(1, sourceAsMap.size()); + assertEquals("value1", sourceAsMap.get("field1")); + } + } + + public void testIndex() throws IOException { + final XContentType xContentType = randomFrom(XContentType.values()); + { + IndexRequest indexRequest = new IndexRequest("index", "type"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("test", "test").endObject()); + + IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals(DocWriteResponse.Result.CREATED, indexResponse.getResult()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertTrue(Strings.hasLength(indexResponse.getId())); + assertEquals(1L, indexResponse.getVersion()); + assertNotNull(indexResponse.getShardId()); + assertEquals(-1, indexResponse.getShardId().getId()); + assertEquals("index", indexResponse.getShardId().getIndexName()); + assertEquals("index", indexResponse.getShardId().getIndex().getName()); + assertEquals("_na_", indexResponse.getShardId().getIndex().getUUID()); + assertNotNull(indexResponse.getShardInfo()); + assertEquals(0, indexResponse.getShardInfo().getFailed()); + assertTrue(indexResponse.getShardInfo().getSuccessful() > 0); + assertTrue(indexResponse.getShardInfo().getTotal() > 0); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "id"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("version", 1).endObject()); + + IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertEquals("id", indexResponse.getId()); + assertEquals(1L, indexResponse.getVersion()); + + indexRequest = new IndexRequest("index", "type", "id"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("version", 2).endObject()); + + indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.OK, indexResponse.status()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertEquals("id", indexResponse.getId()); + assertEquals(2L, indexResponse.getVersion()); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + IndexRequest wrongRequest = new IndexRequest("index", "type", "id"); + wrongRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + wrongRequest.version(5L); + + execute(wrongRequest, highLevelClient()::index, highLevelClient()::indexAsync); + }); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception 
[type=version_conflict_engine_exception, reason=[type][id]: " + + "version conflict, current version [2] is different than the one provided [5]]", exception.getMessage()); + assertEquals("index", exception.getMetadata("es.index").get(0)); + } + { + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + IndexRequest indexRequest = new IndexRequest("index", "type", "missing_parent"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + indexRequest.parent("missing"); + + execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + }); + + assertEquals(RestStatus.BAD_REQUEST, exception.status()); + assertEquals("Elasticsearch exception [type=illegal_argument_exception, " + + "reason=can't specify parent if no parent field has been configured]", exception.getMessage()); + } + { + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + IndexRequest indexRequest = new IndexRequest("index", "type", "missing_pipeline"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + indexRequest.setPipeline("missing"); + + execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + }); + + assertEquals(RestStatus.BAD_REQUEST, exception.status()); + assertEquals("Elasticsearch exception [type=illegal_argument_exception, " + + "reason=pipeline with id [missing] does not exist]", exception.getMessage()); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "external_version_type"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + indexRequest.version(12L); + indexRequest.versionType(VersionType.EXTERNAL); + + IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertEquals("external_version_type", indexResponse.getId()); + assertEquals(12L, indexResponse.getVersion()); + } + { + final IndexRequest indexRequest = new IndexRequest("index", "type", "with_create_op_type"); + indexRequest.source(XContentBuilder.builder(xContentType.xContent()).startObject().field("field", "test").endObject()); + indexRequest.opType(DocWriteRequest.OpType.CREATE); + + IndexResponse indexResponse = execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals("index", indexResponse.getIndex()); + assertEquals("type", indexResponse.getType()); + assertEquals("with_create_op_type", indexResponse.getId()); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + execute(indexRequest, highLevelClient()::index, highLevelClient()::indexAsync); + }); + + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][with_create_op_type]: " + + "version conflict, document already exists (current version [1])]", exception.getMessage()); + } + } + + public void testUpdate() throws IOException { + { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "does_not_exist"); + updateRequest.doc(singletonMap("field", "value"), 
randomFrom(XContentType.values())); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> + execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync)); + assertEquals(RestStatus.NOT_FOUND, exception.status()); + assertEquals("Elasticsearch exception [type=document_missing_exception, reason=[type][does_not_exist]: document missing]", + exception.getMessage()); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "id"); + indexRequest.source(singletonMap("field", "value")); + IndexResponse indexResponse = highLevelClient().index(indexRequest); + assertEquals(RestStatus.CREATED, indexResponse.status()); + + UpdateRequest updateRequest = new UpdateRequest("index", "type", "id"); + updateRequest.doc(singletonMap("field", "updated"), randomFrom(XContentType.values())); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(indexResponse.getVersion() + 1, updateResponse.getVersion()); + + UpdateRequest updateRequestConflict = new UpdateRequest("index", "type", "id"); + updateRequestConflict.doc(singletonMap("field", "with_version_conflict"), randomFrom(XContentType.values())); + updateRequestConflict.version(indexResponse.getVersion()); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> + execute(updateRequestConflict, highLevelClient()::update, highLevelClient()::updateAsync)); + assertEquals(RestStatus.CONFLICT, exception.status()); + assertEquals("Elasticsearch exception [type=version_conflict_engine_exception, reason=[type][id]: version conflict, " + + "current version [2] is different than the one provided [1]]", exception.getMessage()); + } + { + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "id"); + updateRequest.doc(singletonMap("field", "updated"), randomFrom(XContentType.values())); + if (randomBoolean()) { + updateRequest.parent("missing"); + } else { + updateRequest.routing("missing"); + } + execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + }); + + assertEquals(RestStatus.NOT_FOUND, exception.status()); + assertEquals("Elasticsearch exception [type=document_missing_exception, reason=[type][id]: document missing]", + exception.getMessage()); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "with_script"); + indexRequest.source(singletonMap("counter", 12)); + IndexResponse indexResponse = highLevelClient().index(indexRequest); + assertEquals(RestStatus.CREATED, indexResponse.status()); + + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_script"); + Script script = new Script(ScriptType.INLINE, "painless", "ctx._source.counter += params.count", singletonMap("count", 8)); + updateRequest.script(script); + updateRequest.fetchSource(true); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(DocWriteResponse.Result.UPDATED, updateResponse.getResult()); + assertEquals(2L, updateResponse.getVersion()); + assertEquals(20, updateResponse.getGetResult().sourceAsMap().get("counter")); + + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "with_doc"); + 
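+            // Seed the document with external version 12 so the update below is expected to bump it to 13.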
indexRequest.source("field_1", "one", "field_3", "three"); + indexRequest.version(12L); + indexRequest.versionType(VersionType.EXTERNAL); + IndexResponse indexResponse = highLevelClient().index(indexRequest); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals(12L, indexResponse.getVersion()); + + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_doc"); + updateRequest.doc(singletonMap("field_2", "two"), randomFrom(XContentType.values())); + updateRequest.fetchSource("field_*", "field_3"); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(DocWriteResponse.Result.UPDATED, updateResponse.getResult()); + assertEquals(13L, updateResponse.getVersion()); + GetResult getResult = updateResponse.getGetResult(); + assertEquals(13L, updateResponse.getVersion()); + Map sourceAsMap = getResult.sourceAsMap(); + assertEquals("one", sourceAsMap.get("field_1")); + assertEquals("two", sourceAsMap.get("field_2")); + assertFalse(sourceAsMap.containsKey("field_3")); + } + { + IndexRequest indexRequest = new IndexRequest("index", "type", "noop"); + indexRequest.source("field", "value"); + IndexResponse indexResponse = highLevelClient().index(indexRequest); + assertEquals(RestStatus.CREATED, indexResponse.status()); + assertEquals(1L, indexResponse.getVersion()); + + UpdateRequest updateRequest = new UpdateRequest("index", "type", "noop"); + updateRequest.doc(singletonMap("field", "value"), randomFrom(XContentType.values())); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(DocWriteResponse.Result.NOOP, updateResponse.getResult()); + assertEquals(1L, updateResponse.getVersion()); + + updateRequest.detectNoop(false); + + updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.OK, updateResponse.status()); + assertEquals(DocWriteResponse.Result.UPDATED, updateResponse.getResult()); + assertEquals(2L, updateResponse.getVersion()); + } + { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_upsert"); + updateRequest.upsert(singletonMap("doc_status", "created")); + updateRequest.doc(singletonMap("doc_status", "updated")); + updateRequest.fetchSource(true); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.CREATED, updateResponse.status()); + assertEquals("index", updateResponse.getIndex()); + assertEquals("type", updateResponse.getType()); + assertEquals("with_upsert", updateResponse.getId()); + GetResult getResult = updateResponse.getGetResult(); + assertEquals(1L, updateResponse.getVersion()); + assertEquals("created", getResult.sourceAsMap().get("doc_status")); + } + { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_doc_as_upsert"); + updateRequest.doc(singletonMap("field", "initialized")); + updateRequest.fetchSource(true); + updateRequest.docAsUpsert(true); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.CREATED, updateResponse.status()); + assertEquals("index", updateResponse.getIndex()); + assertEquals("type", updateResponse.getType()); + assertEquals("with_doc_as_upsert", 
updateResponse.getId()); + GetResult getResult = updateResponse.getGetResult(); + assertEquals(1L, updateResponse.getVersion()); + assertEquals("initialized", getResult.sourceAsMap().get("field")); + } + { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "with_scripted_upsert"); + updateRequest.fetchSource(true); + updateRequest.script(new Script(ScriptType.INLINE, "painless", "ctx._source.level = params.test", singletonMap("test", "C"))); + updateRequest.scriptedUpsert(true); + updateRequest.upsert(singletonMap("level", "A")); + + UpdateResponse updateResponse = execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + assertEquals(RestStatus.CREATED, updateResponse.status()); + assertEquals("index", updateResponse.getIndex()); + assertEquals("type", updateResponse.getType()); + assertEquals("with_scripted_upsert", updateResponse.getId()); + + GetResult getResult = updateResponse.getGetResult(); + assertEquals(1L, updateResponse.getVersion()); + assertEquals("C", getResult.sourceAsMap().get("level")); + } + { + IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { + UpdateRequest updateRequest = new UpdateRequest("index", "type", "id"); + updateRequest.doc(new IndexRequest().source(Collections.singletonMap("field", "doc"), XContentType.JSON)); + updateRequest.upsert(new IndexRequest().source(Collections.singletonMap("field", "upsert"), XContentType.YAML)); + execute(updateRequest, highLevelClient()::update, highLevelClient()::updateAsync); + }); + assertEquals("Update request cannot have different content types for doc [JSON] and upsert [YAML] documents", + exception.getMessage()); + } + } + + public void testBulk() throws IOException { + int nbItems = randomIntBetween(10, 100); + boolean[] errors = new boolean[nbItems]; + + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + + BulkRequest bulkRequest = new BulkRequest(); + for (int i = 0; i < nbItems; i++) { + String id = String.valueOf(i); + boolean erroneous = randomBoolean(); + errors[i] = erroneous; + + DocWriteRequest.OpType opType = randomFrom(DocWriteRequest.OpType.values()); + if (opType == DocWriteRequest.OpType.DELETE) { + if (erroneous == false) { + assertEquals(RestStatus.CREATED, + highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); + } + DeleteRequest deleteRequest = new DeleteRequest("index", "test", id); + bulkRequest.add(deleteRequest); + + } else { + BytesReference source = XContentBuilder.builder(xContentType.xContent()).startObject().field("id", i).endObject().bytes(); + if (opType == DocWriteRequest.OpType.INDEX) { + IndexRequest indexRequest = new IndexRequest("index", "test", id).source(source, xContentType); + if (erroneous) { + indexRequest.version(12L); + } + bulkRequest.add(indexRequest); + + } else if (opType == DocWriteRequest.OpType.CREATE) { + IndexRequest createRequest = new IndexRequest("index", "test", id).source(source, xContentType).create(true); + if (erroneous) { + assertEquals(RestStatus.CREATED, highLevelClient().index(createRequest).status()); + } + bulkRequest.add(createRequest); + + } else if (opType == DocWriteRequest.OpType.UPDATE) { + UpdateRequest updateRequest = new UpdateRequest("index", "test", id) + .doc(new IndexRequest().source(source, xContentType)); + if (erroneous == false) { + assertEquals(RestStatus.CREATED, + highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); + } + 
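+                    // Updates against documents that were not indexed above are expected to fail with NOT_FOUND during the bulk execution.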
bulkRequest.add(updateRequest); + } + } + } + + BulkResponse bulkResponse = execute(bulkRequest, highLevelClient()::bulk, highLevelClient()::bulkAsync); + assertEquals(RestStatus.OK, bulkResponse.status()); + assertTrue(bulkResponse.getTook().getMillis() > 0); + assertEquals(nbItems, bulkResponse.getItems().length); + + validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest); + } + + public void testBulkProcessorIntegration() throws IOException, InterruptedException { + int nbItems = randomIntBetween(10, 100); + boolean[] errors = new boolean[nbItems]; + + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + + AtomicReference responseRef = new AtomicReference<>(); + AtomicReference requestRef = new AtomicReference<>(); + AtomicReference error = new AtomicReference<>(); + + BulkProcessor.Listener listener = new BulkProcessor.Listener() { + @Override + public void beforeBulk(long executionId, BulkRequest request) { + + } + + @Override + public void afterBulk(long executionId, BulkRequest request, BulkResponse response) { + responseRef.set(response); + requestRef.set(request); + } + + @Override + public void afterBulk(long executionId, BulkRequest request, Throwable failure) { + error.set(failure); + } + }; + + ThreadPool threadPool = new ThreadPool(Settings.builder().put("node.name", getClass().getName()).build()); + // Pull the client to a variable to work around https://bugs.eclipse.org/bugs/show_bug.cgi?id=514884 + RestHighLevelClient hlClient = highLevelClient(); + try(BulkProcessor processor = new BulkProcessor.Builder(hlClient::bulkAsync, listener, threadPool) + .setConcurrentRequests(0) + .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.GB)) + .setBulkActions(nbItems + 1) + .build()) { + for (int i = 0; i < nbItems; i++) { + String id = String.valueOf(i); + boolean erroneous = randomBoolean(); + errors[i] = erroneous; + + DocWriteRequest.OpType opType = randomFrom(DocWriteRequest.OpType.values()); + if (opType == DocWriteRequest.OpType.DELETE) { + if (erroneous == false) { + assertEquals(RestStatus.CREATED, + highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); + } + DeleteRequest deleteRequest = new DeleteRequest("index", "test", id); + processor.add(deleteRequest); + + } else { + if (opType == DocWriteRequest.OpType.INDEX) { + IndexRequest indexRequest = new IndexRequest("index", "test", id).source(xContentType, "id", i); + if (erroneous) { + indexRequest.version(12L); + } + processor.add(indexRequest); + + } else if (opType == DocWriteRequest.OpType.CREATE) { + IndexRequest createRequest = new IndexRequest("index", "test", id).source(xContentType, "id", i).create(true); + if (erroneous) { + assertEquals(RestStatus.CREATED, highLevelClient().index(createRequest).status()); + } + processor.add(createRequest); + + } else if (opType == DocWriteRequest.OpType.UPDATE) { + UpdateRequest updateRequest = new UpdateRequest("index", "test", id) + .doc(new IndexRequest().source(xContentType, "id", i)); + if (erroneous == false) { + assertEquals(RestStatus.CREATED, + highLevelClient().index(new IndexRequest("index", "test", id).source("field", -1)).status()); + } + processor.add(updateRequest); + } + } + } + assertNull(responseRef.get()); + assertNull(requestRef.get()); + } + + + BulkResponse bulkResponse = responseRef.get(); + BulkRequest bulkRequest = requestRef.get(); + + assertEquals(RestStatus.OK, bulkResponse.status()); + assertTrue(bulkResponse.getTook().getMillis() > 0); + assertEquals(nbItems, 
bulkResponse.getItems().length); + assertNull(error.get()); + + validateBulkResponses(nbItems, errors, bulkResponse, bulkRequest); + + terminate(threadPool); + } + + private void validateBulkResponses(int nbItems, boolean[] errors, BulkResponse bulkResponse, BulkRequest bulkRequest) { + for (int i = 0; i < nbItems; i++) { + BulkItemResponse bulkItemResponse = bulkResponse.getItems()[i]; + + assertEquals(i, bulkItemResponse.getItemId()); + assertEquals("index", bulkItemResponse.getIndex()); + assertEquals("test", bulkItemResponse.getType()); + assertEquals(String.valueOf(i), bulkItemResponse.getId()); + + DocWriteRequest.OpType requestOpType = bulkRequest.requests().get(i).opType(); + if (requestOpType == DocWriteRequest.OpType.INDEX || requestOpType == DocWriteRequest.OpType.CREATE) { + assertEquals(errors[i], bulkItemResponse.isFailed()); + assertEquals(errors[i] ? RestStatus.CONFLICT : RestStatus.CREATED, bulkItemResponse.status()); + } else if (requestOpType == DocWriteRequest.OpType.UPDATE) { + assertEquals(errors[i], bulkItemResponse.isFailed()); + assertEquals(errors[i] ? RestStatus.NOT_FOUND : RestStatus.OK, bulkItemResponse.status()); + } else if (requestOpType == DocWriteRequest.OpType.DELETE) { + assertFalse(bulkItemResponse.isFailed()); + assertEquals(errors[i] ? RestStatus.NOT_FOUND : RestStatus.OK, bulkItemResponse.status()); + } + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java new file mode 100644 index 0000000000000..8ad42c2232020 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/CustomRestHighLevelClientTests.java @@ -0,0 +1,181 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.RequestLine; +import org.apache.http.client.methods.HttpGet; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.message.BasicHeader; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicRequestLine; +import org.apache.http.message.BasicStatusLine; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.Build; +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.main.MainRequest; +import org.elasticsearch.action.main.MainResponse; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.common.SuppressForbidden; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.test.ESTestCase; +import org.junit.Before; + +import java.io.IOException; +import java.lang.reflect.Method; +import java.lang.reflect.Modifier; + +import static java.util.Collections.emptyMap; +import static java.util.Collections.emptySet; +import static org.elasticsearch.client.ESRestHighLevelClientTestCase.execute; +import static org.mockito.Matchers.any; +import static org.mockito.Matchers.anyMapOf; +import static org.mockito.Matchers.anyObject; +import static org.mockito.Matchers.anyVararg; +import static org.mockito.Matchers.eq; +import static org.mockito.Mockito.doAnswer; +import static org.mockito.Mockito.mock; + +/** + * Test and demonstrates how {@link RestHighLevelClient} can be extended to support custom endpoints. + */ +public class CustomRestHighLevelClientTests extends ESTestCase { + + private static final String ENDPOINT = "/_custom"; + + private CustomRestClient restHighLevelClient; + + @Before + @SuppressWarnings("unchecked") + public void initClients() throws IOException { + if (restHighLevelClient == null) { + final RestClient restClient = mock(RestClient.class); + restHighLevelClient = new CustomRestClient(restClient); + + doAnswer(mock -> mockPerformRequest((Header) mock.getArguments()[4])) + .when(restClient) + .performRequest(eq(HttpGet.METHOD_NAME), eq(ENDPOINT), anyMapOf(String.class, String.class), anyObject(), anyVararg()); + + doAnswer(mock -> mockPerformRequestAsync((Header) mock.getArguments()[5], (ResponseListener) mock.getArguments()[4])) + .when(restClient) + .performRequestAsync(eq(HttpGet.METHOD_NAME), eq(ENDPOINT), anyMapOf(String.class, String.class), + any(HttpEntity.class), any(ResponseListener.class), anyVararg()); + } + } + + public void testCustomEndpoint() throws IOException { + final MainRequest request = new MainRequest(); + final Header header = new BasicHeader("node_name", randomAlphaOfLengthBetween(1, 10)); + + MainResponse response = execute(request, restHighLevelClient::custom, restHighLevelClient::customAsync, header); + assertEquals(header.getValue(), response.getNodeName()); + + response = execute(request, restHighLevelClient::customAndParse, restHighLevelClient::customAndParseAsync, header); + assertEquals(header.getValue(), response.getNodeName()); + } + + /** + * The {@link RestHighLevelClient} must declare the following execution methods using the protected modifier + * so that they can be used by subclasses to implement custom logic. 
+ */ + @SuppressForbidden(reason = "We're forced to uses Class#getDeclaredMethods() here because this test checks protected methods") + public void testMethodsVisibility() throws ClassNotFoundException { + String[] methodNames = new String[]{"performRequest", "performRequestAndParseEntity", "performRequestAsync", + "performRequestAsyncAndParseEntity"}; + for (String methodName : methodNames) { + boolean found = false; + for (Method method : RestHighLevelClient.class.getDeclaredMethods()) { + if (method.getName().equals(methodName)) { + assertTrue("Method " + methodName + " must be protected", Modifier.isProtected(method.getModifiers())); + found = true; + } + } + assertTrue("Failed to find method " + methodName, found); + } + } + + /** + * Mocks the asynchronous request execution by calling the {@link #mockPerformRequest(Header)} method. + */ + private Void mockPerformRequestAsync(Header httpHeader, ResponseListener responseListener) { + try { + responseListener.onSuccess(mockPerformRequest(httpHeader)); + } catch (IOException e) { + responseListener.onFailure(e); + } + return null; + } + + /** + * Mocks the synchronous request execution like if it was executed by Elasticsearch. + */ + private Response mockPerformRequest(Header httpHeader) throws IOException { + ProtocolVersion protocol = new ProtocolVersion("HTTP", 1, 1); + HttpResponse httpResponse = new BasicHttpResponse(new BasicStatusLine(protocol, 200, "OK")); + + MainResponse response = new MainResponse(httpHeader.getValue(), Version.CURRENT, ClusterName.DEFAULT, "_na", Build.CURRENT, true); + BytesRef bytesRef = XContentHelper.toXContent(response, XContentType.JSON, false).toBytesRef(); + httpResponse.setEntity(new ByteArrayEntity(bytesRef.bytes, ContentType.APPLICATION_JSON)); + + RequestLine requestLine = new BasicRequestLine(HttpGet.METHOD_NAME, ENDPOINT, protocol); + return new Response(requestLine, new HttpHost("localhost", 9200), httpResponse); + } + + /** + * A custom high level client that provides custom methods to execute a request and get its associate response back. + */ + static class CustomRestClient extends RestHighLevelClient { + + private CustomRestClient(RestClient restClient) { + super(restClient); + } + + MainResponse custom(MainRequest mainRequest, Header... headers) throws IOException { + return performRequest(mainRequest, this::toRequest, this::toResponse, emptySet(), headers); + } + + MainResponse customAndParse(MainRequest mainRequest, Header... headers) throws IOException { + return performRequestAndParseEntity(mainRequest, this::toRequest, MainResponse::fromXContent, emptySet(), headers); + } + + void customAsync(MainRequest mainRequest, ActionListener listener, Header... headers) { + performRequestAsync(mainRequest, this::toRequest, this::toResponse, listener, emptySet(), headers); + } + + void customAndParseAsync(MainRequest mainRequest, ActionListener listener, Header... 
headers) { + performRequestAsyncAndParseEntity(mainRequest, this::toRequest, MainResponse::fromXContent, listener, emptySet(), headers); + } + + Request toRequest(MainRequest mainRequest) throws IOException { + return new Request(HttpGet.METHOD_NAME, ENDPOINT, emptyMap(), null); + } + + MainResponse toResponse(Response response) throws IOException { + return parseEntity(response.getEntity(), MainResponse::fromXContent); + } + } +} \ No newline at end of file diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java index bc12b1433d7e4..cdd8317830909 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/ESRestHighLevelClientTestCase.java @@ -19,6 +19,9 @@ package org.elasticsearch.client; +import org.apache.http.Header; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.support.PlainActionFuture; import org.elasticsearch.test.rest.ESRestTestCase; import org.junit.AfterClass; import org.junit.Before; @@ -38,11 +41,35 @@ public void initHighLevelClient() throws IOException { } @AfterClass - public static void cleanupClient() throws IOException { + public static void cleanupClient() { restHighLevelClient = null; } protected static RestHighLevelClient highLevelClient() { return restHighLevelClient; } + + /** + * Executes the provided request using either the sync method or its async variant, both provided as functions + */ + protected static Resp execute(Req request, SyncMethod syncMethod, + AsyncMethod asyncMethod, Header... headers) throws IOException { + if (randomBoolean()) { + return syncMethod.execute(request, headers); + } else { + PlainActionFuture future = PlainActionFuture.newFuture(); + asyncMethod.execute(request, future, headers); + return future.actionGet(); + } + } + + @FunctionalInterface + protected interface SyncMethod { + Response execute(Request request, Header... headers) throws IOException; + } + + @FunctionalInterface + protected interface AsyncMethod { + void execute(Request request, ActionListener listener, Header... headers); + } } diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/MainActionIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/MainActionIT.java deleted file mode 100644 index 717ab7a44f3fd..0000000000000 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/MainActionIT.java +++ /dev/null @@ -1,27 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.client; - -public class MainActionIT extends ESRestHighLevelClientTestCase { - - public void testPing() { - assertTrue(highLevelClient().ping()); - } -} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/PingAndInfoIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/PingAndInfoIT.java new file mode 100644 index 0000000000000..b22ded52655df --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/PingAndInfoIT.java @@ -0,0 +1,51 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import org.elasticsearch.action.main.MainResponse; + +import java.io.IOException; +import java.util.Map; + +public class PingAndInfoIT extends ESRestHighLevelClientTestCase { + + public void testPing() throws IOException { + assertTrue(highLevelClient().ping()); + } + + @SuppressWarnings("unchecked") + public void testInfo() throws IOException { + MainResponse info = highLevelClient().info(); + // compare with what the low level client outputs + Map infoAsMap = entityAsMap(adminClient().performRequest("GET", "/")); + assertEquals(infoAsMap.get("cluster_name"), info.getClusterName().value()); + assertEquals(infoAsMap.get("cluster_uuid"), info.getClusterUuid()); + + // only check node name existence, might be a different one from what was hit by low level client in multi-node cluster + assertNotNull(info.getNodeName()); + Map versionMap = (Map) infoAsMap.get("version"); + assertEquals(versionMap.get("build_hash"), info.getBuild().shortHash()); + assertEquals(versionMap.get("build_date"), info.getBuild().date()); + assertEquals(versionMap.get("build_snapshot"), info.getBuild().isSnapshot()); + assertEquals(versionMap.get("number"), info.getVersion().toString()); + assertEquals(versionMap.get("lucene_version"), info.getVersion().luceneVersion.toString()); + } + +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java new file mode 100644 index 0000000000000..f18e348adce5e --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RequestTests.java @@ -0,0 +1,906 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import org.apache.http.HttpEntity; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.util.EntityUtils; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.bulk.BulkRequest; +import org.elasticsearch.action.bulk.BulkShardRequest; +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.get.GetRequest; +import org.elasticsearch.action.index.IndexRequest; +import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.action.search.SearchType; +import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.action.support.replication.ReplicatedWriteRequest; +import org.elasticsearch.action.support.replication.ReplicationRequest; +import org.elasticsearch.action.update.UpdateRequest; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.bytes.BytesArray; +import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.io.Streams; +import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.query.TermQueryBuilder; +import org.elasticsearch.rest.action.search.RestSearchAction; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.collapse.CollapseBuilder; +import org.elasticsearch.search.fetch.subphase.FetchSourceContext; +import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; +import org.elasticsearch.search.rescore.QueryRescorerBuilder; +import org.elasticsearch.search.suggest.SuggestBuilder; +import org.elasticsearch.search.suggest.completion.CompletionSuggestionBuilder; +import org.elasticsearch.test.ESTestCase; +import org.elasticsearch.test.RandomObjects; + +import java.io.IOException; +import java.io.InputStream; +import java.util.HashMap; +import java.util.Locale; +import java.util.Map; +import java.util.StringJoiner; +import java.util.function.Consumer; +import java.util.function.Function; + +import static java.util.Collections.singletonMap; +import static org.elasticsearch.client.Request.enforceSameContentType; +import static org.elasticsearch.test.hamcrest.ElasticsearchAssertions.assertToXContentEquivalent; + +public class RequestTests extends ESTestCase { + + public void testPing() { + Request request = Request.ping(); + assertEquals("/", request.endpoint); + assertEquals(0, request.params.size()); + assertNull(request.entity); + 
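+        // A ping is expected to translate into a HEAD request on the root endpoint with no parameters and no body.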
assertEquals("HEAD", request.method); + } + + public void testInfo() { + Request request = Request.info(); + assertEquals("/", request.endpoint); + assertEquals(0, request.params.size()); + assertNull(request.entity); + assertEquals("GET", request.method); + } + + public void testGet() { + getAndExistsTest(Request::get, "GET"); + } + + public void testDelete() throws IOException { + String index = randomAlphaOfLengthBetween(3, 10); + String type = randomAlphaOfLengthBetween(3, 10); + String id = randomAlphaOfLengthBetween(3, 10); + DeleteRequest deleteRequest = new DeleteRequest(index, type, id); + + Map expectedParams = new HashMap<>(); + + setRandomTimeout(deleteRequest, expectedParams); + setRandomRefreshPolicy(deleteRequest, expectedParams); + setRandomVersion(deleteRequest, expectedParams); + setRandomVersionType(deleteRequest, expectedParams); + + if (frequently()) { + if (randomBoolean()) { + String routing = randomAlphaOfLengthBetween(3, 10); + deleteRequest.routing(routing); + expectedParams.put("routing", routing); + } + if (randomBoolean()) { + String parent = randomAlphaOfLengthBetween(3, 10); + deleteRequest.parent(parent); + expectedParams.put("parent", parent); + } + } + + Request request = Request.delete(deleteRequest); + assertEquals("/" + index + "/" + type + "/" + id, request.endpoint); + assertEquals(expectedParams, request.params); + assertEquals("DELETE", request.method); + assertNull(request.entity); + } + + public void testExists() { + getAndExistsTest(Request::exists, "HEAD"); + } + + private static void getAndExistsTest(Function requestConverter, String method) { + String index = randomAlphaOfLengthBetween(3, 10); + String type = randomAlphaOfLengthBetween(3, 10); + String id = randomAlphaOfLengthBetween(3, 10); + GetRequest getRequest = new GetRequest(index, type, id); + + Map expectedParams = new HashMap<>(); + if (randomBoolean()) { + if (randomBoolean()) { + String preference = randomAlphaOfLengthBetween(3, 10); + getRequest.preference(preference); + expectedParams.put("preference", preference); + } + if (randomBoolean()) { + String routing = randomAlphaOfLengthBetween(3, 10); + getRequest.routing(routing); + expectedParams.put("routing", routing); + } + if (randomBoolean()) { + boolean realtime = randomBoolean(); + getRequest.realtime(realtime); + if (realtime == false) { + expectedParams.put("realtime", "false"); + } + } + if (randomBoolean()) { + boolean refresh = randomBoolean(); + getRequest.refresh(refresh); + if (refresh) { + expectedParams.put("refresh", "true"); + } + } + if (randomBoolean()) { + long version = randomLong(); + getRequest.version(version); + if (version != Versions.MATCH_ANY) { + expectedParams.put("version", Long.toString(version)); + } + } + if (randomBoolean()) { + VersionType versionType = randomFrom(VersionType.values()); + getRequest.versionType(versionType); + if (versionType != VersionType.INTERNAL) { + expectedParams.put("version_type", versionType.name().toLowerCase(Locale.ROOT)); + } + } + if (randomBoolean()) { + int numStoredFields = randomIntBetween(1, 10); + String[] storedFields = new String[numStoredFields]; + StringBuilder storedFieldsParam = new StringBuilder(); + for (int i = 0; i < numStoredFields; i++) { + String storedField = randomAlphaOfLengthBetween(3, 10); + storedFields[i] = storedField; + storedFieldsParam.append(storedField); + if (i < numStoredFields - 1) { + storedFieldsParam.append(","); + } + } + getRequest.storedFields(storedFields); + expectedParams.put("stored_fields", storedFieldsParam.toString()); 
+ } + if (randomBoolean()) { + randomizeFetchSourceContextParams(getRequest::fetchSourceContext, expectedParams); + } + } + Request request = requestConverter.apply(getRequest); + assertEquals("/" + index + "/" + type + "/" + id, request.endpoint); + assertEquals(expectedParams, request.params); + assertNull(request.entity); + assertEquals(method, request.method); + } + + public void testIndex() throws IOException { + String index = randomAlphaOfLengthBetween(3, 10); + String type = randomAlphaOfLengthBetween(3, 10); + IndexRequest indexRequest = new IndexRequest(index, type); + + String id = randomBoolean() ? randomAlphaOfLengthBetween(3, 10) : null; + indexRequest.id(id); + + Map expectedParams = new HashMap<>(); + + String method = "POST"; + if (id != null) { + method = "PUT"; + if (randomBoolean()) { + indexRequest.opType(DocWriteRequest.OpType.CREATE); + } + } + + setRandomTimeout(indexRequest, expectedParams); + setRandomRefreshPolicy(indexRequest, expectedParams); + + // There is some logic around _create endpoint and version/version type + if (indexRequest.opType() == DocWriteRequest.OpType.CREATE) { + indexRequest.version(randomFrom(Versions.MATCH_ANY, Versions.MATCH_DELETED)); + expectedParams.put("version", Long.toString(Versions.MATCH_DELETED)); + } else { + setRandomVersion(indexRequest, expectedParams); + setRandomVersionType(indexRequest, expectedParams); + } + + if (frequently()) { + if (randomBoolean()) { + String routing = randomAlphaOfLengthBetween(3, 10); + indexRequest.routing(routing); + expectedParams.put("routing", routing); + } + if (randomBoolean()) { + String parent = randomAlphaOfLengthBetween(3, 10); + indexRequest.parent(parent); + expectedParams.put("parent", parent); + } + if (randomBoolean()) { + String pipeline = randomAlphaOfLengthBetween(3, 10); + indexRequest.setPipeline(pipeline); + expectedParams.put("pipeline", pipeline); + } + } + + XContentType xContentType = randomFrom(XContentType.values()); + int nbFields = randomIntBetween(0, 10); + try (XContentBuilder builder = XContentBuilder.builder(xContentType.xContent())) { + builder.startObject(); + for (int i = 0; i < nbFields; i++) { + builder.field("field_" + i, i); + } + builder.endObject(); + indexRequest.source(builder); + } + + Request request = Request.index(indexRequest); + if (indexRequest.opType() == DocWriteRequest.OpType.CREATE) { + assertEquals("/" + index + "/" + type + "/" + id + "/_create", request.endpoint); + } else if (id != null) { + assertEquals("/" + index + "/" + type + "/" + id, request.endpoint); + } else { + assertEquals("/" + index + "/" + type, request.endpoint); + } + assertEquals(expectedParams, request.params); + assertEquals(method, request.method); + + HttpEntity entity = request.entity; + assertTrue(entity instanceof ByteArrayEntity); + assertEquals(indexRequest.getContentType().mediaType(), entity.getContentType().getValue()); + try (XContentParser parser = createParser(xContentType.xContent(), entity.getContent())) { + assertEquals(nbFields, parser.map().size()); + } + } + + public void testUpdate() throws IOException { + XContentType xContentType = randomFrom(XContentType.values()); + + Map expectedParams = new HashMap<>(); + String index = randomAlphaOfLengthBetween(3, 10); + String type = randomAlphaOfLengthBetween(3, 10); + String id = randomAlphaOfLengthBetween(3, 10); + + UpdateRequest updateRequest = new UpdateRequest(index, type, id); + updateRequest.detectNoop(randomBoolean()); + + if (randomBoolean()) { + BytesReference source = 
RandomObjects.randomSource(random(), xContentType); + updateRequest.doc(new IndexRequest().source(source, xContentType)); + + boolean docAsUpsert = randomBoolean(); + updateRequest.docAsUpsert(docAsUpsert); + if (docAsUpsert) { + expectedParams.put("doc_as_upsert", "true"); + } + } else { + updateRequest.script(mockScript("_value + 1")); + updateRequest.scriptedUpsert(randomBoolean()); + } + if (randomBoolean()) { + BytesReference source = RandomObjects.randomSource(random(), xContentType); + updateRequest.upsert(new IndexRequest().source(source, xContentType)); + } + if (randomBoolean()) { + String routing = randomAlphaOfLengthBetween(3, 10); + updateRequest.routing(routing); + expectedParams.put("routing", routing); + } + if (randomBoolean()) { + String parent = randomAlphaOfLengthBetween(3, 10); + updateRequest.parent(parent); + expectedParams.put("parent", parent); + } + if (randomBoolean()) { + String timeout = randomTimeValue(); + updateRequest.timeout(timeout); + expectedParams.put("timeout", timeout); + } else { + expectedParams.put("timeout", ReplicationRequest.DEFAULT_TIMEOUT.getStringRep()); + } + if (randomBoolean()) { + WriteRequest.RefreshPolicy refreshPolicy = randomFrom(WriteRequest.RefreshPolicy.values()); + updateRequest.setRefreshPolicy(refreshPolicy); + if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) { + expectedParams.put("refresh", refreshPolicy.getValue()); + } + } + if (randomBoolean()) { + int waitForActiveShards = randomIntBetween(0, 10); + updateRequest.waitForActiveShards(waitForActiveShards); + expectedParams.put("wait_for_active_shards", String.valueOf(waitForActiveShards)); + } + if (randomBoolean()) { + long version = randomLong(); + updateRequest.version(version); + if (version != Versions.MATCH_ANY) { + expectedParams.put("version", Long.toString(version)); + } + } + if (randomBoolean()) { + VersionType versionType = randomFrom(VersionType.values()); + updateRequest.versionType(versionType); + if (versionType != VersionType.INTERNAL) { + expectedParams.put("version_type", versionType.name().toLowerCase(Locale.ROOT)); + } + } + if (randomBoolean()) { + int retryOnConflict = randomIntBetween(0, 5); + updateRequest.retryOnConflict(retryOnConflict); + if (retryOnConflict > 0) { + expectedParams.put("retry_on_conflict", String.valueOf(retryOnConflict)); + } + } + if (randomBoolean()) { + randomizeFetchSourceContextParams(updateRequest::fetchSource, expectedParams); + } + + Request request = Request.update(updateRequest); + assertEquals("/" + index + "/" + type + "/" + id + "/_update", request.endpoint); + assertEquals(expectedParams, request.params); + assertEquals("POST", request.method); + + HttpEntity entity = request.entity; + assertTrue(entity instanceof ByteArrayEntity); + + UpdateRequest parsedUpdateRequest = new UpdateRequest(); + + XContentType entityContentType = XContentType.fromMediaTypeOrFormat(entity.getContentType().getValue()); + try (XContentParser parser = createParser(entityContentType.xContent(), entity.getContent())) { + parsedUpdateRequest.fromXContent(parser); + } + + assertEquals(updateRequest.scriptedUpsert(), parsedUpdateRequest.scriptedUpsert()); + assertEquals(updateRequest.docAsUpsert(), parsedUpdateRequest.docAsUpsert()); + assertEquals(updateRequest.detectNoop(), parsedUpdateRequest.detectNoop()); + assertEquals(updateRequest.fetchSource(), parsedUpdateRequest.fetchSource()); + assertEquals(updateRequest.script(), parsedUpdateRequest.script()); + if (updateRequest.doc() != null) { + 
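+            // Compare the doc sources as parsed content rather than raw bytes, since equivalent documents may serialize differently.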
assertToXContentEquivalent(updateRequest.doc().source(), parsedUpdateRequest.doc().source(), xContentType); + } else { + assertNull(parsedUpdateRequest.doc()); + } + if (updateRequest.upsertRequest() != null) { + assertToXContentEquivalent(updateRequest.upsertRequest().source(), parsedUpdateRequest.upsertRequest().source(), xContentType); + } else { + assertNull(parsedUpdateRequest.upsertRequest()); + } + } + + public void testUpdateWithDifferentContentTypes() throws IOException { + IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { + UpdateRequest updateRequest = new UpdateRequest(); + updateRequest.doc(new IndexRequest().source(singletonMap("field", "doc"), XContentType.JSON)); + updateRequest.upsert(new IndexRequest().source(singletonMap("field", "upsert"), XContentType.YAML)); + Request.update(updateRequest); + }); + assertEquals("Update request cannot have different content types for doc [JSON] and upsert [YAML] documents", + exception.getMessage()); + } + + public void testBulk() throws IOException { + Map expectedParams = new HashMap<>(); + + BulkRequest bulkRequest = new BulkRequest(); + if (randomBoolean()) { + String timeout = randomTimeValue(); + bulkRequest.timeout(timeout); + expectedParams.put("timeout", timeout); + } else { + expectedParams.put("timeout", BulkShardRequest.DEFAULT_TIMEOUT.getStringRep()); + } + + if (randomBoolean()) { + WriteRequest.RefreshPolicy refreshPolicy = randomFrom(WriteRequest.RefreshPolicy.values()); + bulkRequest.setRefreshPolicy(refreshPolicy); + if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) { + expectedParams.put("refresh", refreshPolicy.getValue()); + } + } + + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + + int nbItems = randomIntBetween(10, 100); + for (int i = 0; i < nbItems; i++) { + String index = randomAlphaOfLength(5); + String type = randomAlphaOfLength(5); + String id = randomAlphaOfLength(5); + + BytesReference source = RandomObjects.randomSource(random(), xContentType); + DocWriteRequest.OpType opType = randomFrom(DocWriteRequest.OpType.values()); + + DocWriteRequest docWriteRequest = null; + if (opType == DocWriteRequest.OpType.INDEX) { + IndexRequest indexRequest = new IndexRequest(index, type, id).source(source, xContentType); + docWriteRequest = indexRequest; + if (randomBoolean()) { + indexRequest.setPipeline(randomAlphaOfLength(5)); + } + if (randomBoolean()) { + indexRequest.parent(randomAlphaOfLength(5)); + } + } else if (opType == DocWriteRequest.OpType.CREATE) { + IndexRequest createRequest = new IndexRequest(index, type, id).source(source, xContentType).create(true); + docWriteRequest = createRequest; + if (randomBoolean()) { + createRequest.parent(randomAlphaOfLength(5)); + } + } else if (opType == DocWriteRequest.OpType.UPDATE) { + final UpdateRequest updateRequest = new UpdateRequest(index, type, id).doc(new IndexRequest().source(source, xContentType)); + docWriteRequest = updateRequest; + if (randomBoolean()) { + updateRequest.retryOnConflict(randomIntBetween(1, 5)); + } + if (randomBoolean()) { + randomizeFetchSourceContextParams(updateRequest::fetchSource, new HashMap<>()); + } + if (randomBoolean()) { + updateRequest.parent(randomAlphaOfLength(5)); + } + } else if (opType == DocWriteRequest.OpType.DELETE) { + docWriteRequest = new DeleteRequest(index, type, id); + } + + if (randomBoolean()) { + docWriteRequest.routing(randomAlphaOfLength(10)); + } + if (randomBoolean()) { + docWriteRequest.version(randomNonNegativeLong()); + } + if 
(randomBoolean()) { + docWriteRequest.versionType(randomFrom(VersionType.values())); + } + bulkRequest.add(docWriteRequest); + } + + Request request = Request.bulk(bulkRequest); + assertEquals("/_bulk", request.endpoint); + assertEquals(expectedParams, request.params); + assertEquals("POST", request.method); + assertEquals(xContentType.mediaType(), request.entity.getContentType().getValue()); + byte[] content = new byte[(int) request.entity.getContentLength()]; + try (InputStream inputStream = request.entity.getContent()) { + Streams.readFully(inputStream, content); + } + + BulkRequest parsedBulkRequest = new BulkRequest(); + parsedBulkRequest.add(content, 0, content.length, xContentType); + assertEquals(bulkRequest.numberOfActions(), parsedBulkRequest.numberOfActions()); + + for (int i = 0; i < bulkRequest.numberOfActions(); i++) { + DocWriteRequest originalRequest = bulkRequest.requests().get(i); + DocWriteRequest parsedRequest = parsedBulkRequest.requests().get(i); + + assertEquals(originalRequest.opType(), parsedRequest.opType()); + assertEquals(originalRequest.index(), parsedRequest.index()); + assertEquals(originalRequest.type(), parsedRequest.type()); + assertEquals(originalRequest.id(), parsedRequest.id()); + assertEquals(originalRequest.routing(), parsedRequest.routing()); + assertEquals(originalRequest.parent(), parsedRequest.parent()); + assertEquals(originalRequest.version(), parsedRequest.version()); + assertEquals(originalRequest.versionType(), parsedRequest.versionType()); + + DocWriteRequest.OpType opType = originalRequest.opType(); + if (opType == DocWriteRequest.OpType.INDEX) { + IndexRequest indexRequest = (IndexRequest) originalRequest; + IndexRequest parsedIndexRequest = (IndexRequest) parsedRequest; + + assertEquals(indexRequest.getPipeline(), parsedIndexRequest.getPipeline()); + assertToXContentEquivalent(indexRequest.source(), parsedIndexRequest.source(), xContentType); + } else if (opType == DocWriteRequest.OpType.UPDATE) { + UpdateRequest updateRequest = (UpdateRequest) originalRequest; + UpdateRequest parsedUpdateRequest = (UpdateRequest) parsedRequest; + + assertEquals(updateRequest.retryOnConflict(), parsedUpdateRequest.retryOnConflict()); + assertEquals(updateRequest.fetchSource(), parsedUpdateRequest.fetchSource()); + if (updateRequest.doc() != null) { + assertToXContentEquivalent(updateRequest.doc().source(), parsedUpdateRequest.doc().source(), xContentType); + } else { + assertNull(parsedUpdateRequest.doc()); + } + } + } + } + + public void testBulkWithDifferentContentTypes() throws IOException { + { + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(new DeleteRequest("index", "type", "0")); + bulkRequest.add(new UpdateRequest("index", "type", "1").script(mockScript("test"))); + bulkRequest.add(new DeleteRequest("index", "type", "2")); + + Request request = Request.bulk(bulkRequest); + assertEquals(XContentType.JSON.mediaType(), request.entity.getContentType().getValue()); + } + { + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(new DeleteRequest("index", "type", "0")); + bulkRequest.add(new IndexRequest("index", "type", "0").source(singletonMap("field", "value"), xContentType)); + bulkRequest.add(new DeleteRequest("index", "type", "2")); + + Request request = Request.bulk(bulkRequest); + assertEquals(xContentType.mediaType(), request.entity.getContentType().getValue()); + } + { + XContentType xContentType = randomFrom(XContentType.JSON, 
XContentType.SMILE); + UpdateRequest updateRequest = new UpdateRequest("index", "type", "0"); + if (randomBoolean()) { + updateRequest.doc(new IndexRequest().source(singletonMap("field", "value"), xContentType)); + } else { + updateRequest.upsert(new IndexRequest().source(singletonMap("field", "value"), xContentType)); + } + + Request request = Request.bulk(new BulkRequest().add(updateRequest)); + assertEquals(xContentType.mediaType(), request.entity.getContentType().getValue()); + } + { + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(new IndexRequest("index", "type", "0").source(singletonMap("field", "value"), XContentType.SMILE)); + bulkRequest.add(new IndexRequest("index", "type", "1").source(singletonMap("field", "value"), XContentType.JSON)); + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> Request.bulk(bulkRequest)); + assertEquals("Mismatching content-type found for request with content-type [JSON], " + + "previous requests have content-type [SMILE]", exception.getMessage()); + } + { + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(new IndexRequest("index", "type", "0") + .source(singletonMap("field", "value"), XContentType.JSON)); + bulkRequest.add(new IndexRequest("index", "type", "1") + .source(singletonMap("field", "value"), XContentType.JSON)); + bulkRequest.add(new UpdateRequest("index", "type", "2") + .doc(new IndexRequest().source(singletonMap("field", "value"), XContentType.JSON)) + .upsert(new IndexRequest().source(singletonMap("field", "value"), XContentType.SMILE)) + ); + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> Request.bulk(bulkRequest)); + assertEquals("Mismatching content-type found for request with content-type [SMILE], " + + "previous requests have content-type [JSON]", exception.getMessage()); + } + { + XContentType xContentType = randomFrom(XContentType.CBOR, XContentType.YAML); + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(new DeleteRequest("index", "type", "0")); + bulkRequest.add(new IndexRequest("index", "type", "1").source(singletonMap("field", "value"), XContentType.JSON)); + bulkRequest.add(new DeleteRequest("index", "type", "2")); + bulkRequest.add(new DeleteRequest("index", "type", "3")); + bulkRequest.add(new IndexRequest("index", "type", "4").source(singletonMap("field", "value"), XContentType.JSON)); + bulkRequest.add(new IndexRequest("index", "type", "1").source(singletonMap("field", "value"), xContentType)); + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> Request.bulk(bulkRequest)); + assertEquals("Unsupported content-type found for request with content-type [" + xContentType + + "], only JSON and SMILE are supported", exception.getMessage()); + } + } + + public void testSearch() throws Exception { + SearchRequest searchRequest = new SearchRequest(); + int numIndices = randomIntBetween(0, 5); + String[] indices = new String[numIndices]; + for (int i = 0; i < numIndices; i++) { + indices[i] = "index-" + randomAlphaOfLengthBetween(2, 5); + } + searchRequest.indices(indices); + int numTypes = randomIntBetween(0, 5); + String[] types = new String[numTypes]; + for (int i = 0; i < numTypes; i++) { + types[i] = "type-" + randomAlphaOfLengthBetween(2, 5); + } + searchRequest.types(types); + + Map expectedParams = new HashMap<>(); + expectedParams.put(RestSearchAction.TYPED_KEYS_PARAM, "true"); + if (randomBoolean()) { + searchRequest.routing(randomAlphaOfLengthBetween(3, 10)); + 
expectedParams.put("routing", searchRequest.routing()); + } + if (randomBoolean()) { + searchRequest.preference(randomAlphaOfLengthBetween(3, 10)); + expectedParams.put("preference", searchRequest.preference()); + } + if (randomBoolean()) { + searchRequest.searchType(randomFrom(SearchType.values())); + } + expectedParams.put("search_type", searchRequest.searchType().name().toLowerCase(Locale.ROOT)); + if (randomBoolean()) { + searchRequest.requestCache(randomBoolean()); + expectedParams.put("request_cache", Boolean.toString(searchRequest.requestCache())); + } + if (randomBoolean()) { + searchRequest.setBatchedReduceSize(randomIntBetween(2, Integer.MAX_VALUE)); + } + expectedParams.put("batched_reduce_size", Integer.toString(searchRequest.getBatchedReduceSize())); + if (randomBoolean()) { + searchRequest.scroll(randomTimeValue()); + expectedParams.put("scroll", searchRequest.scroll().keepAlive().getStringRep()); + } + + if (randomBoolean()) { + searchRequest.indicesOptions(IndicesOptions.fromOptions(randomBoolean(), randomBoolean(), randomBoolean(), randomBoolean())); + } + expectedParams.put("ignore_unavailable", Boolean.toString(searchRequest.indicesOptions().ignoreUnavailable())); + expectedParams.put("allow_no_indices", Boolean.toString(searchRequest.indicesOptions().allowNoIndices())); + if (searchRequest.indicesOptions().expandWildcardsOpen() && searchRequest.indicesOptions().expandWildcardsClosed()) { + expectedParams.put("expand_wildcards", "open,closed"); + } else if (searchRequest.indicesOptions().expandWildcardsOpen()) { + expectedParams.put("expand_wildcards", "open"); + } else if (searchRequest.indicesOptions().expandWildcardsClosed()) { + expectedParams.put("expand_wildcards", "closed"); + } else { + expectedParams.put("expand_wildcards", "none"); + } + + SearchSourceBuilder searchSourceBuilder = null; + if (frequently()) { + searchSourceBuilder = new SearchSourceBuilder(); + if (randomBoolean()) { + searchSourceBuilder.size(randomIntBetween(0, Integer.MAX_VALUE)); + } + if (randomBoolean()) { + searchSourceBuilder.from(randomIntBetween(0, Integer.MAX_VALUE)); + } + if (randomBoolean()) { + searchSourceBuilder.minScore(randomFloat()); + } + if (randomBoolean()) { + searchSourceBuilder.explain(randomBoolean()); + } + if (randomBoolean()) { + searchSourceBuilder.profile(randomBoolean()); + } + if (randomBoolean()) { + searchSourceBuilder.highlighter(new HighlightBuilder().field(randomAlphaOfLengthBetween(3, 10))); + } + if (randomBoolean()) { + searchSourceBuilder.query(new TermQueryBuilder(randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10))); + } + if (randomBoolean()) { + searchSourceBuilder.aggregation(new TermsAggregationBuilder(randomAlphaOfLengthBetween(3, 10), ValueType.STRING) + .field(randomAlphaOfLengthBetween(3, 10))); + } + if (randomBoolean()) { + searchSourceBuilder.suggest(new SuggestBuilder().addSuggestion(randomAlphaOfLengthBetween(3, 10), + new CompletionSuggestionBuilder(randomAlphaOfLengthBetween(3, 10)))); + } + if (randomBoolean()) { + searchSourceBuilder.addRescorer(new QueryRescorerBuilder( + new TermQueryBuilder(randomAlphaOfLengthBetween(3, 10), randomAlphaOfLengthBetween(3, 10)))); + } + if (randomBoolean()) { + searchSourceBuilder.collapse(new CollapseBuilder(randomAlphaOfLengthBetween(3, 10))); + } + searchRequest.source(searchSourceBuilder); + } + + Request request = Request.search(searchRequest); + StringJoiner endpoint = new StringJoiner("/", "/", ""); + String index = String.join(",", indices); + if (Strings.hasLength(index)) { 
+ endpoint.add(index); + } + String type = String.join(",", types); + if (Strings.hasLength(type)) { + endpoint.add(type); + } + endpoint.add("_search"); + assertEquals(endpoint.toString(), request.endpoint); + assertEquals(expectedParams, request.params); + if (searchSourceBuilder == null) { + assertNull(request.entity); + } else { + assertToXContentBody(searchSourceBuilder, request.entity); + } + } + + public void testSearchScroll() throws IOException { + SearchScrollRequest searchScrollRequest = new SearchScrollRequest(); + searchScrollRequest.scrollId(randomAlphaOfLengthBetween(5, 10)); + if (randomBoolean()) { + searchScrollRequest.scroll(randomPositiveTimeValue()); + } + Request request = Request.searchScroll(searchScrollRequest); + assertEquals("GET", request.method); + assertEquals("/_search/scroll", request.endpoint); + assertEquals(0, request.params.size()); + assertToXContentBody(searchScrollRequest, request.entity); + assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaType(), request.entity.getContentType().getValue()); + } + + public void testClearScroll() throws IOException { + ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); + int numScrolls = randomIntBetween(1, 10); + for (int i = 0; i < numScrolls; i++) { + clearScrollRequest.addScrollId(randomAlphaOfLengthBetween(5, 10)); + } + Request request = Request.clearScroll(clearScrollRequest); + assertEquals("DELETE", request.method); + assertEquals("/_search/scroll", request.endpoint); + assertEquals(0, request.params.size()); + assertToXContentBody(clearScrollRequest, request.entity); + assertEquals(Request.REQUEST_BODY_CONTENT_TYPE.mediaType(), request.entity.getContentType().getValue()); + } + + private static void assertToXContentBody(ToXContent expectedBody, HttpEntity actualEntity) throws IOException { + BytesReference expectedBytes = XContentHelper.toXContent(expectedBody, Request.REQUEST_BODY_CONTENT_TYPE, false); + assertEquals(XContentType.JSON.mediaType(), actualEntity.getContentType().getValue()); + assertEquals(expectedBytes, new BytesArray(EntityUtils.toByteArray(actualEntity))); + } + + public void testParams() { + final int nbParams = randomIntBetween(0, 10); + Request.Params params = Request.Params.builder(); + Map expectedParams = new HashMap<>(); + for (int i = 0; i < nbParams; i++) { + String paramName = "p_" + i; + String paramValue = randomAlphaOfLength(5); + params.putParam(paramName, paramValue); + expectedParams.put(paramName, paramValue); + } + + Map requestParams = params.getParams(); + assertEquals(nbParams, requestParams.size()); + assertEquals(expectedParams, requestParams); + } + + public void testParamsNoDuplicates() { + Request.Params params = Request.Params.builder(); + params.putParam("test", "1"); + + IllegalArgumentException e = expectThrows(IllegalArgumentException.class, () -> params.putParam("test", "2")); + assertEquals("Request parameter [test] is already registered", e.getMessage()); + + Map requestParams = params.getParams(); + assertEquals(1L, requestParams.size()); + assertEquals("1", requestParams.values().iterator().next()); + } + + public void testEndpoint() { + assertEquals("/", Request.endpoint()); + assertEquals("/", Request.endpoint(Strings.EMPTY_ARRAY)); + assertEquals("/", Request.endpoint("")); + assertEquals("/a/b", Request.endpoint("a", "b")); + assertEquals("/a/b/_create", Request.endpoint("a", "b", "_create")); + assertEquals("/a/b/c/_create", Request.endpoint("a", "b", "c", "_create")); + assertEquals("/a/_create", Request.endpoint("a", null, null, 
"_create")); + } + + public void testEnforceSameContentType() { + XContentType xContentType = randomFrom(XContentType.JSON, XContentType.SMILE); + IndexRequest indexRequest = new IndexRequest().source(singletonMap("field", "value"), xContentType); + assertEquals(xContentType, enforceSameContentType(indexRequest, null)); + assertEquals(xContentType, enforceSameContentType(indexRequest, xContentType)); + + XContentType bulkContentType = randomBoolean() ? xContentType : null; + + IllegalArgumentException exception = expectThrows(IllegalArgumentException.class, () -> + enforceSameContentType(new IndexRequest().source(singletonMap("field", "value"), XContentType.CBOR), bulkContentType)); + assertEquals("Unsupported content-type found for request with content-type [CBOR], only JSON and SMILE are supported", + exception.getMessage()); + + exception = expectThrows(IllegalArgumentException.class, () -> + enforceSameContentType(new IndexRequest().source(singletonMap("field", "value"), XContentType.YAML), bulkContentType)); + assertEquals("Unsupported content-type found for request with content-type [YAML], only JSON and SMILE are supported", + exception.getMessage()); + + XContentType requestContentType = xContentType == XContentType.JSON ? XContentType.SMILE : XContentType.JSON; + + exception = expectThrows(IllegalArgumentException.class, () -> + enforceSameContentType(new IndexRequest().source(singletonMap("field", "value"), requestContentType), xContentType)); + assertEquals("Mismatching content-type found for request with content-type [" + requestContentType + "], " + + "previous requests have content-type [" + xContentType + "]", exception.getMessage()); + } + + /** + * Randomize the {@link FetchSourceContext} request parameters. + */ + private static void randomizeFetchSourceContextParams(Consumer consumer, Map expectedParams) { + if (randomBoolean()) { + if (randomBoolean()) { + boolean fetchSource = randomBoolean(); + consumer.accept(new FetchSourceContext(fetchSource)); + if (fetchSource == false) { + expectedParams.put("_source", "false"); + } + } else { + int numIncludes = randomIntBetween(0, 5); + String[] includes = new String[numIncludes]; + StringBuilder includesParam = new StringBuilder(); + for (int i = 0; i < numIncludes; i++) { + String include = randomAlphaOfLengthBetween(3, 10); + includes[i] = include; + includesParam.append(include); + if (i < numIncludes - 1) { + includesParam.append(","); + } + } + if (numIncludes > 0) { + expectedParams.put("_source_include", includesParam.toString()); + } + int numExcludes = randomIntBetween(0, 5); + String[] excludes = new String[numExcludes]; + StringBuilder excludesParam = new StringBuilder(); + for (int i = 0; i < numExcludes; i++) { + String exclude = randomAlphaOfLengthBetween(3, 10); + excludes[i] = exclude; + excludesParam.append(exclude); + if (i < numExcludes - 1) { + excludesParam.append(","); + } + } + if (numExcludes > 0) { + expectedParams.put("_source_exclude", excludesParam.toString()); + } + consumer.accept(new FetchSourceContext(true, includes, excludes)); + } + } + } + + private static void setRandomTimeout(ReplicationRequest request, Map expectedParams) { + if (randomBoolean()) { + String timeout = randomTimeValue(); + request.timeout(timeout); + expectedParams.put("timeout", timeout); + } else { + expectedParams.put("timeout", ReplicationRequest.DEFAULT_TIMEOUT.getStringRep()); + } + } + + private static void setRandomRefreshPolicy(ReplicatedWriteRequest request, Map expectedParams) { + if (randomBoolean()) { + 
WriteRequest.RefreshPolicy refreshPolicy = randomFrom(WriteRequest.RefreshPolicy.values()); + request.setRefreshPolicy(refreshPolicy); + if (refreshPolicy != WriteRequest.RefreshPolicy.NONE) { + expectedParams.put("refresh", refreshPolicy.getValue()); + } + } + } + + private static void setRandomVersion(DocWriteRequest request, Map expectedParams) { + if (randomBoolean()) { + long version = randomFrom(Versions.MATCH_ANY, Versions.MATCH_DELETED, Versions.NOT_FOUND, randomNonNegativeLong()); + request.version(version); + if (version != Versions.MATCH_ANY) { + expectedParams.put("version", Long.toString(version)); + } + } + } + + private static void setRandomVersionType(DocWriteRequest request, Map expectedParams) { + if (randomBoolean()) { + VersionType versionType = randomFrom(VersionType.values()); + request.versionType(versionType); + if (versionType != VersionType.INTERNAL) { + expectedParams.put("version_type", versionType.name().toLowerCase(Locale.ROOT)); + } + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java new file mode 100644 index 0000000000000..cb32f9ae9dd93 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientExtTests.java @@ -0,0 +1,138 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.test.ESTestCase; +import org.junit.Before; +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; + +import static org.hamcrest.CoreMatchers.instanceOf; +import static org.mockito.Mockito.mock; + +/** + * This test works against a {@link RestHighLevelClient} subclass that simulates how custom response sections returned by + * Elasticsearch plugins can be parsed using the high level client.
+ */ +public class RestHighLevelClientExtTests extends ESTestCase { + + private RestHighLevelClient restHighLevelClient; + + @Before + public void initClient() throws IOException { + RestClient restClient = mock(RestClient.class); + restHighLevelClient = new RestHighLevelClientExt(restClient); + } + + public void testParseEntityCustomResponseSection() throws IOException { + { + HttpEntity jsonEntity = new StringEntity("{\"custom1\":{ \"field\":\"value\"}}", ContentType.APPLICATION_JSON); + BaseCustomResponseSection customSection = restHighLevelClient.parseEntity(jsonEntity, BaseCustomResponseSection::fromXContent); + assertThat(customSection, instanceOf(CustomResponseSection1.class)); + CustomResponseSection1 customResponseSection1 = (CustomResponseSection1) customSection; + assertEquals("value", customResponseSection1.value); + } + { + HttpEntity jsonEntity = new StringEntity("{\"custom2\":{ \"array\": [\"item1\", \"item2\"]}}", ContentType.APPLICATION_JSON); + BaseCustomResponseSection customSection = restHighLevelClient.parseEntity(jsonEntity, BaseCustomResponseSection::fromXContent); + assertThat(customSection, instanceOf(CustomResponseSection2.class)); + CustomResponseSection2 customResponseSection2 = (CustomResponseSection2) customSection; + assertArrayEquals(new String[]{"item1", "item2"}, customResponseSection2.values); + } + } + + private static class RestHighLevelClientExt extends RestHighLevelClient { + + private RestHighLevelClientExt(RestClient restClient) { + super(restClient, getNamedXContentsExt()); + } + + private static List getNamedXContentsExt() { + List entries = new ArrayList<>(); + entries.add(new NamedXContentRegistry.Entry(BaseCustomResponseSection.class, new ParseField("custom1"), + CustomResponseSection1::fromXContent)); + entries.add(new NamedXContentRegistry.Entry(BaseCustomResponseSection.class, new ParseField("custom2"), + CustomResponseSection2::fromXContent)); + return entries; + } + } + + private abstract static class BaseCustomResponseSection { + + static BaseCustomResponseSection fromXContent(XContentParser parser) throws IOException { + assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken()); + assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); + BaseCustomResponseSection custom = parser.namedObject(BaseCustomResponseSection.class, parser.currentName(), null); + assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); + return custom; + } + } + + private static class CustomResponseSection1 extends BaseCustomResponseSection { + + private final String value; + + private CustomResponseSection1(String value) { + this.value = value; + } + + static CustomResponseSection1 fromXContent(XContentParser parser) throws IOException { + assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken()); + assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); + assertEquals("field", parser.currentName()); + assertEquals(XContentParser.Token.VALUE_STRING, parser.nextToken()); + CustomResponseSection1 responseSection1 = new CustomResponseSection1(parser.text()); + assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); + return responseSection1; + } + } + + private static class CustomResponseSection2 extends BaseCustomResponseSection { + + private final String[] values; + + private CustomResponseSection2(String[] values) { + this.values = values; + } + + static CustomResponseSection2 fromXContent(XContentParser parser) throws IOException { + assertEquals(XContentParser.Token.START_OBJECT, 
parser.nextToken()); + assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); + assertEquals("array", parser.currentName()); + assertEquals(XContentParser.Token.START_ARRAY, parser.nextToken()); + List values = new ArrayList<>(); + while(parser.nextToken().isValue()) { + values.add(parser.text()); + } + assertEquals(XContentParser.Token.END_ARRAY, parser.currentToken()); + CustomResponseSection2 responseSection2 = new CustomResponseSection2(values.toArray(new String[values.size()])); + assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); + return responseSection2; + } + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java index 7d513e489982c..7fc0733a7f0c7 100644 --- a/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/RestHighLevelClientTests.java @@ -19,7 +19,48 @@ package org.elasticsearch.client; +import com.fasterxml.jackson.core.JsonParseException; import org.apache.http.Header; +import org.apache.http.HttpEntity; +import org.apache.http.HttpHost; +import org.apache.http.HttpResponse; +import org.apache.http.ProtocolVersion; +import org.apache.http.RequestLine; +import org.apache.http.StatusLine; +import org.apache.http.entity.ByteArrayEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.message.BasicHttpResponse; +import org.apache.http.message.BasicRequestLine; +import org.apache.http.message.BasicStatusLine; +import org.apache.http.nio.entity.NStringEntity; +import org.elasticsearch.Build; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.main.MainRequest; +import org.elasticsearch.action.main.MainResponse; +import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.ClearScrollResponse; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.action.search.SearchResponseSections; +import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.action.search.ShardSearchFailure; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.xcontent.NamedXContentRegistry; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.common.xcontent.cbor.CborXContent; +import org.elasticsearch.common.xcontent.smile.SmileXContent; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.aggregations.Aggregation; +import org.elasticsearch.search.aggregations.InternalAggregations; +import org.elasticsearch.search.suggest.Suggest; import org.elasticsearch.test.ESTestCase; import org.junit.Before; import org.mockito.ArgumentMatcher; @@ -28,47 +69,581 @@ import java.io.IOException; import java.net.SocketTimeoutException; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import 
java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; -import static org.mockito.Matchers.any; +import static org.elasticsearch.client.RestClientTestUtil.randomHeaders; +import static org.elasticsearch.common.xcontent.XContentHelper.toXContent; +import static org.hamcrest.CoreMatchers.instanceOf; +import static org.mockito.Matchers.anyMapOf; +import static org.mockito.Matchers.anyObject; +import static org.mockito.Matchers.anyString; +import static org.mockito.Matchers.anyVararg; import static org.mockito.Matchers.argThat; import static org.mockito.Matchers.eq; +import static org.mockito.Matchers.isNotNull; +import static org.mockito.Matchers.isNull; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.when; public class RestHighLevelClientTests extends ESTestCase { + private static final ProtocolVersion HTTP_PROTOCOL = new ProtocolVersion("http", 1, 1); + private static final RequestLine REQUEST_LINE = new BasicRequestLine("GET", "/", HTTP_PROTOCOL); + private RestClient restClient; private RestHighLevelClient restHighLevelClient; @Before - public void initClient() throws IOException { + public void initClient() { restClient = mock(RestClient.class); restHighLevelClient = new RestHighLevelClient(restClient); } - public void testPing() throws IOException { - assertTrue(restHighLevelClient.ping()); - verify(restClient).performRequest(eq("HEAD"), eq("/"), argThat(new HeadersVarargMatcher())); + public void testPingSuccessful() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + Response response = mock(Response.class); + when(response.getStatusLine()).thenReturn(newStatusLine(RestStatus.OK)); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenReturn(response); + assertTrue(restHighLevelClient.ping(headers)); + verify(restClient).performRequest(eq("HEAD"), eq("/"), eq(Collections.emptyMap()), + isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); } - public void testPingFailure() throws IOException { - when(restClient.performRequest(any(), any())).thenThrow(new IllegalStateException()); - expectThrows(IllegalStateException.class, () -> restHighLevelClient.ping()); + public void testPing404NotFound() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + Response response = mock(Response.class); + when(response.getStatusLine()).thenReturn(newStatusLine(RestStatus.NOT_FOUND)); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenReturn(response); + assertFalse(restHighLevelClient.ping(headers)); + verify(restClient).performRequest(eq("HEAD"), eq("/"), eq(Collections.emptyMap()), + isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); } - public void testPingFailed() throws IOException { - when(restClient.performRequest(any(), any())).thenThrow(new SocketTimeoutException()); - assertFalse(restHighLevelClient.ping()); + public void testPingSocketTimeout() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(new SocketTimeoutException()); + expectThrows(SocketTimeoutException.class, () -> restHighLevelClient.ping(headers)); + verify(restClient).performRequest(eq("HEAD"), eq("/"), 
eq(Collections.emptyMap()), + isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); } - public void testPingWithHeaders() throws IOException { - Header[] headers = RestClientTestUtil.randomHeaders(random(), "Header"); - assertTrue(restHighLevelClient.ping(headers)); - verify(restClient).performRequest(eq("HEAD"), eq("/"), argThat(new HeadersVarargMatcher(headers))); + public void testInfo() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + MainResponse testInfo = new MainResponse("nodeName", Version.CURRENT, new ClusterName("clusterName"), "clusterUuid", + Build.CURRENT, true); + mockResponse(testInfo); + MainResponse receivedInfo = restHighLevelClient.info(headers); + assertEquals(testInfo, receivedInfo); + verify(restClient).performRequest(eq("GET"), eq("/"), eq(Collections.emptyMap()), + isNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); + } + + public void testSearchScroll() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + SearchResponse mockSearchResponse = new SearchResponse(new SearchResponseSections(SearchHits.empty(), InternalAggregations.EMPTY, + null, false, false, null, 1), randomAlphaOfLengthBetween(5, 10), 5, 5, 100, new ShardSearchFailure[0]); + mockResponse(mockSearchResponse); + SearchResponse searchResponse = restHighLevelClient.searchScroll(new SearchScrollRequest(randomAlphaOfLengthBetween(5, 10)), + headers); + assertEquals(mockSearchResponse.getScrollId(), searchResponse.getScrollId()); + assertEquals(0, searchResponse.getHits().totalHits); + assertEquals(5, searchResponse.getTotalShards()); + assertEquals(5, searchResponse.getSuccessfulShards()); + assertEquals(100, searchResponse.getTook().getMillis()); + verify(restClient).performRequest(eq("GET"), eq("/_search/scroll"), eq(Collections.emptyMap()), + isNotNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); + } + + public void testClearScroll() throws IOException { + Header[] headers = randomHeaders(random(), "Header"); + ClearScrollResponse mockClearScrollResponse = new ClearScrollResponse(randomBoolean(), randomIntBetween(0, Integer.MAX_VALUE)); + mockResponse(mockClearScrollResponse); + ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); + clearScrollRequest.addScrollId(randomAlphaOfLengthBetween(5, 10)); + ClearScrollResponse clearScrollResponse = restHighLevelClient.clearScroll(clearScrollRequest, headers); + assertEquals(mockClearScrollResponse.isSucceeded(), clearScrollResponse.isSucceeded()); + assertEquals(mockClearScrollResponse.getNumFreed(), clearScrollResponse.getNumFreed()); + verify(restClient).performRequest(eq("DELETE"), eq("/_search/scroll"), eq(Collections.emptyMap()), + isNotNull(HttpEntity.class), argThat(new HeadersVarargMatcher(headers))); + } + + private void mockResponse(ToXContent toXContent) throws IOException { + Response response = mock(Response.class); + ContentType contentType = ContentType.parse(Request.REQUEST_BODY_CONTENT_TYPE.mediaType()); + String requestBody = toXContent(toXContent, Request.REQUEST_BODY_CONTENT_TYPE, false).utf8ToString(); + when(response.getEntity()).thenReturn(new NStringEntity(requestBody, contentType)); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenReturn(response); + } + + public void testRequestValidation() { + ActionRequestValidationException validationException = new ActionRequestValidationException(); + 
validationException.addValidationError("validation error"); + ActionRequest request = new ActionRequest() { + @Override + public ActionRequestValidationException validate() { + return validationException; + } + }; + + { + ActionRequestValidationException actualException = expectThrows(ActionRequestValidationException.class, + () -> restHighLevelClient.performRequest(request, null, null, null)); + assertSame(validationException, actualException); + } + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + restHighLevelClient.performRequestAsync(request, null, null, trackingActionListener, null); + assertSame(validationException, trackingActionListener.exception.get()); + } + } + + public void testParseEntity() throws IOException { + { + IllegalStateException ise = expectThrows(IllegalStateException.class, () -> restHighLevelClient.parseEntity(null, null)); + assertEquals("Response body expected but not returned", ise.getMessage()); + } + { + IllegalStateException ise = expectThrows(IllegalStateException.class, + () -> restHighLevelClient.parseEntity(new StringEntity("", (ContentType) null), null)); + assertEquals("Elasticsearch didn't return the [Content-Type] header, unable to parse response body", ise.getMessage()); + } + { + StringEntity entity = new StringEntity("", ContentType.APPLICATION_SVG_XML); + IllegalStateException ise = expectThrows(IllegalStateException.class, () -> restHighLevelClient.parseEntity(entity, null)); + assertEquals("Unsupported Content-Type: " + entity.getContentType().getValue(), ise.getMessage()); + } + { + CheckedFunction entityParser = parser -> { + assertEquals(XContentParser.Token.START_OBJECT, parser.nextToken()); + assertEquals(XContentParser.Token.FIELD_NAME, parser.nextToken()); + assertTrue(parser.nextToken().isValue()); + String value = parser.text(); + assertEquals(XContentParser.Token.END_OBJECT, parser.nextToken()); + return value; + }; + HttpEntity jsonEntity = new StringEntity("{\"field\":\"value\"}", ContentType.APPLICATION_JSON); + assertEquals("value", restHighLevelClient.parseEntity(jsonEntity, entityParser)); + HttpEntity yamlEntity = new StringEntity("---\nfield: value\n", ContentType.create("application/yaml")); + assertEquals("value", restHighLevelClient.parseEntity(yamlEntity, entityParser)); + HttpEntity smileEntity = createBinaryEntity(SmileXContent.contentBuilder(), ContentType.create("application/smile")); + assertEquals("value", restHighLevelClient.parseEntity(smileEntity, entityParser)); + HttpEntity cborEntity = createBinaryEntity(CborXContent.contentBuilder(), ContentType.create("application/cbor")); + assertEquals("value", restHighLevelClient.parseEntity(cborEntity, entityParser)); + } + } + + private static HttpEntity createBinaryEntity(XContentBuilder xContentBuilder, ContentType contentType) throws IOException { + try (XContentBuilder builder = xContentBuilder) { + builder.startObject(); + builder.field("field", "value"); + builder.endObject(); + return new ByteArrayEntity(builder.bytes().toBytesRef().bytes, contentType); + } + } + + public void testConvertExistsResponse() { + RestStatus restStatus = randomBoolean() ? 
RestStatus.OK : randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + boolean result = RestHighLevelClient.convertExistsResponse(response); + assertEquals(restStatus == RestStatus.OK, result); + } + + public void testParseResponseException() throws IOException { + { + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + } + { + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":" + restStatus.getStatus() + "}", + ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + } + { + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":", ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IOException.class)); + } + { + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"status\":" + restStatus.getStatus() + "}", ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + ElasticsearchException elasticsearchException = restHighLevelClient.parseResponseException(responseException); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], 
instanceOf(IllegalStateException.class)); + } + } + + public void testPerformRequestOnSuccess() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenReturn(mockResponse); + { + Integer result = restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.emptySet()); + assertEquals(restStatus.getStatus(), result.intValue()); + } + { + IOException ioe = expectThrows(IOException.class, () -> restHighLevelClient.performRequest(mainRequest, + requestConverter, response -> {throw new IllegalStateException();}, Collections.emptySet())); + assertEquals("Unable to parse response body for Response{requestLine=GET / http/1.1, host=http://localhost:9200, " + + "response=http/1.1 " + restStatus.getStatus() + " " + restStatus.name() + "}", ioe.getMessage()); + } + } + + public void testPerformRequestOnResponseExceptionWithoutEntity() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.emptySet())); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + } + + public void testPerformRequestOnResponseExceptionWithEntity() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":" + restStatus.getStatus() + "}", + ContentType.APPLICATION_JSON)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response 
-> response.getStatusLine().getStatusCode(), Collections.emptySet())); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + } + + public void testPerformRequestOnResponseExceptionWithBrokenEntity() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":", ContentType.APPLICATION_JSON)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.emptySet())); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(JsonParseException.class)); } - private class HeadersVarargMatcher extends ArgumentMatcher implements VarargMatcher { + public void testPerformRequestOnResponseExceptionWithBrokenEntity2() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"status\":" + restStatus.getStatus() + "}", ContentType.APPLICATION_JSON)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.emptySet())); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IllegalStateException.class)); + } + + public void testPerformRequestOnResponseExceptionWithIgnores() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + Response mockResponse = 
new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + //although we got an exception, we turn it into a successful response because the status code was provided among ignores + assertEquals(Integer.valueOf(404), restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> response.getStatusLine().getStatusCode(), Collections.singleton(404))); + } + + public void testPerformRequestOnResponseExceptionWithIgnoresErrorNoBody() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> {throw new IllegalStateException();}, Collections.singleton(404))); + assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + } + + public void testPerformRequestOnResponseExceptionWithIgnoresErrorValidBody() throws IOException { + MainRequest mainRequest = new MainRequest(); + CheckedFunction requestConverter = request -> + new Request("GET", "/", Collections.emptyMap(), null); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":404}", + ContentType.APPLICATION_JSON)); + Response mockResponse = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(mockResponse); + when(restClient.performRequest(anyString(), anyString(), anyMapOf(String.class, String.class), + anyObject(), anyVararg())).thenThrow(responseException); + ElasticsearchException elasticsearchException = expectThrows(ElasticsearchException.class, + () -> restHighLevelClient.performRequest(mainRequest, requestConverter, + response -> {throw new IllegalStateException();}, Collections.singleton(404))); + assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + } + + public void testWrapResponseListenerOnSuccess() { + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new 
BasicHttpResponse(newStatusLine(restStatus)); + responseListener.onSuccess(new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse)); + assertNull(trackingActionListener.exception.get()); + assertEquals(restStatus.getStatus(), trackingActionListener.statusCode.get()); + } + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> {throw new IllegalStateException();}, trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + responseListener.onSuccess(new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse)); + assertThat(trackingActionListener.exception.get(), instanceOf(IOException.class)); + IOException ioe = (IOException) trackingActionListener.exception.get(); + assertEquals("Unable to parse response body for Response{requestLine=GET / http/1.1, host=http://localhost:9200, " + + "response=http/1.1 " + restStatus.getStatus() + " " + restStatus.name() + "}", ioe.getMessage()); + assertThat(ioe.getCause(), instanceOf(IllegalStateException.class)); + } + } + + public void testWrapResponseListenerOnException() { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + IllegalStateException exception = new IllegalStateException(); + responseListener.onFailure(exception); + assertSame(exception, trackingActionListener.exception.get()); + } + + public void testWrapResponseListenerOnResponseExceptionWithoutEntity() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException) trackingActionListener.exception.get(); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + } + + public void testWrapResponseListenerOnResponseExceptionWithEntity() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":" + restStatus.getStatus() + "}", + ContentType.APPLICATION_JSON)); + 
Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + } + + public void testWrapResponseListenerOnResponseExceptionWithBrokenEntity() throws IOException { + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"error\":", ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(JsonParseException.class)); + } + { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.emptySet()); + RestStatus restStatus = randomFrom(RestStatus.values()); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(restStatus)); + httpResponse.setEntity(new StringEntity("{\"status\":" + restStatus.getStatus() + "}", ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals("Unable to parse response body", elasticsearchException.getMessage()); + assertEquals(restStatus, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertThat(elasticsearchException.getSuppressed()[0], instanceOf(IllegalStateException.class)); + } + } + + public void testWrapResponseListenerOnResponseExceptionWithIgnores() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + ResponseListener responseListener = 
restHighLevelClient.wrapResponseListener( + response -> response.getStatusLine().getStatusCode(), trackingActionListener, Collections.singleton(404)); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + //although we got an exception, we turn it into a successful response because the status code was provided among ignores + assertNull(trackingActionListener.exception.get()); + assertEquals(404, trackingActionListener.statusCode.get()); + } + + public void testWrapResponseListenerOnResponseExceptionWithIgnoresErrorNoBody() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + //response parsing throws exception while handling ignores. same as when GetResponse#fromXContent throws error when trying + //to parse a 404 response which contains an error rather than a valid document not found response. + ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> { throw new IllegalStateException(); }, trackingActionListener, Collections.singleton(404)); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getCause()); + assertEquals(responseException.getMessage(), elasticsearchException.getMessage()); + } + + public void testWrapResponseListenerOnResponseExceptionWithIgnoresErrorValidBody() throws IOException { + TrackingActionListener trackingActionListener = new TrackingActionListener(); + //response parsing throws exception while handling ignores. same as when GetResponse#fromXContent throws error when trying + //to parse a 404 response which contains an error rather than a valid document not found response. 
+ ResponseListener responseListener = restHighLevelClient.wrapResponseListener( + response -> { throw new IllegalStateException(); }, trackingActionListener, Collections.singleton(404)); + HttpResponse httpResponse = new BasicHttpResponse(newStatusLine(RestStatus.NOT_FOUND)); + httpResponse.setEntity(new StringEntity("{\"error\":\"test error message\",\"status\":404}", + ContentType.APPLICATION_JSON)); + Response response = new Response(REQUEST_LINE, new HttpHost("localhost", 9200), httpResponse); + ResponseException responseException = new ResponseException(response); + responseListener.onFailure(responseException); + assertThat(trackingActionListener.exception.get(), instanceOf(ElasticsearchException.class)); + ElasticsearchException elasticsearchException = (ElasticsearchException)trackingActionListener.exception.get(); + assertEquals(RestStatus.NOT_FOUND, elasticsearchException.status()); + assertSame(responseException, elasticsearchException.getSuppressed()[0]); + assertEquals("Elasticsearch exception [type=exception, reason=test error message]", elasticsearchException.getMessage()); + } + + public void testNamedXContents() { + List namedXContents = RestHighLevelClient.getDefaultNamedXContents(); + assertEquals(45, namedXContents.size()); + Map, Integer> categories = new HashMap<>(); + for (NamedXContentRegistry.Entry namedXContent : namedXContents) { + Integer counter = categories.putIfAbsent(namedXContent.categoryClass, 1); + if (counter != null) { + categories.put(namedXContent.categoryClass, counter + 1); + } + } + assertEquals(2, categories.size()); + assertEquals(Integer.valueOf(42), categories.get(Aggregation.class)); + assertEquals(Integer.valueOf(3), categories.get(Suggest.Suggestion.class)); + } + + private static class TrackingActionListener implements ActionListener { + private final AtomicInteger statusCode = new AtomicInteger(-1); + private final AtomicReference exception = new AtomicReference<>(); + + @Override + public void onResponse(Integer statusCode) { + assertTrue(this.statusCode.compareAndSet(-1, statusCode)); + } + + @Override + public void onFailure(Exception e) { + assertTrue(exception.compareAndSet(null, e)); + } + } + + private static class HeadersVarargMatcher extends ArgumentMatcher implements VarargMatcher { private Header[] expectedHeaders; HeadersVarargMatcher(Header... expectedHeaders) { @@ -84,4 +659,8 @@ public boolean matches(Object varargArgument) { return false; } } + + private static StatusLine newStatusLine(RestStatus restStatus) { + return new BasicStatusLine(HTTP_PROTOCOL, restStatus.getStatus(), restStatus.name()); + } } diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java new file mode 100644 index 0000000000000..328f2ee32f557 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/SearchIT.java @@ -0,0 +1,464 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client; + +import org.apache.http.HttpEntity; +import org.apache.http.entity.ContentType; +import org.apache.http.entity.StringEntity; +import org.apache.http.nio.entity.NStringEntity; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ElasticsearchStatusException; +import org.elasticsearch.action.search.ClearScrollRequest; +import org.elasticsearch.action.search.ClearScrollResponse; +import org.elasticsearch.action.search.SearchRequest; +import org.elasticsearch.action.search.SearchResponse; +import org.elasticsearch.action.search.SearchScrollRequest; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.index.query.MatchQueryBuilder; +import org.elasticsearch.join.aggregations.Children; +import org.elasticsearch.join.aggregations.ChildrenAggregationBuilder; +import org.elasticsearch.rest.RestStatus; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.aggregations.bucket.range.Range; +import org.elasticsearch.search.aggregations.bucket.range.RangeAggregationBuilder; +import org.elasticsearch.search.aggregations.bucket.terms.Terms; +import org.elasticsearch.search.aggregations.bucket.terms.TermsAggregationBuilder; +import org.elasticsearch.search.aggregations.matrix.stats.MatrixStats; +import org.elasticsearch.search.aggregations.matrix.stats.MatrixStatsAggregationBuilder; +import org.elasticsearch.search.aggregations.support.ValueType; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.sort.SortOrder; +import org.elasticsearch.search.suggest.Suggest; +import org.elasticsearch.search.suggest.SuggestBuilder; +import org.elasticsearch.search.suggest.phrase.PhraseSuggestionBuilder; +import org.junit.Before; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; + +import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; +import static org.hamcrest.Matchers.both; +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.either; +import static org.hamcrest.Matchers.equalTo; +import static org.hamcrest.Matchers.greaterThan; +import static org.hamcrest.Matchers.greaterThanOrEqualTo; +import static org.hamcrest.Matchers.instanceOf; +import static org.hamcrest.Matchers.lessThan; + +public class SearchIT extends ESRestHighLevelClientTestCase { + + @Before + public void indexDocuments() throws IOException { + StringEntity doc1 = new StringEntity("{\"type\":\"type1\", \"num\":10, \"num2\":50}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/1", Collections.emptyMap(), doc1); + StringEntity doc2 = new StringEntity("{\"type\":\"type1\", \"num\":20, \"num2\":40}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/2", Collections.emptyMap(), doc2); + StringEntity doc3 = new StringEntity("{\"type\":\"type1\", \"num\":50, \"num2\":35}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/3", Collections.emptyMap(), 
doc3); + StringEntity doc4 = new StringEntity("{\"type\":\"type2\", \"num\":100, \"num2\":10}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/4", Collections.emptyMap(), doc4); + StringEntity doc5 = new StringEntity("{\"type\":\"type2\", \"num\":100, \"num2\":10}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/index/type/5", Collections.emptyMap(), doc5); + client().performRequest("POST", "/index/_refresh"); + } + + public void testSearchNoQuery() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getAggregations()); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(5, searchResponse.getHits().totalHits); + assertEquals(5, searchResponse.getHits().getHits().length); + for (SearchHit searchHit : searchResponse.getHits().getHits()) { + assertEquals("index", searchHit.getIndex()); + assertEquals("type", searchHit.getType()); + assertThat(Integer.valueOf(searchHit.getId()), both(greaterThan(0)).and(lessThan(6))); + assertEquals(1.0f, searchHit.getScore(), 0); + assertEquals(-1L, searchHit.getVersion()); + assertNotNull(searchHit.getSourceAsMap()); + assertEquals(3, searchHit.getSourceAsMap().size()); + assertTrue(searchHit.getSourceAsMap().containsKey("type")); + assertTrue(searchHit.getSourceAsMap().containsKey("num")); + assertTrue(searchHit.getSourceAsMap().containsKey("num2")); + } + } + + public void testSearchMatchQuery() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + searchRequest.source(new SearchSourceBuilder().query(new MatchQueryBuilder("num", 10))); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getAggregations()); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(1, searchResponse.getHits().totalHits); + assertEquals(1, searchResponse.getHits().getHits().length); + assertThat(searchResponse.getHits().getMaxScore(), greaterThan(0f)); + SearchHit searchHit = searchResponse.getHits().getHits()[0]; + assertEquals("index", searchHit.getIndex()); + assertEquals("type", searchHit.getType()); + assertEquals("1", searchHit.getId()); + assertThat(searchHit.getScore(), greaterThan(0f)); + assertEquals(-1L, searchHit.getVersion()); + assertNotNull(searchHit.getSourceAsMap()); + assertEquals(3, searchHit.getSourceAsMap().size()); + assertEquals("type1", searchHit.getSourceAsMap().get("type")); + assertEquals(50, searchHit.getSourceAsMap().get("num2")); + } + + public void testSearchWithTermsAgg() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.aggregation(new TermsAggregationBuilder("agg1", ValueType.STRING).field("type.keyword")); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(0, 
searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + Terms termsAgg = searchResponse.getAggregations().get("agg1"); + assertEquals("agg1", termsAgg.getName()); + assertEquals(2, termsAgg.getBuckets().size()); + Terms.Bucket type1 = termsAgg.getBucketByKey("type1"); + assertEquals(3, type1.getDocCount()); + assertEquals(0, type1.getAggregations().asList().size()); + Terms.Bucket type2 = termsAgg.getBucketByKey("type2"); + assertEquals(2, type2.getDocCount()); + assertEquals(0, type2.getAggregations().asList().size()); + } + + public void testSearchWithRangeAgg() throws IOException { + { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.aggregation(new RangeAggregationBuilder("agg1").field("num")); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, + () -> execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync)); + assertEquals(RestStatus.BAD_REQUEST, exception.status()); + } + + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.aggregation(new RangeAggregationBuilder("agg1").field("num") + .addRange("first", 0, 30).addRange("second", 31, 200)); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(5, searchResponse.getHits().totalHits); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + Range rangeAgg = searchResponse.getAggregations().get("agg1"); + assertEquals("agg1", rangeAgg.getName()); + assertEquals(2, rangeAgg.getBuckets().size()); + { + Range.Bucket bucket = rangeAgg.getBuckets().get(0); + assertEquals("first", bucket.getKeyAsString()); + assertEquals(2, bucket.getDocCount()); + } + { + Range.Bucket bucket = rangeAgg.getBuckets().get(1); + assertEquals("second", bucket.getKeyAsString()); + assertEquals(3, bucket.getDocCount()); + } + } + + public void testSearchWithTermsAndRangeAgg() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + TermsAggregationBuilder agg = new TermsAggregationBuilder("agg1", ValueType.STRING).field("type.keyword"); + agg.subAggregation(new RangeAggregationBuilder("subagg").field("num") + .addRange("first", 0, 30).addRange("second", 31, 200)); + searchSourceBuilder.aggregation(agg); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + Terms termsAgg = searchResponse.getAggregations().get("agg1"); + assertEquals("agg1", termsAgg.getName()); + assertEquals(2, 
termsAgg.getBuckets().size()); + Terms.Bucket type1 = termsAgg.getBucketByKey("type1"); + assertEquals(3, type1.getDocCount()); + assertEquals(1, type1.getAggregations().asList().size()); + { + Range rangeAgg = type1.getAggregations().get("subagg"); + assertEquals(2, rangeAgg.getBuckets().size()); + { + Range.Bucket bucket = rangeAgg.getBuckets().get(0); + assertEquals("first", bucket.getKeyAsString()); + assertEquals(2, bucket.getDocCount()); + } + { + Range.Bucket bucket = rangeAgg.getBuckets().get(1); + assertEquals("second", bucket.getKeyAsString()); + assertEquals(1, bucket.getDocCount()); + } + } + Terms.Bucket type2 = termsAgg.getBucketByKey("type2"); + assertEquals(2, type2.getDocCount()); + assertEquals(1, type2.getAggregations().asList().size()); + { + Range rangeAgg = type2.getAggregations().get("subagg"); + assertEquals(2, rangeAgg.getBuckets().size()); + { + Range.Bucket bucket = rangeAgg.getBuckets().get(0); + assertEquals("first", bucket.getKeyAsString()); + assertEquals(0, bucket.getDocCount()); + } + { + Range.Bucket bucket = rangeAgg.getBuckets().get(1); + assertEquals("second", bucket.getKeyAsString()); + assertEquals(2, bucket.getDocCount()); + } + } + } + + public void testSearchWithMatrixStats() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.aggregation(new MatrixStatsAggregationBuilder("agg1").fields(Arrays.asList("num", "num2"))); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(5, searchResponse.getHits().totalHits); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + assertEquals(1, searchResponse.getAggregations().asList().size()); + MatrixStats matrixStats = searchResponse.getAggregations().get("agg1"); + assertEquals(5, matrixStats.getFieldCount("num")); + assertEquals(56d, matrixStats.getMean("num"), 0d); + assertEquals(1830d, matrixStats.getVariance("num"), 0d); + assertEquals(0.09340198804973057, matrixStats.getSkewness("num"), 0d); + assertEquals(1.2741646510794589, matrixStats.getKurtosis("num"), 0d); + assertEquals(5, matrixStats.getFieldCount("num2")); + assertEquals(29d, matrixStats.getMean("num2"), 0d); + assertEquals(330d, matrixStats.getVariance("num2"), 0d); + assertEquals(-0.13568039346585542, matrixStats.getSkewness("num2"), 0d); + assertEquals(1.3517561983471074, matrixStats.getKurtosis("num2"), 0d); + assertEquals(-767.5, matrixStats.getCovariance("num", "num2"), 0d); + assertEquals(-0.9876336291667923, matrixStats.getCorrelation("num", "num2"), 0d); + } + + public void testSearchWithParentJoin() throws IOException { + StringEntity parentMapping = new StringEntity("{\n" + + " \"mappings\": {\n" + + " \"answer\" : {\n" + + " \"_parent\" : {\n" + + " \"type\" : \"question\"\n" + + " }\n" + + " }\n" + + " },\n" + + " \"settings\": {\n" + + " \"index.mapping.single_type\": false" + + " }\n" + + "}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/child_example", Collections.emptyMap(), parentMapping); + StringEntity questionDoc = new StringEntity("{\n" + + " \"body\": \"
<p>
I have Windows 2003 server and i bought a new Windows 2008 server...\",\n" + + " \"title\": \"Whats the best way to file transfer my site from server to a newer one?\",\n" + + " \"tags\": [\n" + + " \"windows-server-2003\",\n" + + " \"windows-server-2008\",\n" + + " \"file-transfer\"\n" + + " ]\n" + + "}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/child_example/question/1", Collections.emptyMap(), questionDoc); + StringEntity answerDoc1 = new StringEntity("{\n" + + " \"owner\": {\n" + + " \"location\": \"Norfolk, United Kingdom\",\n" + + " \"display_name\": \"Sam\",\n" + + " \"id\": 48\n" + + " },\n" + + " \"body\": \"
<p>
Unfortunately you're pretty much limited to FTP...\",\n" + + " \"creation_date\": \"2009-05-04T13:45:37.030\"\n" + + "}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "child_example/answer/1", Collections.singletonMap("parent", "1"), answerDoc1); + StringEntity answerDoc2 = new StringEntity("{\n" + + " \"owner\": {\n" + + " \"location\": \"Norfolk, United Kingdom\",\n" + + " \"display_name\": \"Troll\",\n" + + " \"id\": 49\n" + + " },\n" + + " \"body\": \"
<p>
Use Linux...\",\n" + + " \"creation_date\": \"2009-05-05T13:45:37.030\"\n" + + "}", ContentType.APPLICATION_JSON); + client().performRequest("PUT", "/child_example/answer/2", Collections.singletonMap("parent", "1"), answerDoc2); + client().performRequest("POST", "/_refresh"); + + TermsAggregationBuilder leafTermAgg = new TermsAggregationBuilder("top-names", ValueType.STRING) + .field("owner.display_name.keyword").size(10); + ChildrenAggregationBuilder childrenAgg = new ChildrenAggregationBuilder("to-answers", "answer").subAggregation(leafTermAgg); + TermsAggregationBuilder termsAgg = new TermsAggregationBuilder("top-tags", ValueType.STRING).field("tags.keyword") + .size(10).subAggregation(childrenAgg); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.size(0).aggregation(termsAgg); + SearchRequest searchRequest = new SearchRequest("child_example"); + searchRequest.source(searchSourceBuilder); + + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getSuggest()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(3, searchResponse.getHits().totalHits); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + assertEquals(1, searchResponse.getAggregations().asList().size()); + Terms terms = searchResponse.getAggregations().get("top-tags"); + assertEquals(0, terms.getDocCountError()); + assertEquals(0, terms.getSumOfOtherDocCounts()); + assertEquals(3, terms.getBuckets().size()); + for (Terms.Bucket bucket : terms.getBuckets()) { + assertThat(bucket.getKeyAsString(), + either(equalTo("file-transfer")).or(equalTo("windows-server-2003")).or(equalTo("windows-server-2008"))); + assertEquals(1, bucket.getDocCount()); + assertEquals(1, bucket.getAggregations().asList().size()); + Children children = bucket.getAggregations().get("to-answers"); + assertEquals(2, children.getDocCount()); + assertEquals(1, children.getAggregations().asList().size()); + Terms leafTerms = children.getAggregations().get("top-names"); + assertEquals(0, leafTerms.getDocCountError()); + assertEquals(0, leafTerms.getSumOfOtherDocCounts()); + assertEquals(2, leafTerms.getBuckets().size()); + assertEquals(2, leafTerms.getBuckets().size()); + Terms.Bucket sam = leafTerms.getBucketByKey("Sam"); + assertEquals(1, sam.getDocCount()); + Terms.Bucket troll = leafTerms.getBucketByKey("Troll"); + assertEquals(1, troll.getDocCount()); + } + } + + public void testSearchWithSuggest() throws IOException { + SearchRequest searchRequest = new SearchRequest(); + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder(); + searchSourceBuilder.suggest(new SuggestBuilder().addSuggestion("sugg1", new PhraseSuggestionBuilder("type")) + .setGlobalText("type")); + searchSourceBuilder.size(0); + searchRequest.source(searchSourceBuilder); + + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + assertSearchHeader(searchResponse); + assertNull(searchResponse.getAggregations()); + assertEquals(Collections.emptyMap(), searchResponse.getProfileResults()); + assertEquals(0, searchResponse.getHits().totalHits); + assertEquals(0f, searchResponse.getHits().getMaxScore(), 0f); + assertEquals(0, searchResponse.getHits().getHits().length); + assertEquals(1, searchResponse.getSuggest().size()); + + 
Suggest.Suggestion> sugg = searchResponse + .getSuggest().iterator().next(); + assertEquals("sugg1", sugg.getName()); + for (Suggest.Suggestion.Entry options : sugg) { + assertEquals("type", options.getText().string()); + assertEquals(0, options.getOffset()); + assertEquals(4, options.getLength()); + assertEquals(2 ,options.getOptions().size()); + for (Suggest.Suggestion.Entry.Option option : options) { + assertThat(option.getScore(), greaterThan(0f)); + assertThat(option.getText().string(), either(equalTo("type1")).or(equalTo("type2"))); + } + } + } + + public void testSearchScroll() throws Exception { + + for (int i = 0; i < 100; i++) { + XContentBuilder builder = jsonBuilder().startObject().field("field", i).endObject(); + HttpEntity entity = new NStringEntity(builder.string(), ContentType.APPLICATION_JSON); + client().performRequest("PUT", "test/type1/" + Integer.toString(i), Collections.emptyMap(), entity); + } + client().performRequest("POST", "/test/_refresh"); + + SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder().size(35).sort("field", SortOrder.ASC); + SearchRequest searchRequest = new SearchRequest("test").scroll(TimeValue.timeValueMinutes(2)).source(searchSourceBuilder); + SearchResponse searchResponse = execute(searchRequest, highLevelClient()::search, highLevelClient()::searchAsync); + + try { + long counter = 0; + assertSearchHeader(searchResponse); + assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L)); + assertThat(searchResponse.getHits().getHits().length, equalTo(35)); + for (SearchHit hit : searchResponse.getHits()) { + assertThat(((Number) hit.getSortValues()[0]).longValue(), equalTo(counter++)); + } + + searchResponse = execute(new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)), + highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync); + + assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L)); + assertThat(searchResponse.getHits().getHits().length, equalTo(35)); + for (SearchHit hit : searchResponse.getHits()) { + assertEquals(counter++, ((Number) hit.getSortValues()[0]).longValue()); + } + + searchResponse = execute(new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)), + highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync); + + assertThat(searchResponse.getHits().getTotalHits(), equalTo(100L)); + assertThat(searchResponse.getHits().getHits().length, equalTo(30)); + for (SearchHit hit : searchResponse.getHits()) { + assertEquals(counter++, ((Number) hit.getSortValues()[0]).longValue()); + } + } finally { + ClearScrollRequest clearScrollRequest = new ClearScrollRequest(); + clearScrollRequest.addScrollId(searchResponse.getScrollId()); + ClearScrollResponse clearScrollResponse = execute(clearScrollRequest, + // Not using a method reference to work around https://bugs.eclipse.org/bugs/show_bug.cgi?id=517951 + (request, headers) -> highLevelClient().clearScroll(request, headers), + (request, listener, headers) -> highLevelClient().clearScrollAsync(request, listener, headers)); + assertThat(clearScrollResponse.getNumFreed(), greaterThan(0)); + assertTrue(clearScrollResponse.isSucceeded()); + + SearchScrollRequest scrollRequest = new SearchScrollRequest(searchResponse.getScrollId()).scroll(TimeValue.timeValueMinutes(2)); + ElasticsearchStatusException exception = expectThrows(ElasticsearchStatusException.class, () -> execute(scrollRequest, + highLevelClient()::searchScroll, highLevelClient()::searchScrollAsync)); + 
assertEquals(RestStatus.NOT_FOUND, exception.status()); + assertThat(exception.getRootCause(), instanceOf(ElasticsearchException.class)); + ElasticsearchException rootCause = (ElasticsearchException) exception.getRootCause(); + assertThat(rootCause.getMessage(), containsString("No search context found for")); + } + } + + private static void assertSearchHeader(SearchResponse searchResponse) { + assertThat(searchResponse.getTook().nanos(), greaterThanOrEqualTo(0L)); + assertEquals(0, searchResponse.getFailedShards()); + assertThat(searchResponse.getTotalShards(), greaterThan(0)); + assertEquals(searchResponse.getTotalShards(), searchResponse.getSuccessfulShards()); + assertEquals(0, searchResponse.getShardFailures().length); + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/DeleteDocumentationIT.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/DeleteDocumentationIT.java new file mode 100644 index 0000000000000..00c19019f47e7 --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/DeleteDocumentationIT.java @@ -0,0 +1,112 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.client.documentation; + +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.DocWriteResponse; +import org.elasticsearch.action.delete.DeleteRequest; +import org.elasticsearch.action.delete.DeleteResponse; +import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.client.ESRestHighLevelClientTestCase; +import org.elasticsearch.client.RestHighLevelClient; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.index.VersionType; +import org.elasticsearch.rest.RestStatus; + +import java.io.IOException; + +/** + * This class is used to generate the Java Delete API documentation. + * You need to wrap your code between two tags like: + * // tag::example[] + * // end::example[] + * + * Where example is your tag name. 
+ * + * Then in the documentation, you can extract what is between tag and end tags with + * ["source","java",subs="attributes,callouts"] + * -------------------------------------------------- + * sys2::[perl -ne 'exit if /end::example/; print if $tag; $tag = $tag || /tag::example/' \ + * {docdir}/../../client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/DeleteDocumentationIT.java] + * -------------------------------------------------- + */ +public class DeleteDocumentationIT extends ESRestHighLevelClientTestCase { + + /** + * This test documents docs/java-rest/high-level/document/delete.asciidoc + */ + public void testDelete() throws IOException { + RestHighLevelClient client = highLevelClient(); + + // tag::delete-request + DeleteRequest request = new DeleteRequest( + "index", // <1> + "type", // <2> + "id"); // <3> + // end::delete-request + + // tag::delete-request-props + request.timeout(TimeValue.timeValueSeconds(1)); // <1> + request.timeout("1s"); // <2> + request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL); // <3> + request.setRefreshPolicy("wait_for"); // <4> + request.version(2); // <5> + request.versionType(VersionType.EXTERNAL); // <6> + // end::delete-request-props + + // tag::delete-execute + DeleteResponse response = client.delete(request); + // end::delete-execute + + try { + // tag::delete-notfound + if (response.getResult().equals(DocWriteResponse.Result.NOT_FOUND)) { + throw new Exception("Can't find document to be removed"); // <1> + } + // end::delete-notfound + } catch (Exception ignored) { } + + // tag::delete-execute-async + client.deleteAsync(request, new ActionListener() { + @Override + public void onResponse(DeleteResponse deleteResponse) { + // <1> + } + + @Override + public void onFailure(Exception e) { + // <2> + } + }); + // end::delete-execute-async + + // tag::delete-conflict + try { + client.delete(request); + } catch (ElasticsearchException exception) { + if (exception.status().equals(RestStatus.CONFLICT)) { + // <1> + } + } + // end::delete-conflict + + } +} diff --git a/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java new file mode 100644 index 0000000000000..01a5eb5dfc12d --- /dev/null +++ b/client/rest-high-level/src/test/java/org/elasticsearch/client/documentation/QueryDSLDocumentationTests.java @@ -0,0 +1,453 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client.documentation; + +import org.apache.lucene.search.join.ScoreMode; +import org.elasticsearch.common.geo.GeoPoint; +import org.elasticsearch.common.geo.ShapeRelation; +import org.elasticsearch.common.geo.builders.CoordinatesBuilder; +import org.elasticsearch.common.geo.builders.ShapeBuilders; +import org.elasticsearch.common.unit.DistanceUnit; +import org.elasticsearch.index.query.GeoShapeQueryBuilder; +import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder; +import org.elasticsearch.index.query.functionscore.FunctionScoreQueryBuilder.FilterFunctionBuilder; +import org.elasticsearch.script.Script; +import org.elasticsearch.script.ScriptType; +import org.elasticsearch.test.ESTestCase; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import static java.util.Collections.singletonMap; +import static org.elasticsearch.index.query.QueryBuilders.boolQuery; +import static org.elasticsearch.index.query.QueryBuilders.boostingQuery; +import static org.elasticsearch.index.query.QueryBuilders.commonTermsQuery; +import static org.elasticsearch.index.query.QueryBuilders.constantScoreQuery; +import static org.elasticsearch.index.query.QueryBuilders.disMaxQuery; +import static org.elasticsearch.index.query.QueryBuilders.existsQuery; +import static org.elasticsearch.index.query.QueryBuilders.functionScoreQuery; +import static org.elasticsearch.index.query.QueryBuilders.fuzzyQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoBoundingBoxQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoDistanceQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoPolygonQuery; +import static org.elasticsearch.index.query.QueryBuilders.geoShapeQuery; +import static org.elasticsearch.index.query.QueryBuilders.idsQuery; +import static org.elasticsearch.index.query.QueryBuilders.matchAllQuery; +import static org.elasticsearch.index.query.QueryBuilders.matchQuery; +import static org.elasticsearch.index.query.QueryBuilders.moreLikeThisQuery; +import static org.elasticsearch.index.query.QueryBuilders.multiMatchQuery; +import static org.elasticsearch.index.query.QueryBuilders.nestedQuery; +import static org.elasticsearch.index.query.QueryBuilders.prefixQuery; +import static org.elasticsearch.index.query.QueryBuilders.queryStringQuery; +import static org.elasticsearch.index.query.QueryBuilders.rangeQuery; +import static org.elasticsearch.index.query.QueryBuilders.regexpQuery; +import static org.elasticsearch.index.query.QueryBuilders.scriptQuery; +import static org.elasticsearch.index.query.QueryBuilders.simpleQueryStringQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanContainingQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanFirstQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanMultiTermQueryBuilder; +import static org.elasticsearch.index.query.QueryBuilders.spanNearQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanNotQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanOrQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanTermQuery; +import static org.elasticsearch.index.query.QueryBuilders.spanWithinQuery; +import static org.elasticsearch.index.query.QueryBuilders.termQuery; +import static org.elasticsearch.index.query.QueryBuilders.termsQuery; +import static org.elasticsearch.index.query.QueryBuilders.typeQuery; 
+import static org.elasticsearch.index.query.QueryBuilders.wildcardQuery; +import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.exponentialDecayFunction; +import static org.elasticsearch.index.query.functionscore.ScoreFunctionBuilders.randomFunction; +import static org.elasticsearch.join.query.JoinQueryBuilders.hasChildQuery; +import static org.elasticsearch.join.query.JoinQueryBuilders.hasParentQuery; + +/** + * Examples of using the transport client that are imported into the transport client documentation. + * There are no assertions here because we're mostly concerned with making sure that the examples + * compile and don't throw weird runtime exceptions. Assertions and example data would be nice, but + * that is secondary. + */ +public class QueryDSLDocumentationTests extends ESTestCase { + public void testBool() { + // tag::bool + boolQuery() + .must(termQuery("content", "test1")) // <1> + .must(termQuery("content", "test4")) // <1> + .mustNot(termQuery("content", "test2")) // <2> + .should(termQuery("content", "test3")) // <3> + .filter(termQuery("content", "test5")); // <4> + // end::bool + } + + public void testBoosting() { + // tag::boosting + boostingQuery( + termQuery("name","kimchy"), // <1> + termQuery("name","dadoonet")) // <2> + .negativeBoost(0.2f); // <3> + // end::boosting + } + + public void testCommonTerms() { + // tag::common_terms + commonTermsQuery("name", // <1> + "kimchy"); // <2> + // end::common_terms + } + + public void testConstantScore() { + // tag::constant_score + constantScoreQuery( + termQuery("name","kimchy")) // <1> + .boost(2.0f); // <2> + // end::constant_score + } + + public void testDisMax() { + // tag::dis_max + disMaxQuery() + .add(termQuery("name", "kimchy")) // <1> + .add(termQuery("name", "elasticsearch")) // <2> + .boost(1.2f) // <3> + .tieBreaker(0.7f); // <4> + // end::dis_max + } + + public void testExists() { + // tag::exists + existsQuery("name"); // <1> + // end::exists + } + + public void testFunctionScore() { + // tag::function_score + FilterFunctionBuilder[] functions = { + new FunctionScoreQueryBuilder.FilterFunctionBuilder( + matchQuery("name", "kimchy"), // <1> + randomFunction("ABCDEF")), // <2> + new FunctionScoreQueryBuilder.FilterFunctionBuilder( + exponentialDecayFunction("age", 0L, 1L)) // <3> + }; + functionScoreQuery(functions); + // end::function_score + } + + public void testFuzzy() { + // tag::fuzzy + fuzzyQuery( + "name", // <1> + "kimchy"); // <2> + // end::fuzzy + } + + public void testGeoBoundingBox() { + // tag::geo_bounding_box + geoBoundingBoxQuery("pin.location") // <1> + .setCorners(40.73, -74.1, // <2> + 40.717, -73.99); // <3> + // end::geo_bounding_box + } + + public void testGeoDistance() { + // tag::geo_distance + geoDistanceQuery("pin.location") // <1> + .point(40, -70) // <2> + .distance(200, DistanceUnit.KILOMETERS); // <3> + // end::geo_distance + } + + public void testGeoPolygon() { + // tag::geo_polygon + List points = new ArrayList(); // <1> + points.add(new GeoPoint(40, -70)); + points.add(new GeoPoint(30, -80)); + points.add(new GeoPoint(20, -90)); + geoPolygonQuery("pin.location", points); // <2> + // end::geo_polygon + } + + public void testGeoShape() throws IOException { + { + // tag::geo_shape + GeoShapeQueryBuilder qb = geoShapeQuery( + "pin.location", // <1> + ShapeBuilders.newMultiPoint( // <2> + new CoordinatesBuilder() + .coordinate(0, 0) + .coordinate(0, 10) + .coordinate(10, 10) + .coordinate(10, 0) + .coordinate(0, 0) + .build())); + 
qb.relation(ShapeRelation.WITHIN); // <3> + // end::geo_shape + } + + { + // tag::indexed_geo_shape + // Using pre-indexed shapes + GeoShapeQueryBuilder qb = geoShapeQuery( + "pin.location", // <1> + "DEU", // <2> + "countries"); // <3> + qb.relation(ShapeRelation.WITHIN) // <4> + .indexedShapeIndex("shapes") // <5> + .indexedShapePath("location"); // <6> + // end::indexed_geo_shape + } + } + + public void testHasChild() { + // tag::has_child + hasChildQuery( + "blog_tag", // <1> + termQuery("tag","something"), // <2> + ScoreMode.None); // <3> + // end::has_child + } + + public void testHasParent() { + // tag::has_parent + hasParentQuery( + "blog", // <1> + termQuery("tag","something"), // <2> + false); // <3> + // end::has_parent + } + + public void testIds() { + // tag::ids + idsQuery("my_type", "type2") + .addIds("1", "4", "100"); + + idsQuery() // <1> + .addIds("1", "4", "100"); + // end::ids + } + + public void testMatchAll() { + // tag::match_all + matchAllQuery(); + // end::match_all + } + + public void testMatch() { + // tag::match + matchQuery( + "name", // <1> + "kimchy elasticsearch"); // <2> + // end::match + } + + public void testMoreLikeThis() { + // tag::more_like_this + String[] fields = {"name.first", "name.last"}; // <1> + String[] texts = {"text like this one"}; // <2> + + moreLikeThisQuery(fields, texts, null) + .minTermFreq(1) // <3> + .maxQueryTerms(12); // <4> + // end::more_like_this + } + + public void testMultiMatch() { + // tag::multi_match + multiMatchQuery( + "kimchy elasticsearch", // <1> + "user", "message"); // <2> + // end::multi_match + } + + public void testNested() { + // tag::nested + nestedQuery( + "obj1", // <1> + boolQuery() // <2> + .must(matchQuery("obj1.name", "blue")) + .must(rangeQuery("obj1.count").gt(5)), + ScoreMode.Avg); // <3> + // end::nested + } + + public void testPrefix() { + // tag::prefix + prefixQuery( + "brand", // <1> + "heine"); // <2> + // end::prefix + } + + public void testQueryString() { + // tag::query_string + queryStringQuery("+kimchy -elasticsearch"); + // end::query_string + } + + public void testRange() { + // tag::range + rangeQuery("price") // <1> + .from(5) // <2> + .to(10) // <3> + .includeLower(true) // <4> + .includeUpper(false); // <5> + // end::range + + // tag::range_simplified + // A simplified form using gte, gt, lt or lte + rangeQuery("age") // <1> + .gte("10") // <2> + .lt("20"); // <3> + // end::range_simplified + } + + public void testRegExp() { + // tag::regexp + regexpQuery( + "name.first", // <1> + "s.*y"); // <2> + // end::regexp + } + + public void testScript() { + // tag::script_inline + scriptQuery( + new Script("doc['num1'].value > 1") // <1> + ); + // end::script_inline + + // tag::script_file + Map parameters = new HashMap<>(); + parameters.put("param1", 5); + scriptQuery(new Script( + ScriptType.STORED, // <1> + "painless", // <2> + "myscript", // <3> + singletonMap("param1", 5))); // <4> + // end::script_file + } + + public void testSimpleQueryString() { + // tag::simple_query_string + simpleQueryStringQuery("+kimchy -elasticsearch"); + // end::simple_query_string + } + + public void testSpanContaining() { + // tag::span_containing + spanContainingQuery( + spanNearQuery(spanTermQuery("field1","bar"), 5) // <1> + .addClause(spanTermQuery("field1","baz")) + .inOrder(true), + spanTermQuery("field1","foo")); // <2> + // end::span_containing + } + + public void testSpanFirst() { + // tag::span_first + spanFirstQuery( + spanTermQuery("user", "kimchy"), // <1> + 3 // <2> + ); + // end::span_first + } 
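The query builders in this class are only constructed, never executed, which is why the tests need no running cluster. For context, a minimal sketch of how one of these builders would typically be run follows, wiring it into a `SearchSourceBuilder` and `SearchRequest` and sending it through the high-level client the same way `SearchIT` does elsewhere in this change. The `client` instance, the `posts` index name and the field names are placeholders for illustration, not part of this test class.

```java
// Illustrative sketch only; assumes the static imports already present in this class
// (boolQuery, termQuery, rangeQuery) plus org.elasticsearch.action.search.SearchRequest,
// org.elasticsearch.action.search.SearchResponse, org.elasticsearch.search.SearchHit and
// org.elasticsearch.search.builder.SearchSourceBuilder, and an externally created
// RestHighLevelClient named `client`. Runs inside a method declaring `throws IOException`.
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder()
        .query(boolQuery()
                .must(termQuery("user", "kimchy"))          // hypothetical field and value
                .filter(rangeQuery("age").gte(10).lt(20)))  // hypothetical range filter
        .size(10);                                          // cap the number of hits returned
SearchRequest searchRequest = new SearchRequest("posts").source(sourceBuilder);
SearchResponse searchResponse = client.search(searchRequest);
for (SearchHit hit : searchResponse.getHits().getHits()) {
    // each hit exposes the document id and its source as a map
    System.out.println(hit.getId() + " -> " + hit.getSourceAsMap());
}
```

The same pattern applies to any of the builders above: construct the query, attach it to a `SearchSourceBuilder`, and pass the resulting `SearchRequest` to `RestHighLevelClient#search`.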
+ + public void testSpanMultiTerm() { + // tag::span_multi + spanMultiTermQueryBuilder( + prefixQuery("user", "ki")); // <1> + // end::span_multi + } + + public void testSpanNear() { + // tag::span_near + spanNearQuery( + spanTermQuery("field","value1"), // <1> + 12) // <2> + .addClause(spanTermQuery("field","value2")) // <1> + .addClause(spanTermQuery("field","value3")) // <1> + .inOrder(false); // <3> + // end::span_near + } + + public void testSpanNot() { + // tag::span_not + spanNotQuery( + spanTermQuery("field","value1"), // <1> + spanTermQuery("field","value2")); // <2> + // end::span_not + } + + public void testSpanOr() { + // tag::span_or + spanOrQuery(spanTermQuery("field","value1")) // <1> + .addClause(spanTermQuery("field","value2")) // <1> + .addClause(spanTermQuery("field","value3")); // <1> + // end::span_or + } + + public void testSpanTerm() { + // tag::span_term + spanTermQuery( + "user", // <1> + "kimchy"); // <2> + // end::span_term + } + + public void testSpanWithin() { + // tag::span_within + spanWithinQuery( + spanNearQuery(spanTermQuery("field1", "bar"), 5) // <1> + .addClause(spanTermQuery("field1", "baz")) + .inOrder(true), + spanTermQuery("field1", "foo")); // <2> + // end::span_within + } + + public void testTerm() { + // tag::term + termQuery( + "name", // <1> + "kimchy"); // <2> + // end::term + } + + public void testTerms() { + // tag::terms + termsQuery("tags", // <1> + "blue", "pill"); // <2> + // end::terms + } + + public void testType() { + // tag::type + typeQuery("my_type"); // <1> + // end::type + } + + public void testWildcard() { + // tag::wildcard + wildcardQuery( + "user", // <1> + "k?mch*"); // <2> + // end::wildcard + } +} diff --git a/client/rest/build.gradle b/client/rest/build.gradle index 67f8426fb5faa..19ec584a1032d 100644 --- a/client/rest/build.gradle +++ b/client/rest/build.gradle @@ -33,7 +33,7 @@ group = 'org.elasticsearch.client' dependencies { compile "org.apache.httpcomponents:httpclient:${versions.httpclient}" compile "org.apache.httpcomponents:httpcore:${versions.httpcore}" - compile "org.apache.httpcomponents:httpasyncclient:4.1.2" + compile "org.apache.httpcomponents:httpasyncclient:${versions.httpasyncclient}" compile "org.apache.httpcomponents:httpcore-nio:${versions.httpcore}" compile "commons-codec:commons-codec:${versions.commonscodec}" compile "commons-logging:commons-logging:${versions.commonslogging}" @@ -49,8 +49,9 @@ dependencies { } forbiddenApisMain { - //client does not depend on core, so only jdk signatures should be checked - signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')] + //client does not depend on core, so only jdk and http signatures should be checked + signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt'), + PrecommitTasks.getResource('/forbidden/http-signatures.txt')] } forbiddenApisTest { @@ -58,7 +59,8 @@ forbiddenApisTest { bundledSignatures -= 'jdk-non-portable' bundledSignatures += 'jdk-internal' //client does not depend on core, so only jdk signatures should be checked - signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt')] + signaturesURLs = [PrecommitTasks.getResource('/forbidden/jdk-signatures.txt'), + PrecommitTasks.getResource('/forbidden/http-signatures.txt')] } dependencyLicenses { diff --git a/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java b/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java index a5e5b39bed567..1af9e0dcf0fa4 100644 --- 
a/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java +++ b/client/rest/src/main/java/org/elasticsearch/client/HttpAsyncResponseConsumerFactory.java @@ -29,7 +29,7 @@ * consumer object. Users can implement this interface and pass their own instance to the specialized * performRequest methods that accept an {@link HttpAsyncResponseConsumerFactory} instance as argument. */ -interface HttpAsyncResponseConsumerFactory { +public interface HttpAsyncResponseConsumerFactory { /** * Creates the default type of {@link HttpAsyncResponseConsumer}, based on heap buffering with a buffer limit of 100MB. diff --git a/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java b/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java index ad2348762dd07..07ff89b7e3fb0 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RequestLogger.java @@ -139,11 +139,12 @@ static String buildTraceRequest(HttpUriRequest request, HttpHost host) throws IO * Creates curl output for given response */ static String buildTraceResponse(HttpResponse httpResponse) throws IOException { - String responseLine = "# " + httpResponse.getStatusLine().toString(); + StringBuilder responseLine = new StringBuilder(); + responseLine.append("# ").append(httpResponse.getStatusLine()); for (Header header : httpResponse.getAllHeaders()) { - responseLine += "\n# " + header.getName() + ": " + header.getValue(); + responseLine.append("\n# ").append(header.getName()).append(": ").append(header.getValue()); } - responseLine += "\n#"; + responseLine.append("\n#"); HttpEntity entity = httpResponse.getEntity(); if (entity != null) { if (entity.isRepeatable() == false) { @@ -158,11 +159,11 @@ static String buildTraceResponse(HttpResponse httpResponse) throws IOException { try (BufferedReader reader = new BufferedReader(new InputStreamReader(entity.getContent(), charset))) { String line; while( (line = reader.readLine()) != null) { - responseLine += "\n# " + line; + responseLine.append("\n# ").append(line); } } } - return responseLine; + return responseLine.toString(); } private static String getUri(RequestLine requestLine) { diff --git a/client/rest/src/main/java/org/elasticsearch/client/RestClient.java b/client/rest/src/main/java/org/elasticsearch/client/RestClient.java index 89c3309dbbdd1..ba3a07454ee48 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RestClient.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RestClient.java @@ -25,6 +25,7 @@ import org.apache.http.HttpHost; import org.apache.http.HttpRequest; import org.apache.http.HttpResponse; +import org.apache.http.client.AuthCache; import org.apache.http.client.ClientProtocolException; import org.apache.http.client.methods.HttpEntityEnclosingRequestBase; import org.apache.http.client.methods.HttpHead; @@ -34,8 +35,11 @@ import org.apache.http.client.methods.HttpPut; import org.apache.http.client.methods.HttpRequestBase; import org.apache.http.client.methods.HttpTrace; +import org.apache.http.client.protocol.HttpClientContext; import org.apache.http.client.utils.URIBuilder; import org.apache.http.concurrent.FutureCallback; +import org.apache.http.impl.auth.BasicScheme; +import org.apache.http.impl.client.BasicAuthCache; import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; import org.apache.http.nio.client.methods.HttpAsyncMethods; import org.apache.http.nio.protocol.HttpAsyncRequestProducer; @@ -49,6 +53,7 @@ 
import java.util.Collection; import java.util.Collections; import java.util.Comparator; +import java.util.HashMap; import java.util.HashSet; import java.util.Iterator; import java.util.List; @@ -91,7 +96,7 @@ public class RestClient implements Closeable { private final long maxRetryTimeoutMillis; private final String pathPrefix; private final AtomicInteger lastHostIndex = new AtomicInteger(0); - private volatile Set hosts; + private volatile HostTuple> hostTuple; private final ConcurrentMap blacklist = new ConcurrentHashMap<>(); private final FailureListener failureListener; @@ -121,11 +126,13 @@ public synchronized void setHosts(HttpHost... hosts) { throw new IllegalArgumentException("hosts must not be null nor empty"); } Set httpHosts = new HashSet<>(); + AuthCache authCache = new BasicAuthCache(); for (HttpHost host : hosts) { Objects.requireNonNull(host, "host cannot be null"); httpHosts.add(host); + authCache.put(host, new BasicScheme()); } - this.hosts = Collections.unmodifiableSet(httpHosts); + this.hostTuple = new HostTuple<>(Collections.unmodifiableSet(httpHosts), authCache); this.blacklist.clear(); } @@ -282,29 +289,65 @@ public void performRequestAsync(String method, String endpoint, Map params, HttpEntity entity, HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory, ResponseListener responseListener, Header... headers) { - URI uri = buildUri(pathPrefix, endpoint, params); - HttpRequestBase request = createHttpRequest(method, uri, entity); - setHeaders(request, headers); - FailureTrackingResponseListener failureTrackingResponseListener = new FailureTrackingResponseListener(responseListener); - long startTime = System.nanoTime(); - performRequestAsync(startTime, nextHost().iterator(), request, httpAsyncResponseConsumerFactory, failureTrackingResponseListener); + try { + Objects.requireNonNull(params, "params must not be null"); + Map requestParams = new HashMap<>(params); + //ignore is a special parameter supported by the clients, shouldn't be sent to es + String ignoreString = requestParams.remove("ignore"); + Set ignoreErrorCodes; + if (ignoreString == null) { + if (HttpHead.METHOD_NAME.equals(method)) { + //404 never causes error if returned for a HEAD request + ignoreErrorCodes = Collections.singleton(404); + } else { + ignoreErrorCodes = Collections.emptySet(); + } + } else { + String[] ignoresArray = ignoreString.split(","); + ignoreErrorCodes = new HashSet<>(); + if (HttpHead.METHOD_NAME.equals(method)) { + //404 never causes error if returned for a HEAD request + ignoreErrorCodes.add(404); + } + for (String ignoreCode : ignoresArray) { + try { + ignoreErrorCodes.add(Integer.valueOf(ignoreCode)); + } catch (NumberFormatException e) { + throw new IllegalArgumentException("ignore value should be a number, found [" + ignoreString + "] instead", e); + } + } + } + URI uri = buildUri(pathPrefix, endpoint, requestParams); + HttpRequestBase request = createHttpRequest(method, uri, entity); + setHeaders(request, headers); + FailureTrackingResponseListener failureTrackingResponseListener = new FailureTrackingResponseListener(responseListener); + long startTime = System.nanoTime(); + performRequestAsync(startTime, nextHost(), request, ignoreErrorCodes, httpAsyncResponseConsumerFactory, + failureTrackingResponseListener); + } catch (Exception e) { + responseListener.onFailure(e); + } } - private void performRequestAsync(final long startTime, final Iterator hosts, final HttpRequestBase request, + private void performRequestAsync(final long startTime, final HostTuple> 
hostTuple, final HttpRequestBase request, + final Set ignoreErrorCodes, final HttpAsyncResponseConsumerFactory httpAsyncResponseConsumerFactory, final FailureTrackingResponseListener listener) { - final HttpHost host = hosts.next(); + final HttpHost host = hostTuple.hosts.next(); //we stream the request body if the entity allows for it - HttpAsyncRequestProducer requestProducer = HttpAsyncMethods.create(host, request); - HttpAsyncResponseConsumer asyncResponseConsumer = httpAsyncResponseConsumerFactory.createHttpAsyncResponseConsumer(); - client.execute(requestProducer, asyncResponseConsumer, new FutureCallback() { + final HttpAsyncRequestProducer requestProducer = HttpAsyncMethods.create(host, request); + final HttpAsyncResponseConsumer asyncResponseConsumer = + httpAsyncResponseConsumerFactory.createHttpAsyncResponseConsumer(); + final HttpClientContext context = HttpClientContext.create(); + context.setAuthCache(hostTuple.authCache); + client.execute(requestProducer, asyncResponseConsumer, context, new FutureCallback() { @Override public void completed(HttpResponse httpResponse) { try { RequestLogger.logResponse(logger, request, host, httpResponse); int statusCode = httpResponse.getStatusLine().getStatusCode(); Response response = new Response(request.getRequestLine(), host, httpResponse); - if (isSuccessfulResponse(request.getMethod(), statusCode)) { + if (isSuccessfulResponse(statusCode) || ignoreErrorCodes.contains(response.getStatusLine().getStatusCode())) { onResponse(host); listener.onSuccess(response); } else { @@ -312,7 +355,7 @@ public void completed(HttpResponse httpResponse) { if (isRetryStatus(statusCode)) { //mark host dead and retry against next one onFailure(host); - retryIfPossible(responseException, hosts, request); + retryIfPossible(responseException); } else { //mark host alive and don't retry, as the error should be a request problem onResponse(host); @@ -329,14 +372,14 @@ public void failed(Exception failure) { try { RequestLogger.logFailedRequest(logger, request, host, failure); onFailure(host); - retryIfPossible(failure, hosts, request); + retryIfPossible(failure); } catch(Exception e) { listener.onDefinitiveFailure(e); } } - private void retryIfPossible(Exception exception, Iterator hosts, HttpRequestBase request) { - if (hosts.hasNext()) { + private void retryIfPossible(Exception exception) { + if (hostTuple.hosts.hasNext()) { //in case we are retrying, check whether maxRetryTimeout has been reached long timeElapsedMillis = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTime); long timeout = maxRetryTimeoutMillis - timeElapsedMillis; @@ -347,7 +390,7 @@ private void retryIfPossible(Exception exception, Iterator hosts, Http } else { listener.trackFailure(exception); request.reset(); - performRequestAsync(startTime, hosts, request, httpAsyncResponseConsumerFactory, listener); + performRequestAsync(startTime, hostTuple, request, ignoreErrorCodes, httpAsyncResponseConsumerFactory, listener); } } else { listener.onDefinitiveFailure(exception); @@ -385,17 +428,18 @@ private void setHeaders(HttpRequest httpRequest, Header[] requestHeaders) { * The iterator returned will never be empty. In case there are no healthy hosts available, or dead ones to be be retried, * one dead host gets returned so that it can be retried. 
*/ - private Iterable nextHost() { + private HostTuple> nextHost() { + final HostTuple> hostTuple = this.hostTuple; Collection nextHosts = Collections.emptySet(); do { - Set filteredHosts = new HashSet<>(hosts); + Set filteredHosts = new HashSet<>(hostTuple.hosts); for (Map.Entry entry : blacklist.entrySet()) { if (System.nanoTime() - entry.getValue().getDeadUntilNanos() < 0) { filteredHosts.remove(entry.getKey()); } } if (filteredHosts.isEmpty()) { - //last resort: if there are no good hosts to use, return a single dead one, the one that's closest to being retried + //last resort: if there are no good host to use, return a single dead one, the one that's closest to being retried List> sortedHosts = new ArrayList<>(blacklist.entrySet()); if (sortedHosts.size() > 0) { Collections.sort(sortedHosts, new Comparator>() { @@ -414,7 +458,7 @@ public int compare(Map.Entry o1, Map.Entry(nextHosts.iterator(), hostTuple.authCache); } /** @@ -452,8 +496,8 @@ public void close() throws IOException { client.close(); } - private static boolean isSuccessfulResponse(String method, int statusCode) { - return statusCode < 300 || (HttpHead.METHOD_NAME.equals(method) && statusCode == 404); + private static boolean isSuccessfulResponse(int statusCode) { + return statusCode < 300; } private static boolean isRetryStatus(int statusCode) { @@ -510,7 +554,6 @@ private static HttpRequestBase addRequestBody(HttpRequestBase httpRequest, HttpE } private static URI buildUri(String pathPrefix, String path, Map params) { - Objects.requireNonNull(params, "params must not be null"); Objects.requireNonNull(path, "path must not be null"); try { String fullPath; @@ -657,4 +700,18 @@ public void onFailure(HttpHost host) { } } + + /** + * {@code HostTuple} enables the {@linkplain HttpHost}s and {@linkplain AuthCache} to be set together in a thread + * safe, volatile way. 
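For readers of this RestClient hunk, a brief illustrative aside: the new `ignore` request parameter (consumed by the client, never sent to Elasticsearch) takes a comma-separated list of status codes that should come back as regular responses instead of being raised as `ResponseException`, and HEAD requests keep tolerating 404 by default. A minimal caller-side sketch, not part of the patch, with a made-up host and endpoint:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

import java.util.Collections;

// Illustrative only: host, port and endpoint are made up, not taken from the patch.
public class IgnoreParameterExample {
    public static void main(String[] args) throws Exception {
        try (RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build()) {
            // Without the "ignore" parameter a 404 would surface as a ResponseException;
            // with it, the 404 response is handed back to the caller like any other response.
            Response response = restClient.performRequest(
                    "GET", "/index-that-may-not-exist",
                    Collections.singletonMap("ignore", "404"));
            System.out.println(response.getStatusLine());
        }
    }
}
```

Several codes can be combined, for example `"404,409"`, which matches how the updated `RestClientSingleHostTests` further down exercises the feature.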
+ */ + private static class HostTuple { + public final T hosts; + public final AuthCache authCache; + + HostTuple(final T hosts, final AuthCache authCache) { + this.hosts = hosts; + this.authCache = authCache; + } + } } diff --git a/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java b/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java index d881bd70a44d0..4466a61d9df6d 100644 --- a/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java +++ b/client/rest/src/main/java/org/elasticsearch/client/RestClientBuilder.java @@ -28,6 +28,8 @@ import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; import org.apache.http.nio.conn.SchemeIOSessionStrategy; +import java.security.AccessController; +import java.security.PrivilegedAction; import java.util.Objects; /** @@ -177,7 +179,12 @@ public RestClient build() { if (failureListener == null) { failureListener = new RestClient.FailureListener(); } - CloseableHttpAsyncClient httpClient = createHttpClient(); + CloseableHttpAsyncClient httpClient = AccessController.doPrivileged(new PrivilegedAction() { + @Override + public CloseableHttpAsyncClient run() { + return createHttpClient(); + } + }); RestClient restClient = new RestClient(httpClient, maxRetryTimeout, defaultHeaders, hosts, pathPrefix, failureListener); httpClient.start(); return restClient; diff --git a/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java b/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java index 2488ea4b4355a..fe82d5367e51a 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/HeapBufferedAsyncResponseConsumerTests.java @@ -20,20 +20,28 @@ package org.elasticsearch.client; import org.apache.http.ContentTooLongException; +import org.apache.http.HttpEntity; import org.apache.http.HttpResponse; import org.apache.http.ProtocolVersion; import org.apache.http.StatusLine; -import org.apache.http.entity.BasicHttpEntity; import org.apache.http.entity.ContentType; import org.apache.http.entity.StringEntity; import org.apache.http.message.BasicHttpResponse; import org.apache.http.message.BasicStatusLine; import org.apache.http.nio.ContentDecoder; import org.apache.http.nio.IOControl; +import org.apache.http.nio.protocol.HttpAsyncResponseConsumer; import org.apache.http.protocol.HttpContext; +import java.lang.reflect.Constructor; +import java.lang.reflect.InvocationTargetException; +import java.lang.reflect.Modifier; +import java.util.concurrent.atomic.AtomicReference; + +import static org.hamcrest.CoreMatchers.instanceOf; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertSame; +import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; import static org.mockito.Mockito.mock; import static org.mockito.Mockito.spy; @@ -56,7 +64,7 @@ public void testResponseProcessing() throws Exception { ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1); StatusLine statusLine = new BasicStatusLine(protocolVersion, 200, "OK"); HttpResponse httpResponse = new BasicHttpResponse(statusLine); - httpResponse.setEntity(new StringEntity("test")); + httpResponse.setEntity(new StringEntity("test", ContentType.TEXT_PLAIN)); //everything goes well consumer.responseReceived(httpResponse); @@ -94,16 +102,42 @@ public void testConfiguredBufferLimit() throws Exception { 
bufferLimitTest(consumer, bufferLimit); } + public void testCanConfigureHeapBufferLimitFromOutsidePackage() throws ClassNotFoundException, NoSuchMethodException, + IllegalAccessException, InvocationTargetException, InstantiationException { + int bufferLimit = randomIntBetween(1, Integer.MAX_VALUE); + //we use reflection to make sure that the class can be instantiated from the outside, and the constructor is public + Constructor constructor = HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory.class.getConstructor(Integer.TYPE); + assertEquals(Modifier.PUBLIC, constructor.getModifiers() & Modifier.PUBLIC); + Object object = constructor.newInstance(bufferLimit); + assertThat(object, instanceOf(HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory.class)); + HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory consumerFactory = + (HttpAsyncResponseConsumerFactory.HeapBufferedResponseConsumerFactory) object; + HttpAsyncResponseConsumer consumer = consumerFactory.createHttpAsyncResponseConsumer(); + assertThat(consumer, instanceOf(HeapBufferedAsyncResponseConsumer.class)); + HeapBufferedAsyncResponseConsumer bufferedAsyncResponseConsumer = (HeapBufferedAsyncResponseConsumer) consumer; + assertEquals(bufferLimit, bufferedAsyncResponseConsumer.getBufferLimit()); + } + + public void testHttpAsyncResponseConsumerFactoryVisibility() throws ClassNotFoundException { + assertEquals(Modifier.PUBLIC, HttpAsyncResponseConsumerFactory.class.getModifiers() & Modifier.PUBLIC); + } + private static void bufferLimitTest(HeapBufferedAsyncResponseConsumer consumer, int bufferLimit) throws Exception { ProtocolVersion protocolVersion = new ProtocolVersion("HTTP", 1, 1); StatusLine statusLine = new BasicStatusLine(protocolVersion, 200, "OK"); consumer.onResponseReceived(new BasicHttpResponse(statusLine)); - BasicHttpEntity entity = new BasicHttpEntity(); - entity.setContentLength(randomInt(bufferLimit)); + final AtomicReference contentLength = new AtomicReference<>(); + HttpEntity entity = new StringEntity("", ContentType.APPLICATION_JSON) { + @Override + public long getContentLength() { + return contentLength.get(); + } + }; + contentLength.set(randomLong(bufferLimit)); consumer.onEntityEnclosed(entity, ContentType.APPLICATION_JSON); - entity.setContentLength(randomIntBetween(bufferLimit + 1, MAX_TEST_BUFFER_SIZE)); + contentLength.set(randomLongBetween(bufferLimit + 1, MAX_TEST_BUFFER_SIZE)); try { consumer.onEntityEnclosed(entity, ContentType.APPLICATION_JSON); } catch(ContentTooLongException e) { diff --git a/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java b/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java index 68717dfe223cd..637e1807d2536 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RequestLoggerTests.java @@ -31,6 +31,7 @@ import org.apache.http.client.methods.HttpPut; import org.apache.http.client.methods.HttpTrace; import org.apache.http.client.methods.HttpUriRequest; +import org.apache.http.entity.ContentType; import org.apache.http.entity.InputStreamEntity; import org.apache.http.entity.StringEntity; import org.apache.http.message.BasicHeader; @@ -71,20 +72,21 @@ public void testTraceRequest() throws IOException, URISyntaxException { HttpEntity entity; switch(randomIntBetween(0, 4)) { case 0: - entity = new StringEntity(requestBody, StandardCharsets.UTF_8); + entity = new StringEntity(requestBody, 
ContentType.APPLICATION_JSON); break; case 1: - entity = new InputStreamEntity(new ByteArrayInputStream(requestBody.getBytes(StandardCharsets.UTF_8))); + entity = new InputStreamEntity(new ByteArrayInputStream(requestBody.getBytes(StandardCharsets.UTF_8)), + ContentType.APPLICATION_JSON); break; case 2: - entity = new NStringEntity(requestBody, StandardCharsets.UTF_8); + entity = new NStringEntity(requestBody, ContentType.APPLICATION_JSON); break; case 3: - entity = new NByteArrayEntity(requestBody.getBytes(StandardCharsets.UTF_8)); + entity = new NByteArrayEntity(requestBody.getBytes(StandardCharsets.UTF_8), ContentType.APPLICATION_JSON); break; case 4: // Evil entity without a charset - entity = new StringEntity(requestBody, (Charset) null); + entity = new StringEntity(requestBody, ContentType.create("application/json", (Charset) null)); break; default: throw new UnsupportedOperationException(); @@ -122,15 +124,16 @@ public void testTraceResponse() throws IOException { HttpEntity entity; switch(randomIntBetween(0, 2)) { case 0: - entity = new StringEntity(responseBody, StandardCharsets.UTF_8); + entity = new StringEntity(responseBody, ContentType.APPLICATION_JSON); break; case 1: //test a non repeatable entity - entity = new InputStreamEntity(new ByteArrayInputStream(responseBody.getBytes(StandardCharsets.UTF_8))); + entity = new InputStreamEntity(new ByteArrayInputStream(responseBody.getBytes(StandardCharsets.UTF_8)), + ContentType.APPLICATION_JSON); break; case 2: // Evil entity without a charset - entity = new StringEntity(responseBody, (Charset) null); + entity = new StringEntity(responseBody, ContentType.create("application/json", (Charset) null)); break; default: throw new UnsupportedOperationException(); diff --git a/client/rest/src/test/java/org/elasticsearch/client/ResponseExceptionTests.java b/client/rest/src/test/java/org/elasticsearch/client/ResponseExceptionTests.java index 9185222f5104d..1638693a44f5e 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/ResponseExceptionTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/ResponseExceptionTests.java @@ -25,6 +25,7 @@ import org.apache.http.ProtocolVersion; import org.apache.http.RequestLine; import org.apache.http.StatusLine; +import org.apache.http.entity.ContentType; import org.apache.http.entity.InputStreamEntity; import org.apache.http.entity.StringEntity; import org.apache.http.message.BasicHttpResponse; @@ -52,10 +53,11 @@ public void testResponseException() throws IOException { if (hasBody) { HttpEntity entity; if (getRandom().nextBoolean()) { - entity = new StringEntity(responseBody, StandardCharsets.UTF_8); + entity = new StringEntity(responseBody, ContentType.APPLICATION_JSON); } else { //test a non repeatable entity - entity = new InputStreamEntity(new ByteArrayInputStream(responseBody.getBytes(StandardCharsets.UTF_8))); + entity = new InputStreamEntity(new ByteArrayInputStream(responseBody.getBytes(StandardCharsets.UTF_8)), + ContentType.APPLICATION_JSON); } httpResponse.setEntity(entity); } diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java index 90ee44310090e..6f87a244ff59f 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientMultipleHostsTests.java @@ -26,8 +26,10 @@ import org.apache.http.ProtocolVersion; import org.apache.http.StatusLine; 
import org.apache.http.client.methods.HttpUriRequest; +import org.apache.http.client.protocol.HttpClientContext; import org.apache.http.concurrent.FutureCallback; import org.apache.http.conn.ConnectTimeoutException; +import org.apache.http.impl.auth.BasicScheme; import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; import org.apache.http.message.BasicHttpResponse; import org.apache.http.message.BasicStatusLine; @@ -73,13 +75,15 @@ public class RestClientMultipleHostsTests extends RestClientTestCase { public void createRestClient() throws IOException { CloseableHttpAsyncClient httpClient = mock(CloseableHttpAsyncClient.class); when(httpClient.execute(any(HttpAsyncRequestProducer.class), any(HttpAsyncResponseConsumer.class), - any(FutureCallback.class))).thenAnswer(new Answer>() { + any(HttpClientContext.class), any(FutureCallback.class))).thenAnswer(new Answer>() { @Override public Future answer(InvocationOnMock invocationOnMock) throws Throwable { HttpAsyncRequestProducer requestProducer = (HttpAsyncRequestProducer) invocationOnMock.getArguments()[0]; HttpUriRequest request = (HttpUriRequest)requestProducer.generateRequest(); HttpHost httpHost = requestProducer.getTarget(); - FutureCallback futureCallback = (FutureCallback) invocationOnMock.getArguments()[2]; + HttpClientContext context = (HttpClientContext) invocationOnMock.getArguments()[2]; + assertThat(context.getAuthCache().get(httpHost), instanceOf(BasicScheme.class)); + FutureCallback futureCallback = (FutureCallback) invocationOnMock.getArguments()[3]; //return the desired status code or exception depending on the path if (request.getURI().getPath().equals("/soe")) { futureCallback.failed(new SocketTimeoutException(httpHost.toString())); diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java index 8f4170add3d61..6d4e3ba4bc861 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostIntegTests.java @@ -26,7 +26,12 @@ import org.apache.http.Consts; import org.apache.http.Header; import org.apache.http.HttpHost; +import org.apache.http.auth.AuthScope; +import org.apache.http.auth.UsernamePasswordCredentials; +import org.apache.http.entity.ContentType; import org.apache.http.entity.StringEntity; +import org.apache.http.impl.client.BasicCredentialsProvider; +import org.apache.http.impl.nio.client.HttpAsyncClientBuilder; import org.apache.http.util.EntityUtils; import org.codehaus.mojo.animal_sniffer.IgnoreJRERequirement; import org.elasticsearch.mocksocket.MockHttpServer; @@ -48,7 +53,10 @@ import static org.elasticsearch.client.RestClientTestUtil.getAllStatusCodes; import static org.elasticsearch.client.RestClientTestUtil.getHttpMethods; import static org.elasticsearch.client.RestClientTestUtil.randomStatusCode; +import static org.hamcrest.Matchers.nullValue; +import static org.hamcrest.Matchers.startsWith; import static org.junit.Assert.assertEquals; +import static org.junit.Assert.assertThat; import static org.junit.Assert.assertTrue; /** @@ -66,22 +74,10 @@ public class RestClientSingleHostIntegTests extends RestClientTestCase { @BeforeClass public static void startHttpServer() throws Exception { - String pathPrefixWithoutLeadingSlash; - if (randomBoolean()) { - pathPrefixWithoutLeadingSlash = "testPathPrefix/" + randomAsciiOfLengthBetween(1, 5); - pathPrefix = "/" + 
pathPrefixWithoutLeadingSlash; - } else { - pathPrefix = pathPrefixWithoutLeadingSlash = ""; - } - + pathPrefix = randomBoolean() ? "/testPathPrefix/" + randomAsciiOfLengthBetween(1, 5) : ""; httpServer = createHttpServer(); defaultHeaders = RestClientTestUtil.randomHeaders(getRandom(), "Header-default"); - RestClientBuilder restClientBuilder = RestClient.builder( - new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort())).setDefaultHeaders(defaultHeaders); - if (pathPrefix.length() > 0) { - restClientBuilder.setPathPrefix((randomBoolean() ? "/" : "") + pathPrefixWithoutLeadingSlash); - } - restClient = restClientBuilder.build(); + restClient = createRestClient(false, true); } private static HttpServer createHttpServer() throws Exception { @@ -129,6 +125,35 @@ public void handle(HttpExchange httpExchange) throws IOException { } } + private static RestClient createRestClient(final boolean useAuth, final boolean usePreemptiveAuth) { + // provide the username/password for every request + final BasicCredentialsProvider credentialsProvider = new BasicCredentialsProvider(); + credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("user", "pass")); + + final RestClientBuilder restClientBuilder = RestClient.builder( + new HttpHost(httpServer.getAddress().getHostString(), httpServer.getAddress().getPort())).setDefaultHeaders(defaultHeaders); + if (pathPrefix.length() > 0) { + // sometimes cut off the leading slash + restClientBuilder.setPathPrefix(randomBoolean() ? pathPrefix.substring(1) : pathPrefix); + } + + if (useAuth) { + restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() { + @Override + public HttpAsyncClientBuilder customizeHttpClient(final HttpAsyncClientBuilder httpClientBuilder) { + if (usePreemptiveAuth == false) { + // disable preemptive auth by ignoring any authcache + httpClientBuilder.disableAuthCaching(); + } + + return httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider); + } + }); + } + + return restClientBuilder.build(); + } + @AfterClass public static void stopHttpServers() throws IOException { restClient.close(); @@ -159,7 +184,7 @@ public void testHeaders() throws IOException { assertEquals(method, esResponse.getRequestLine().getMethod()); assertEquals(statusCode, esResponse.getStatusLine().getStatusCode()); - assertEquals((pathPrefix.length() > 0 ? pathPrefix : "") + "/" + statusCode, esResponse.getRequestLine().getUri()); + assertEquals(pathPrefix + "/" + statusCode, esResponse.getRequestLine().getUri()); assertHeaders(defaultHeaders, requestHeaders, esResponse.getHeaders(), standardHeaders); for (final Header responseHeader : esResponse.getHeaders()) { String name = responseHeader.getName(); @@ -189,9 +214,43 @@ public void testGetWithBody() throws IOException { bodyTest("GET"); } - private void bodyTest(String method) throws IOException { + /** + * Verify that credentials are sent on the first request with preemptive auth enabled (default when provided with credentials). + */ + public void testPreemptiveAuthEnabled() throws IOException { + final String[] methods = { "POST", "PUT", "GET", "DELETE" }; + + try (RestClient restClient = createRestClient(true, true)) { + for (final String method : methods) { + final Response response = bodyTest(restClient, method); + + assertThat(response.getHeader("Authorization"), startsWith("Basic")); + } + } + } + + /** + * Verify that credentials are not sent on the first request with preemptive auth disabled. 
+ */ + public void testPreemptiveAuthDisabled() throws IOException { + final String[] methods = { "POST", "PUT", "GET", "DELETE" }; + + try (RestClient restClient = createRestClient(true, false)) { + for (final String method : methods) { + final Response response = bodyTest(restClient, method); + + assertThat(response.getHeader("Authorization"), nullValue()); + } + } + } + + private Response bodyTest(final String method) throws IOException { + return bodyTest(restClient, method); + } + + private Response bodyTest(final RestClient restClient, final String method) throws IOException { String requestBody = "{ \"field\": \"value\" }"; - StringEntity entity = new StringEntity(requestBody); + StringEntity entity = new StringEntity(requestBody, ContentType.APPLICATION_JSON); int statusCode = randomStatusCode(getRandom()); Response esResponse; try { @@ -201,7 +260,9 @@ private void bodyTest(String method) throws IOException { } assertEquals(method, esResponse.getRequestLine().getMethod()); assertEquals(statusCode, esResponse.getStatusLine().getStatusCode()); - assertEquals((pathPrefix.length() > 0 ? pathPrefix : "") + "/" + statusCode, esResponse.getRequestLine().getUri()); + assertEquals(pathPrefix + "/" + statusCode, esResponse.getRequestLine().getUri()); assertEquals(requestBody, EntityUtils.toString(esResponse.getEntity())); + + return esResponse; } } diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java index 865f9b1817aff..541193c733d56 100644 --- a/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientSingleHostTests.java @@ -34,10 +34,13 @@ import org.apache.http.client.methods.HttpPut; import org.apache.http.client.methods.HttpTrace; import org.apache.http.client.methods.HttpUriRequest; +import org.apache.http.client.protocol.HttpClientContext; import org.apache.http.client.utils.URIBuilder; import org.apache.http.concurrent.FutureCallback; import org.apache.http.conn.ConnectTimeoutException; +import org.apache.http.entity.ContentType; import org.apache.http.entity.StringEntity; +import org.apache.http.impl.auth.BasicScheme; import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; import org.apache.http.message.BasicHttpResponse; import org.apache.http.message.BasicStatusLine; @@ -96,11 +99,13 @@ public class RestClientSingleHostTests extends RestClientTestCase { public void createRestClient() throws IOException { httpClient = mock(CloseableHttpAsyncClient.class); when(httpClient.execute(any(HttpAsyncRequestProducer.class), any(HttpAsyncResponseConsumer.class), - any(FutureCallback.class))).thenAnswer(new Answer>() { + any(HttpClientContext.class), any(FutureCallback.class))).thenAnswer(new Answer>() { @Override public Future answer(InvocationOnMock invocationOnMock) throws Throwable { HttpAsyncRequestProducer requestProducer = (HttpAsyncRequestProducer) invocationOnMock.getArguments()[0]; - FutureCallback futureCallback = (FutureCallback) invocationOnMock.getArguments()[2]; + HttpClientContext context = (HttpClientContext) invocationOnMock.getArguments()[2]; + assertThat(context.getAuthCache().get(httpHost), instanceOf(BasicScheme.class)); + FutureCallback futureCallback = (FutureCallback) invocationOnMock.getArguments()[3]; HttpUriRequest request = (HttpUriRequest)requestProducer.generateRequest(); //return the desired status code or exception depending on the 
path if (request.getURI().getPath().equals("/soe")) { @@ -156,7 +161,7 @@ public void testInternalHttpRequest() throws Exception { for (String httpMethod : getHttpMethods()) { HttpUriRequest expectedRequest = performRandomRequest(httpMethod); verify(httpClient, times(++times)).execute(requestArgumentCaptor.capture(), - any(HttpAsyncResponseConsumer.class), any(FutureCallback.class)); + any(HttpAsyncResponseConsumer.class), any(HttpClientContext.class), any(FutureCallback.class)); HttpUriRequest actualRequest = (HttpUriRequest)requestArgumentCaptor.getValue().generateRequest(); assertEquals(expectedRequest.getURI(), actualRequest.getURI()); assertEquals(expectedRequest.getClass(), actualRequest.getClass()); @@ -216,23 +221,45 @@ public void testOkStatusCodes() throws IOException { */ public void testErrorStatusCodes() throws IOException { for (String method : getHttpMethods()) { + Set expectedIgnores = new HashSet<>(); + String ignoreParam = ""; + if (HttpHead.METHOD_NAME.equals(method)) { + expectedIgnores.add(404); + } + if (randomBoolean()) { + int numIgnores = randomIntBetween(1, 3); + for (int i = 0; i < numIgnores; i++) { + Integer code = randomFrom(getAllErrorStatusCodes()); + expectedIgnores.add(code); + ignoreParam += code; + if (i < numIgnores - 1) { + ignoreParam += ","; + } + } + } //error status codes should cause an exception to be thrown for (int errorStatusCode : getAllErrorStatusCodes()) { try { - Response response = performRequest(method, "/" + errorStatusCode); - if (method.equals("HEAD") && errorStatusCode == 404) { - //no exception gets thrown although we got a 404 - assertThat(response.getStatusLine().getStatusCode(), equalTo(errorStatusCode)); + Map params; + if (ignoreParam.isEmpty()) { + params = Collections.emptyMap(); + } else { + params = Collections.singletonMap("ignore", ignoreParam); + } + Response response = performRequest(method, "/" + errorStatusCode, params); + if (expectedIgnores.contains(errorStatusCode)) { + //no exception gets thrown although we got an error status code, as it was configured to be ignored + assertEquals(errorStatusCode, response.getStatusLine().getStatusCode()); } else { fail("request should have failed"); } } catch(ResponseException e) { - if (method.equals("HEAD") && errorStatusCode == 404) { + if (expectedIgnores.contains(errorStatusCode)) { throw e; } - assertThat(e.getResponse().getStatusLine().getStatusCode(), equalTo(errorStatusCode)); + assertEquals(errorStatusCode, e.getResponse().getStatusLine().getStatusCode()); } - if (errorStatusCode <= 500) { + if (errorStatusCode <= 500 || expectedIgnores.contains(errorStatusCode)) { failureListener.assertNotCalled(); } else { failureListener.assertCalled(httpHost); @@ -267,7 +294,7 @@ public void testIOExceptions() throws IOException { */ public void testBody() throws IOException { String body = "{ \"field\": \"value\" }"; - StringEntity entity = new StringEntity(body); + StringEntity entity = new StringEntity(body, ContentType.APPLICATION_JSON); for (String method : Arrays.asList("DELETE", "GET", "PATCH", "POST", "PUT")) { for (int okStatusCode : getOkStatusCodes()) { Response response = restClient.performRequest(method, "/" + okStatusCode, Collections.emptyMap(), entity); @@ -351,11 +378,10 @@ public void testHeaders() throws IOException { private HttpUriRequest performRandomRequest(String method) throws Exception { String uriAsString = "/" + randomStatusCode(getRandom()); URIBuilder uriBuilder = new URIBuilder(uriAsString); - Map params = Collections.emptyMap(); + final Map params = 
new HashMap<>(); boolean hasParams = randomBoolean(); if (hasParams) { int numParams = randomIntBetween(1, 3); - params = new HashMap<>(numParams); for (int i = 0; i < numParams; i++) { String paramKey = "param-" + i; String paramValue = randomAsciiOfLengthBetween(3, 10); @@ -363,6 +389,14 @@ private HttpUriRequest performRandomRequest(String method) throws Exception { uriBuilder.addParameter(paramKey, paramValue); } } + if (randomBoolean()) { + //randomly add some ignore parameter, which doesn't get sent as part of the request + String ignore = Integer.toString(randomFrom(RestClientTestUtil.getAllErrorStatusCodes())); + if (randomBoolean()) { + ignore += "," + Integer.toString(randomFrom(RestClientTestUtil.getAllErrorStatusCodes())); + } + params.put("ignore", ignore); + } URI uri = uriBuilder.build(); HttpUriRequest request; @@ -398,7 +432,7 @@ private HttpUriRequest performRandomRequest(String method) throws Exception { HttpEntity entity = null; boolean hasBody = request instanceof HttpEntityEnclosingRequest && getRandom().nextBoolean(); if (hasBody) { - entity = new StringEntity(randomAsciiOfLengthBetween(10, 100)); + entity = new StringEntity(randomAsciiOfLengthBetween(10, 100), ContentType.APPLICATION_JSON); ((HttpEntityEnclosingRequest) request).setEntity(entity); } @@ -433,16 +467,25 @@ private HttpUriRequest performRandomRequest(String method) throws Exception { } private Response performRequest(String method, String endpoint, Header... headers) throws IOException { - switch(randomIntBetween(0, 2)) { + return performRequest(method, endpoint, Collections.emptyMap(), headers); + } + + private Response performRequest(String method, String endpoint, Map params, Header... headers) throws IOException { + int methodSelector; + if (params.isEmpty()) { + methodSelector = randomIntBetween(0, 2); + } else { + methodSelector = randomIntBetween(1, 2); + } + switch(methodSelector) { case 0: return restClient.performRequest(method, endpoint, headers); case 1: - return restClient.performRequest(method, endpoint, Collections.emptyMap(), headers); + return restClient.performRequest(method, endpoint, params, headers); case 2: - return restClient.performRequest(method, endpoint, Collections.emptyMap(), (HttpEntity)null, headers); + return restClient.performRequest(method, endpoint, params, (HttpEntity)null, headers); default: throw new UnsupportedOperationException(); } } - } diff --git a/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java b/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java new file mode 100644 index 0000000000000..d8c297ed099c3 --- /dev/null +++ b/client/rest/src/test/java/org/elasticsearch/client/RestClientTests.java @@ -0,0 +1,84 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.client; + +import org.apache.http.Header; +import org.apache.http.HttpHost; +import org.apache.http.impl.nio.client.CloseableHttpAsyncClient; + +import static org.junit.Assert.assertEquals; +import static org.junit.Assert.fail; +import static org.mockito.Mockito.mock; + +public class RestClientTests extends RestClientTestCase { + + public void testPerformAsyncWithUnsupportedMethod() throws Exception { + RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); + try (RestClient restClient = createRestClient()) { + restClient.performRequestAsync("unsupported", randomAsciiOfLength(5), listener); + listener.get(); + + fail("should have failed because of unsupported method"); + } catch (UnsupportedOperationException exception) { + assertEquals("http method not supported: unsupported", exception.getMessage()); + } + } + + public void testPerformAsyncWithNullParams() throws Exception { + RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); + try (RestClient restClient = createRestClient()) { + restClient.performRequestAsync(randomAsciiOfLength(5), randomAsciiOfLength(5), null, listener); + listener.get(); + + fail("should have failed because of null parameters"); + } catch (NullPointerException exception) { + assertEquals("params must not be null", exception.getMessage()); + } + } + + public void testPerformAsyncWithNullHeaders() throws Exception { + RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); + try (RestClient restClient = createRestClient()) { + restClient.performRequestAsync("GET", randomAsciiOfLength(5), listener, (Header) null); + listener.get(); + + fail("should have failed because of null headers"); + } catch (NullPointerException exception) { + assertEquals("request header must not be null", exception.getMessage()); + } + } + + public void testPerformAsyncWithWrongEndpoint() throws Exception { + RestClient.SyncResponseListener listener = new RestClient.SyncResponseListener(10000); + try (RestClient restClient = createRestClient()) { + restClient.performRequestAsync("GET", "::http:///", listener); + listener.get(); + + fail("should have failed because of wrong endpoint"); + } catch (IllegalArgumentException exception) { + assertEquals("Expected scheme name at index 0: ::http:///", exception.getMessage()); + } + } + + private static RestClient createRestClient() { + HttpHost[] hosts = new HttpHost[]{new HttpHost("localhost", 9200)}; + return new RestClient(mock(CloseableHttpAsyncClient.class), randomLongBetween(1_000, 30_000), new Header[]{}, hosts, null, null); + } +} diff --git a/client/sniffer/licenses/jackson-core-2.8.1.jar.sha1 b/client/sniffer/licenses/jackson-core-2.8.1.jar.sha1 deleted file mode 100644 index b92131d6fab45..0000000000000 --- a/client/sniffer/licenses/jackson-core-2.8.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -fd13b1c033741d48291315c6370f7d475a42dccf \ No newline at end of file diff --git a/client/sniffer/licenses/jackson-core-2.8.6.jar.sha1 b/client/sniffer/licenses/jackson-core-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..af7677d13c28c --- /dev/null +++ b/client/sniffer/licenses/jackson-core-2.8.6.jar.sha1 @@ -0,0 +1 @@ +2ef7b1cc34de149600f5e75bc2d5bf40de894e60 \ No newline at end of file diff --git a/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java b/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java index dbf85578b199b..a0a6641abbc5f 100644 --- 
a/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java +++ b/client/test/src/main/java/org/elasticsearch/client/RestClientTestUtil.java @@ -59,7 +59,7 @@ static String randomHttpMethod(Random random) { } static int randomStatusCode(Random random) { - return RandomPicks.randomFrom(random, ALL_ERROR_STATUS_CODES); + return RandomPicks.randomFrom(random, ALL_STATUS_CODES); } static int randomOkStatusCode(Random random) { diff --git a/client/transport/build.gradle b/client/transport/build.gradle index 77833c1f2672d..b2edc9c8fcd8f 100644 --- a/client/transport/build.gradle +++ b/client/transport/build.gradle @@ -31,6 +31,7 @@ dependencies { compile "org.elasticsearch.plugin:reindex-client:${version}" compile "org.elasticsearch.plugin:lang-mustache-client:${version}" compile "org.elasticsearch.plugin:percolator-client:${version}" + compile "org.elasticsearch.plugin:parent-join-client:${version}" testCompile "com.carrotsearch.randomizedtesting:randomizedtesting-runner:${versions.randomizedrunner}" testCompile "junit:junit:${versions.junit}" testCompile "org.hamcrest:hamcrest-all:${versions.hamcrest}" diff --git a/client/transport/src/main/java/org/elasticsearch/transport/client/PreBuiltTransportClient.java b/client/transport/src/main/java/org/elasticsearch/transport/client/PreBuiltTransportClient.java index 3233470a253a4..2c28253e3f1ad 100644 --- a/client/transport/src/main/java/org/elasticsearch/transport/client/PreBuiltTransportClient.java +++ b/client/transport/src/main/java/org/elasticsearch/transport/client/PreBuiltTransportClient.java @@ -26,6 +26,7 @@ import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.reindex.ReindexPlugin; +import org.elasticsearch.join.ParentJoinPlugin; import org.elasticsearch.percolator.PercolatorPlugin; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.script.mustache.MustachePlugin; @@ -41,7 +42,8 @@ * {@link Netty4Plugin}, * {@link ReindexPlugin}, * {@link PercolatorPlugin}, - * and {@link MustachePlugin} + * {@link MustachePlugin}, + * {@link ParentJoinPlugin} * plugins for the client. These plugins are all the required modules for Elasticsearch. */ @SuppressWarnings({"unchecked","varargs"}) @@ -53,27 +55,29 @@ public class PreBuiltTransportClient extends TransportClient { } /** - * Netty wants to do some unsafe things like use unsafe and replace a private field. This method disables these things by default, but - * can be overridden by setting the corresponding system properties. + * Netty wants to do some unwelcome things like use unsafe and replace a private field, or use a poorly considered buffer recycler. This + * method disables these things by default, but can be overridden by setting the corresponding system properties. 
*/ - @SuppressForbidden(reason = "set system properties to configure Netty") private static void initializeNetty() { - final String noUnsafeKey = "io.netty.noUnsafe"; - final String noUnsafe = System.getProperty(noUnsafeKey); - if (noUnsafe == null) { - // disable Netty from using unsafe - // while permissions are needed to set this, if a security exception is thrown the permission needed can either be granted or - // the system property can be set directly before starting the JVM; therefore, we do not catch a security exception here - System.setProperty(noUnsafeKey, Boolean.toString(true)); - } + /* + * We disable three pieces of Netty functionality here: + * - we disable Netty from being unsafe + * - we disable Netty from replacing the selector key set + * - we disable Netty from using the recycler + * + * While permissions are needed to read and set these, the permissions needed here are innocuous and thus should simply be granted + * rather than us handling a security exception here. + */ + setSystemPropertyIfUnset("io.netty.noUnsafe", Boolean.toString(true)); + setSystemPropertyIfUnset("io.netty.noKeySetOptimization", Boolean.toString(true)); + setSystemPropertyIfUnset("io.netty.recycler.maxCapacityPerThread", Integer.toString(0)); + } - final String noKeySetOptimizationKey = "io.netty.noKeySetOptimization"; - final String noKeySetOptimization = System.getProperty(noKeySetOptimizationKey); - if (noKeySetOptimization == null) { - // disable Netty from replacing the selector key set - // while permissions are needed to set this, if a security exception is thrown the permission needed can either be granted or - // the system property can be set directly before starting the JVM; therefore, we do not catch a security exception here - System.setProperty(noKeySetOptimizationKey, Boolean.toString(true)); + @SuppressForbidden(reason = "set system properties to configure Netty") + private static void setSystemPropertyIfUnset(final String key, final String value) { + final String currentValue = System.getProperty(key); + if (currentValue == null) { + System.setProperty(key, value); } } @@ -83,7 +87,8 @@ private static void initializeNetty() { Netty4Plugin.class, ReindexPlugin.class, PercolatorPlugin.class, - MustachePlugin.class)); + MustachePlugin.class, + ParentJoinPlugin.class)); /** * Creates a new transport client with pre-installed plugins. 
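A short usage sketch may help illustrate the transport-client change above: with `ParentJoinPlugin` added to the pre-installed plugins, the prebuilt client bundles parent-join support automatically, and naming the plugin again is rejected as a duplicate, which the updated `testInstallPluginTwice` below verifies. The sketch is illustrative only; the exact exception type caught here is an assumption.

```java
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.join.ParentJoinPlugin;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

// Illustrative only, not part of the patch.
public class PreBuiltClientExample {
    public static void main(String[] args) {
        // parent-join is now pre-installed, so nothing extra needs to be declared:
        try (PreBuiltTransportClient client = new PreBuiltTransportClient(Settings.EMPTY)) {
            // add transport addresses and use the client here
        }

        // naming a bundled plugin explicitly is treated as a duplicate and rejected
        // (exception type assumed; testInstallPluginTwice only asserts that construction fails)
        try (PreBuiltTransportClient client = new PreBuiltTransportClient(Settings.EMPTY, ParentJoinPlugin.class)) {
            // not reached
        } catch (RuntimeException expected) {
            // duplicate plugin
        }
    }
}
```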
diff --git a/client/transport/src/test/java/org/elasticsearch/transport/client/PreBuiltTransportClientTests.java b/client/transport/src/test/java/org/elasticsearch/transport/client/PreBuiltTransportClientTests.java index dd628fad31fa1..dbcf3571125de 100644 --- a/client/transport/src/test/java/org/elasticsearch/transport/client/PreBuiltTransportClientTests.java +++ b/client/transport/src/test/java/org/elasticsearch/transport/client/PreBuiltTransportClientTests.java @@ -25,6 +25,7 @@ import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.reindex.ReindexPlugin; +import org.elasticsearch.join.ParentJoinPlugin; import org.elasticsearch.percolator.PercolatorPlugin; import org.elasticsearch.plugins.Plugin; import org.elasticsearch.script.mustache.MustachePlugin; @@ -41,8 +42,6 @@ public class PreBuiltTransportClientTests extends RandomizedTest { @Test public void testPluginInstalled() { - // TODO: remove when Netty 4.1.6 is upgraded to Netty 4.1.7 including https://github.com/netty/netty/pull/6068 - assumeFalse(Constants.JRE_IS_MINIMUM_JAVA9); try (TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)) { Settings settings = client.settings(); assertEquals(Netty4Plugin.NETTY_TRANSPORT_NAME, NetworkModule.HTTP_DEFAULT_TYPE_SETTING.get(settings)); @@ -52,7 +51,8 @@ public void testPluginInstalled() { @Test public void testInstallPluginTwice() { - for (Class plugin : Arrays.asList(ReindexPlugin.class, PercolatorPlugin.class, MustachePlugin.class)) { + for (Class plugin : + Arrays.asList(ParentJoinPlugin.class, ReindexPlugin.class, PercolatorPlugin.class, MustachePlugin.class)) { try { new PreBuiltTransportClient(Settings.EMPTY, plugin); fail("exception expected"); diff --git a/core/build.gradle b/core/build.gradle index 7a5803355712f..2e2a7fc2fde57 100644 --- a/core/build.gradle +++ b/core/build.gradle @@ -74,19 +74,20 @@ dependencies { // percentiles aggregation compile 'com.tdunning:t-digest:3.0' // precentil ranks aggregation - compile 'org.hdrhistogram:HdrHistogram:2.1.6' + compile 'org.hdrhistogram:HdrHistogram:2.1.9' // lucene spatial compile "org.locationtech.spatial4j:spatial4j:${versions.spatial4j}", optional compile "com.vividsolutions:jts:${versions.jts}", optional // logging - compile "org.apache.logging.log4j:log4j-api:${versions.log4j}", optional + compile "org.apache.logging.log4j:log4j-api:${versions.log4j}" compile "org.apache.logging.log4j:log4j-core:${versions.log4j}", optional // to bridge dependencies that are still on Log4j 1 to Log4j 2 compile "org.apache.logging.log4j:log4j-1.2-api:${versions.log4j}", optional - compile "net.java.dev.jna:jna:${versions.jna}" + // repackaged jna with native bits linked against all elastic supported platforms + compile "org.elasticsearch:jna:${versions.jna}" if (isEclipse == false || project.path == ":core-tests") { testCompile("org.elasticsearch.test:framework:${version}") { @@ -94,6 +95,8 @@ dependencies { exclude group: 'org.elasticsearch', module: 'elasticsearch' } } + testCompile 'com.google.jimfs:jimfs:1.1' + testCompile 'com.google.guava:guava:18.0' } if (isEclipse) { @@ -115,14 +118,13 @@ compileTestJava.options.compilerArgs << "-Xlint:-cast,-deprecation,-rawtypes,-tr forbiddenPatterns { exclude '**/*.json' exclude '**/*.jmx' - exclude '**/org/elasticsearch/cluster/routing/shard_routes.txt' } task generateModulesList { List modules = project(':modules').subprojects.collect { it.name } File modulesFile = new File(buildDir, 
'generated-resources/modules.txt') - processResources.from(modulesFile) - inputs.property('modules', modules) + processResources.from(modulesFile) + inputs.property('modules', modules) outputs.file(modulesFile) doLast { modulesFile.parentFile.mkdirs() @@ -135,8 +137,8 @@ task generatePluginsList { .findAll { it.name.contains('example') == false } .collect { it.name } File pluginsFile = new File(buildDir, 'generated-resources/plugins.txt') - processResources.from(pluginsFile) - inputs.property('plugins', plugins) + processResources.from(pluginsFile) + inputs.property('plugins', plugins) outputs.file(pluginsFile) doLast { pluginsFile.parentFile.mkdirs() @@ -229,9 +231,11 @@ thirdPartyAudit.excludes = [ 'org.apache.commons.compress.utils.IOUtils', 'org.apache.commons.csv.CSVFormat', 'org.apache.commons.csv.QuoteMode', + 'org.apache.kafka.clients.producer.Callback', 'org.apache.kafka.clients.producer.KafkaProducer', 'org.apache.kafka.clients.producer.Producer', 'org.apache.kafka.clients.producer.ProducerRecord', + 'org.apache.kafka.clients.producer.RecordMetadata', 'org.codehaus.stax2.XMLStreamWriter2', 'org.jctools.queues.MessagePassingQueue$Consumer', 'org.jctools.queues.MpscArrayQueue', @@ -251,7 +255,7 @@ thirdPartyAudit.excludes = [ 'org.zeromq.ZMQ', // from org.locationtech.spatial4j.io.GeoJSONReader (spatial4j) - 'org.noggit.JSONParser', + 'org.noggit.JSONParser', ] dependencyLicenses { diff --git a/core/licenses/HdrHistogram-2.1.6.jar.sha1 b/core/licenses/HdrHistogram-2.1.6.jar.sha1 deleted file mode 100644 index 26fc16f2e87f0..0000000000000 --- a/core/licenses/HdrHistogram-2.1.6.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -7495feb7f71ee124bd2a7e7d83590e296d71d80e \ No newline at end of file diff --git a/core/licenses/HdrHistogram-2.1.9.jar.sha1 b/core/licenses/HdrHistogram-2.1.9.jar.sha1 new file mode 100644 index 0000000000000..2378df07b2c0c --- /dev/null +++ b/core/licenses/HdrHistogram-2.1.9.jar.sha1 @@ -0,0 +1 @@ +e4631ce165eb400edecfa32e03d3f1be53dee754 \ No newline at end of file diff --git a/core/licenses/jackson-core-2.8.1.jar.sha1 b/core/licenses/jackson-core-2.8.1.jar.sha1 deleted file mode 100644 index b92131d6fab45..0000000000000 --- a/core/licenses/jackson-core-2.8.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -fd13b1c033741d48291315c6370f7d475a42dccf \ No newline at end of file diff --git a/core/licenses/jackson-core-2.8.6.jar.sha1 b/core/licenses/jackson-core-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..af7677d13c28c --- /dev/null +++ b/core/licenses/jackson-core-2.8.6.jar.sha1 @@ -0,0 +1 @@ +2ef7b1cc34de149600f5e75bc2d5bf40de894e60 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-cbor-2.8.1.jar.sha1 b/core/licenses/jackson-dataformat-cbor-2.8.1.jar.sha1 deleted file mode 100644 index 7f1609bfd85b2..0000000000000 --- a/core/licenses/jackson-dataformat-cbor-2.8.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3a6fb7e75c9972559a78cf5cfc5a48a41a13ea40 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-cbor-2.8.6.jar.sha1 b/core/licenses/jackson-dataformat-cbor-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..6a2e980235381 --- /dev/null +++ b/core/licenses/jackson-dataformat-cbor-2.8.6.jar.sha1 @@ -0,0 +1 @@ +b88721371cfa2d7242bb5e52fe70861aa061c050 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-smile-2.8.1.jar.sha1 b/core/licenses/jackson-dataformat-smile-2.8.1.jar.sha1 deleted file mode 100644 index 114d656a388d3..0000000000000 --- a/core/licenses/jackson-dataformat-smile-2.8.1.jar.sha1 +++ /dev/null 
@@ -1 +0,0 @@ -005b73867bc12224946fc67fc8d49d9f5e698d7f \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-smile-2.8.6.jar.sha1 b/core/licenses/jackson-dataformat-smile-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..19be9a2040bed --- /dev/null +++ b/core/licenses/jackson-dataformat-smile-2.8.6.jar.sha1 @@ -0,0 +1 @@ +71590ad45cee21249774e2f93e5eca66e446cef3 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-yaml-2.8.1.jar.sha1 b/core/licenses/jackson-dataformat-yaml-2.8.1.jar.sha1 deleted file mode 100644 index 32ce0f7434484..0000000000000 --- a/core/licenses/jackson-dataformat-yaml-2.8.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -eb63166c723b0b4b9fb5298fca232a2f6612ec34 \ No newline at end of file diff --git a/core/licenses/jackson-dataformat-yaml-2.8.6.jar.sha1 b/core/licenses/jackson-dataformat-yaml-2.8.6.jar.sha1 new file mode 100644 index 0000000000000..c61dad3bbcdd7 --- /dev/null +++ b/core/licenses/jackson-dataformat-yaml-2.8.6.jar.sha1 @@ -0,0 +1 @@ +8bd44d50f9a6cdff9c7578ea39d524eb519e35ab \ No newline at end of file diff --git a/core/licenses/jna-4.2.2.jar.sha1 b/core/licenses/jna-4.2.2.jar.sha1 deleted file mode 100644 index 8b1acbbe5d7ab..0000000000000 --- a/core/licenses/jna-4.2.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -5012450aee579c3118ff09461d5ce210e0cdc2a9 \ No newline at end of file diff --git a/core/licenses/jna-4.4.0.jar.sha1 b/core/licenses/jna-4.4.0.jar.sha1 new file mode 100644 index 0000000000000..f760fe11e11ee --- /dev/null +++ b/core/licenses/jna-4.4.0.jar.sha1 @@ -0,0 +1 @@ +6edc9b4514969d768039acf43f04210b15658cd7 \ No newline at end of file diff --git a/core/licenses/log4j-1.2-api-2.7.jar.sha1 b/core/licenses/log4j-1.2-api-2.7.jar.sha1 deleted file mode 100644 index f364441414880..0000000000000 --- a/core/licenses/log4j-1.2-api-2.7.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -39f4e6c2d68d4ef8fd4b0883d165682dedd5be52 \ No newline at end of file diff --git a/core/licenses/log4j-1.2-api-2.8.2.jar.sha1 b/core/licenses/log4j-1.2-api-2.8.2.jar.sha1 new file mode 100644 index 0000000000000..39d09bec71767 --- /dev/null +++ b/core/licenses/log4j-1.2-api-2.8.2.jar.sha1 @@ -0,0 +1 @@ +f1543534b8413aac91fa54d1fff65dfff76818cd \ No newline at end of file diff --git a/core/licenses/log4j-api-2.7.jar.sha1 b/core/licenses/log4j-api-2.7.jar.sha1 deleted file mode 100644 index 8f676d9dbdd0e..0000000000000 --- a/core/licenses/log4j-api-2.7.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -8de00e382a817981b737be84cb8def687d392963 \ No newline at end of file diff --git a/core/licenses/log4j-api-2.8.2.jar.sha1 b/core/licenses/log4j-api-2.8.2.jar.sha1 new file mode 100644 index 0000000000000..7c7c1da835c92 --- /dev/null +++ b/core/licenses/log4j-api-2.8.2.jar.sha1 @@ -0,0 +1 @@ +e590eeb783348ce8ddef205b82127f9084d82bf3 \ No newline at end of file diff --git a/core/licenses/log4j-core-2.7.jar.sha1 b/core/licenses/log4j-core-2.7.jar.sha1 deleted file mode 100644 index 07bb057a984ed..0000000000000 --- a/core/licenses/log4j-core-2.7.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a3f2b4e64c61a7fc1ed8f1e5ba371933404ed98a \ No newline at end of file diff --git a/core/licenses/log4j-core-2.8.2.jar.sha1 b/core/licenses/log4j-core-2.8.2.jar.sha1 new file mode 100644 index 0000000000000..4e6c7b4fcc365 --- /dev/null +++ b/core/licenses/log4j-core-2.8.2.jar.sha1 @@ -0,0 +1 @@ +979fc0cf8460302e4ffbfe38c1b66a99450b0bb7 \ No newline at end of file diff --git a/core/licenses/lucene-analyzers-common-6.4.0-snapshot-084f7a0.jar.sha1 
b/core/licenses/lucene-analyzers-common-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index ffa2b42fb9081..0000000000000 --- a/core/licenses/lucene-analyzers-common-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -ad1553dd2eed3a7cd5778bc7520821ac926b56df \ No newline at end of file diff --git a/core/licenses/lucene-analyzers-common-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-analyzers-common-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..9a1f65be58fe6 --- /dev/null +++ b/core/licenses/lucene-analyzers-common-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +5e191674c50c9d99c9838da52cbf67c411998f4e \ No newline at end of file diff --git a/core/licenses/lucene-backward-codecs-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-backward-codecs-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 58587dc58b851..0000000000000 --- a/core/licenses/lucene-backward-codecs-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -dde630b1d09ff928a1f358951747cfad5c46b334 \ No newline at end of file diff --git a/core/licenses/lucene-backward-codecs-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-backward-codecs-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..8ffb313c69496 --- /dev/null +++ b/core/licenses/lucene-backward-codecs-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +45bc34ab640d5d1a7491b523631b902f20db5384 \ No newline at end of file diff --git a/core/licenses/lucene-core-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-core-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 66a9a3208e699..0000000000000 --- a/core/licenses/lucene-core-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -1789bff323a0c013b126f4e51f1f269ebc631277 \ No newline at end of file diff --git a/core/licenses/lucene-core-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-core-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..220b0ea521223 --- /dev/null +++ b/core/licenses/lucene-core-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +b44d86e9077443c3ba4918a85603734461c6b448 \ No newline at end of file diff --git a/core/licenses/lucene-grouping-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-grouping-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 74441065e0d66..0000000000000 --- a/core/licenses/lucene-grouping-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -8cb17916d0e63705f1f715fe0d03ed32916a077a \ No newline at end of file diff --git a/core/licenses/lucene-grouping-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-grouping-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..99612cc340968 --- /dev/null +++ b/core/licenses/lucene-grouping-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +409b616d40e2041a02890b2dc477ed845e3121e9 \ No newline at end of file diff --git a/core/licenses/lucene-highlighter-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-highlighter-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 9aaa848b476f0..0000000000000 --- a/core/licenses/lucene-highlighter-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -79d6ba8fa629a52ad3eb829d085836f5fd2f7a87 \ No newline at end of file diff --git a/core/licenses/lucene-highlighter-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-highlighter-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..a3bd96546f378 --- /dev/null +++ b/core/licenses/lucene-highlighter-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 
+1 @@ +cfac105541315e2ca54955f681b410a7aa3bbb9d \ No newline at end of file diff --git a/core/licenses/lucene-join-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-join-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 4ea4443a65007..0000000000000 --- a/core/licenses/lucene-join-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -19794d8f15402c991d9533bfcd67e2e7a34677ef \ No newline at end of file diff --git a/core/licenses/lucene-join-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-join-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..92c0c80f6a498 --- /dev/null +++ b/core/licenses/lucene-join-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +993c1331130dd26c632b964fd8caac259bb9f3fc \ No newline at end of file diff --git a/core/licenses/lucene-memory-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-memory-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 8128c115c1302..0000000000000 --- a/core/licenses/lucene-memory-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -33e42d3019e072752258bd778912c8d4365470a1 \ No newline at end of file diff --git a/core/licenses/lucene-memory-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-memory-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..6de623ae884a4 --- /dev/null +++ b/core/licenses/lucene-memory-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +ec1460a28850410112a6349a7fff27df31242295 \ No newline at end of file diff --git a/core/licenses/lucene-misc-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-misc-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index d55fa646119d5..0000000000000 --- a/core/licenses/lucene-misc-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a1b3271b3800da349c8b98f7b1a25b2b6192252a \ No newline at end of file diff --git a/core/licenses/lucene-misc-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-misc-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..fd7a6b53d34f9 --- /dev/null +++ b/core/licenses/lucene-misc-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +57d342dbe68cf05361ccfda6bb76f2410cac900b \ No newline at end of file diff --git a/core/licenses/lucene-queries-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-queries-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 99948c1260d9a..0000000000000 --- a/core/licenses/lucene-queries-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -792716d805fcc5091931874c2f2f86f35da8b401 \ No newline at end of file diff --git a/core/licenses/lucene-queries-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-queries-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..e04c283d0fa0e --- /dev/null +++ b/core/licenses/lucene-queries-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +5ed10847b6a2353ac66decd5a2ee1a1d34353049 \ No newline at end of file diff --git a/core/licenses/lucene-queryparser-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-queryparser-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 06cade5307514..0000000000000 --- a/core/licenses/lucene-queryparser-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c3f8bbc6ebe8d31da41fcdb1fa73f13d8170ee62 \ No newline at end of file diff --git a/core/licenses/lucene-queryparser-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-queryparser-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..87871dc29d558 --- /dev/null +++ 
b/core/licenses/lucene-queryparser-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +23ce6c2ea59287d8fe4fe31f466e9a58a1efe7b5 \ No newline at end of file diff --git a/core/licenses/lucene-sandbox-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-sandbox-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 33dc3fac466bf..0000000000000 --- a/core/licenses/lucene-sandbox-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -263901a19686c6cce7dd5c32a4934c42c62454dc \ No newline at end of file diff --git a/core/licenses/lucene-sandbox-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-sandbox-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..ea065b272cf6a --- /dev/null +++ b/core/licenses/lucene-sandbox-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +78bda71c8e65428927136f81112a031aa9cd04d4 \ No newline at end of file diff --git a/core/licenses/lucene-spatial-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-spatial-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 8bcd008672265..0000000000000 --- a/core/licenses/lucene-spatial-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -85426164fcc264a7e3bacc1a70602513540a261a \ No newline at end of file diff --git a/core/licenses/lucene-spatial-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-spatial-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..c623088ce2af8 --- /dev/null +++ b/core/licenses/lucene-spatial-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +1e7ea95e6197176015b13551c7496be4867ede45 \ No newline at end of file diff --git a/core/licenses/lucene-spatial-extras-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-spatial-extras-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index d2041b9a4dd52..0000000000000 --- a/core/licenses/lucene-spatial-extras-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -332cbfaa6b1ee0bf4d820018872988e15cd413d2 \ No newline at end of file diff --git a/core/licenses/lucene-spatial-extras-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-spatial-extras-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..e51de2208ee36 --- /dev/null +++ b/core/licenses/lucene-spatial-extras-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +5ae4ecd6c478456395ae9a3f954b8afc13629bb9 \ No newline at end of file diff --git a/core/licenses/lucene-spatial3d-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-spatial3d-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index b699c89a6d340..0000000000000 --- a/core/licenses/lucene-spatial3d-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3fe3e902b971f4aa2b4a3a417ba5dcf83e968428 \ No newline at end of file diff --git a/core/licenses/lucene-spatial3d-7.0.0-snapshot-a0aef2f.jar.sha1 b/core/licenses/lucene-spatial3d-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..25d042e923a04 --- /dev/null +++ b/core/licenses/lucene-spatial3d-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +d5d1a81fc290b9660a49557f848dc2a3c4f2048b \ No newline at end of file diff --git a/core/licenses/lucene-suggest-6.4.0-snapshot-084f7a0.jar.sha1 b/core/licenses/lucene-suggest-6.4.0-snapshot-084f7a0.jar.sha1 deleted file mode 100644 index 69bb10621f1f4..0000000000000 --- a/core/licenses/lucene-suggest-6.4.0-snapshot-084f7a0.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c4863fe45853163abfbe5c8b8bd7bdcf9a9c7b40 \ No newline at end of file diff --git a/core/licenses/lucene-suggest-7.0.0-snapshot-a0aef2f.jar.sha1 
b/core/licenses/lucene-suggest-7.0.0-snapshot-a0aef2f.jar.sha1 new file mode 100644 index 0000000000000..5ac114c4547df --- /dev/null +++ b/core/licenses/lucene-suggest-7.0.0-snapshot-a0aef2f.jar.sha1 @@ -0,0 +1 @@ +d77cdd8f2782062a3b4c319c64f0fa4d804aafed \ No newline at end of file diff --git a/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DeDuplicatingTokenFilter.java b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DeDuplicatingTokenFilter.java new file mode 100644 index 0000000000000..3265ab8addce8 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DeDuplicatingTokenFilter.java @@ -0,0 +1,201 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.analysis.miscellaneous; + +import org.apache.lucene.analysis.FilteringTokenFilter; +import org.apache.lucene.analysis.TokenFilter; +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.analysis.tokenattributes.TermToBytesRefAttribute; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.common.hash.MurmurHash3; + +import java.io.IOException; +import java.util.ArrayList; + +/** + * Inspects token streams for duplicate sequences of tokens. Token sequences + * have a minimum length - 6 is a good heuristic as it avoids filtering common + * idioms/phrases but detects longer sections that are typical of cut+paste + * copies of text. + * + *

+ * Internally each token is hashed/moduloed into a single byte (so 256 possible + * values for each token) and then recorded in a trie of seen byte sequences + * using a {@link DuplicateByteSequenceSpotter}. This trie is passed into the + * TokenFilter constructor so a single object can be reused across multiple + * documents. + * + *
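A minimal sketch of that reuse pattern, assuming a standard analyzer, a field named "body" and a placeholder list of documents (none of these names come from the patch): one spotter is created up front and each document's token stream is wrapped in a fresh filter that shares it (the emitDuplicates flag is described below).

    import java.io.IOException;
    import java.util.List;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.miscellaneous.DeDuplicatingTokenFilter;
    import org.apache.lucene.analysis.miscellaneous.DuplicateByteSequenceSpotter;
    import org.apache.lucene.analysis.miscellaneous.DuplicateSequenceAttribute;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    class DeDuplicationSketch {
        static void printDuplicates(List<String> docs) throws IOException {
            // One spotter shared across all documents so sequences are tracked globally.
            DuplicateByteSequenceSpotter spotter = new DuplicateByteSequenceSpotter();
            StandardAnalyzer analyzer = new StandardAnalyzer();
            for (String doc : docs) {
                try (TokenStream ts = new DeDuplicatingTokenFilter(
                        analyzer.tokenStream("body", doc), spotter, true)) { // emitDuplicates = true
                    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
                    DuplicateSequenceAttribute dup = ts.addAttribute(DuplicateSequenceAttribute.class);
                    ts.reset();
                    while (ts.incrementToken()) {
                        if (dup.getNumPriorUsesInASequence() > 0) {
                            // token is part of a 6+ token sequence that has been seen before
                            System.out.println("duplicate token: " + term);
                        }
                    }
                    ts.end();
                }
            }
            analyzer.close();
        }
    }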

+ * The emitDuplicates setting controls if duplicate tokens are filtered from + * results or are output (the {@link DuplicateSequenceAttribute} attribute can + * be used to inspect the number of prior sightings when emitDuplicates is true) + */ +public class DeDuplicatingTokenFilter extends FilteringTokenFilter { + private final DuplicateSequenceAttribute seqAtt = addAttribute(DuplicateSequenceAttribute.class); + private final boolean emitDuplicates; + static final MurmurHash3.Hash128 seed = new MurmurHash3.Hash128(); + + public DeDuplicatingTokenFilter(TokenStream in, DuplicateByteSequenceSpotter byteStreamDuplicateSpotter) { + this(in, byteStreamDuplicateSpotter, false); + } + + /** + * + * @param in + * The input token stream + * @param byteStreamDuplicateSpotter + * object which retains trie of token sequences + * @param emitDuplicates + * true if duplicate tokens are to be emitted (use + * {@link DuplicateSequenceAttribute} attribute to inspect number + * of prior sightings of tokens as part of a sequence). + */ + public DeDuplicatingTokenFilter(TokenStream in, DuplicateByteSequenceSpotter byteStreamDuplicateSpotter, boolean emitDuplicates) { + super(new DuplicateTaggingFilter(byteStreamDuplicateSpotter, in)); + this.emitDuplicates = emitDuplicates; + } + + @Override + protected boolean accept() throws IOException { + return emitDuplicates || seqAtt.getNumPriorUsesInASequence() < 1; + } + + private static class DuplicateTaggingFilter extends TokenFilter { + private final DuplicateSequenceAttribute seqAtt = addAttribute(DuplicateSequenceAttribute.class); + + TermToBytesRefAttribute termBytesAtt = addAttribute(TermToBytesRefAttribute.class); + private DuplicateByteSequenceSpotter byteStreamDuplicateSpotter; + private ArrayList allTokens; + int pos = 0; + private final int windowSize; + + protected DuplicateTaggingFilter(DuplicateByteSequenceSpotter byteStreamDuplicateSpotter, TokenStream input) { + super(input); + this.byteStreamDuplicateSpotter = byteStreamDuplicateSpotter; + this.windowSize = DuplicateByteSequenceSpotter.TREE_DEPTH; + } + + + @Override + public final boolean incrementToken() throws IOException { + if (allTokens == null) { + loadAllTokens(); + } + clearAttributes(); + if (pos < allTokens.size()) { + State earlierToken = allTokens.get(pos); + pos++; + restoreState(earlierToken); + return true; + } else { + return false; + } + } + + public void loadAllTokens() throws IOException { + // TODO consider changing this implementation to emit tokens as-we-go + // rather than buffering all. However this array is perhaps not the + // bulk of memory usage (in practice the dupSequenceSpotter requires + // ~5x the original content size in its internal tree ). + allTokens = new ArrayList(256); + + /* + * Given the bytes 123456123456 and a duplicate sequence size of 6 + * the byteStreamDuplicateSpotter will only flag the final byte as + * part of a duplicate sequence due to the byte-at-a-time streaming + * nature of its assessments. 
When this happens we retain a buffer + * of the last 6 tokens so that we can mark the states of prior + * tokens (bytes 7 to 11) as also being duplicates + */ + + pos = 0; + boolean isWrapped = false; + State priorStatesBuffer[] = new State[windowSize]; + short priorMaxNumSightings[] = new short[windowSize]; + int cursor = 0; + while (input.incrementToken()) { + BytesRef bytesRef = termBytesAtt.getBytesRef(); + long tokenHash = MurmurHash3.hash128(bytesRef.bytes, bytesRef.offset, bytesRef.length, 0, seed).h1; + byte tokenByte = (byte) (tokenHash & 0xFF); + short numSightings = byteStreamDuplicateSpotter.addByte(tokenByte); + priorStatesBuffer[cursor] = captureState(); + // Revise prior captured State objects if the latest + // token is marked as a duplicate + if (numSightings >= 1) { + int numLengthsToRecord = windowSize; + int pos = cursor; + while (numLengthsToRecord > 0) { + if (pos < 0) { + pos = windowSize - 1; + } + priorMaxNumSightings[pos] = (short) Math.max(priorMaxNumSightings[pos], numSightings); + numLengthsToRecord--; + pos--; + } + } + // Reposition cursor to next free slot + cursor++; + if (cursor >= windowSize) { + // wrap around the buffer + cursor = 0; + isWrapped = true; + } + // clean out the end of the tail that we may overwrite if the + // next iteration adds a new head + if (isWrapped) { + // tokenPos is now positioned on tail - emit any valid + // tokens we may about to overwrite in the next iteration + if (priorStatesBuffer[cursor] != null) { + recordLengthInfoState(priorMaxNumSightings, priorStatesBuffer, cursor); + } + } + } // end loop reading all tokens from stream + + // Flush the buffered tokens + int pos = isWrapped ? nextAfter(cursor) : 0; + while (pos != cursor) { + recordLengthInfoState(priorMaxNumSightings, priorStatesBuffer, pos); + pos = nextAfter(pos); + } + } + + private int nextAfter(int pos) { + pos++; + if (pos >= windowSize) { + pos = 0; + } + return pos; + } + + private void recordLengthInfoState(short[] maxNumSightings, State[] tokenStates, int cursor) { + if (maxNumSightings[cursor] > 0) { + // We need to patch in the max sequence length we recorded at + // this position into the token state + restoreState(tokenStates[cursor]); + seqAtt.setNumPriorUsesInASequence(maxNumSightings[cursor]); + maxNumSightings[cursor] = 0; + // record the patched state + tokenStates[cursor] = captureState(); + } + allTokens.add(tokenStates[cursor]); + } + + } +} \ No newline at end of file diff --git a/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttribute.java b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttribute.java new file mode 100644 index 0000000000000..16b9ecd1a5247 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttribute.java @@ -0,0 +1,32 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.analysis.miscellaneous; + +import org.apache.lucene.analysis.TokenStream; +import org.apache.lucene.util.Attribute; +import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute; + +/** + * This attribute can be used to indicate that the {@link PositionLengthAttribute} + * should not be taken in account in this {@link TokenStream}. + * Query parsers can extract this information to decide if this token stream should be analyzed + * as a graph or not. + */ +public interface DisableGraphAttribute extends Attribute {} diff --git a/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttributeImpl.java b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttributeImpl.java new file mode 100644 index 0000000000000..5a4e7f79f238e --- /dev/null +++ b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DisableGraphAttributeImpl.java @@ -0,0 +1,38 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.analysis.miscellaneous; + +import org.apache.lucene.util.AttributeImpl; +import org.apache.lucene.util.AttributeReflector; + +/** Default implementation of {@link DisableGraphAttribute}. */ +public class DisableGraphAttributeImpl extends AttributeImpl implements DisableGraphAttribute { + public DisableGraphAttributeImpl() {} + + @Override + public void clear() {} + + @Override + public void reflectWith(AttributeReflector reflector) { + } + + @Override + public void copyTo(AttributeImpl target) {} +} diff --git a/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DuplicateByteSequenceSpotter.java b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DuplicateByteSequenceSpotter.java new file mode 100644 index 0000000000000..7a58eaa1375f1 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DuplicateByteSequenceSpotter.java @@ -0,0 +1,311 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. 
See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.analysis.miscellaneous; + +import org.apache.lucene.util.RamUsageEstimator; + +/** + * A Trie structure for analysing byte streams for duplicate sequences. Bytes + * from a stream are added one at a time using the addByte method and the number + * of times it has been seen as part of a sequence is returned. + * + * The minimum required length for a duplicate sequence detected is 6 bytes. + * + * The design goals are to maximize speed of lookup while minimizing the space + * required to do so. This has led to a hybrid solution for representing the + * bytes that make up a sequence in the trie. + * + * If we have 6 bytes in sequence e.g. abcdef then they are represented as + * object nodes in the tree as follows: + *

+ * (a)-(b)-(c)-(def as an int) + *

+ * + * + * {@link RootTreeNode} objects are used for the first two levels of the tree + * (representing bytes a and b in the example sequence). The combinations of + * objects at these 2 levels are few so internally these objects allocate an + * array of 256 child node objects to quickly address children by indexing + * directly into the densely packed array using a byte value. The third level in + * the tree holds {@link LightweightTreeNode} nodes that have few children + * (typically much less than 256) and so use a dynamically-grown array to hold + * child nodes as simple int primitives. These ints represent the final 3 bytes + * of a sequence and also hold a count of the number of times the entire sequence + * path has been visited (count is a single byte). + *
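A worked example of that packing, using arbitrary byte values, mirroring the arithmetic in addByte() and LightweightTreeNode.add() further down:

    // last three bytes of a sequence, here 0x64, 0x65, 0x66
    int sequence = ((0xFF & 0x64) << 8 | (0xFF & 0x65)) << 8 | (0xFF & 0x66); // 0x00646566
    int child = (sequence << 8) + 1;  // 0x64656601: top 24 bits hold the bytes, low byte holds hit count 1
    int hitCount = child & 0xFF;      // 1; only this low byte is incremented, capped at MAX_HIT_COUNT (255)
    boolean sameSequence = (child & 0xFFFFFF00) == (sequence << 8); // true: stored child matches these bytes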

+ * The Trie grows indefinitely as more content is added. While it could in theory + * become massive (a tree of depth 6 could produce 256^6, roughly 2.8 x 10^14, nodes), + * non-random content such as English text contains far fewer distinct sequences, + * so in practice the tree stays much smaller. + *

+ * In future we may look at using one of these strategies when memory is tight: + *

+ * 1. auto-pruning methods to remove less-visited parts of the tree
+ * 2. auto-reset to wipe the whole tree and restart when a memory threshold is reached
+ * 3. halting any growth of the tree
+ *
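A minimal sketch of feeding the spotter directly (hypothetical input; the 12 bytes below mirror the 123456123456 example in DeDuplicatingTokenFilter):

    DuplicateByteSequenceSpotter spotter = new DuplicateByteSequenceSpotter();
    byte[] bytes = "abcdefabcdef".getBytes(java.nio.charset.StandardCharsets.US_ASCII);
    for (byte b : bytes) {
        short seen = spotter.addByte(b);
        // seen == 0 for the first 11 bytes: the window is either not yet full
        // or holds a 6-byte sequence that has not been seen before.
        // The final 'f' completes a second "abcdef" window, so seen == 1 for it.
    }
    spotter.startNewSequence(); // drop the buffered window; recorded hit counts are kept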
+ * + * Tests on real-world-text show that the size of the tree is a multiple of the + * input text where that multiplier varies between 10 and 5 times as the content + * size increased from 10 to 100 megabytes of content. + * + */ +public class DuplicateByteSequenceSpotter { + public static final int TREE_DEPTH = 6; + // The maximum number of repetitions that are counted + public static final int MAX_HIT_COUNT = 255; + private final TreeNode root; + private boolean sequenceBufferFilled = false; + private final byte[] sequenceBuffer = new byte[TREE_DEPTH]; + private int nextFreePos = 0; + + // ==Performance info + private final int[] nodesAllocatedByDepth; + private int nodesResizedByDepth; + // ==== RAM usage estimation settings ==== + private long bytesAllocated; + // Root node object plus inner-class reference to containing "this" + // (profiler suggested this was a cost) + static final long TREE_NODE_OBJECT_SIZE = RamUsageEstimator.NUM_BYTES_OBJECT_HEADER + RamUsageEstimator.NUM_BYTES_OBJECT_REF; + // A TreeNode specialization with an array ref (dynamically allocated and + // fixed-size) + static final long ROOT_TREE_NODE_OBJECT_SIZE = TREE_NODE_OBJECT_SIZE + RamUsageEstimator.NUM_BYTES_OBJECT_REF; + // A KeyedTreeNode specialization with an array ref (dynamically allocated + // and grown) + static final long LIGHTWEIGHT_TREE_NODE_OBJECT_SIZE = TREE_NODE_OBJECT_SIZE + RamUsageEstimator.NUM_BYTES_OBJECT_REF; + // A KeyedTreeNode specialization with a short-based hit count and a + // sequence of bytes encoded as an int + static final long LEAF_NODE_OBJECT_SIZE = TREE_NODE_OBJECT_SIZE + Short.BYTES + Integer.BYTES; + + public DuplicateByteSequenceSpotter() { + this.nodesAllocatedByDepth = new int[4]; + this.bytesAllocated = 0; + root = new RootTreeNode((byte) 1, null, 0); + } + + /** + * Reset the sequence detection logic to avoid any continuation of the + * immediately previous bytes. A minimum of dupSequenceSize bytes need to be + * added before any new duplicate sequences will be reported. + * Hit counts are not reset by calling this method. + */ + public void startNewSequence() { + sequenceBufferFilled = false; + nextFreePos = 0; + } + + /** + * Add a byte to the sequence. + * @param b + * the next byte in a sequence + * @return number of times this byte and the preceding 6 bytes have been + * seen before as a sequence (only counts up to 255) + * + */ + public short addByte(byte b) { + // Add latest byte to circular buffer + sequenceBuffer[nextFreePos] = b; + nextFreePos++; + if (nextFreePos >= sequenceBuffer.length) { + nextFreePos = 0; + sequenceBufferFilled = true; + } + if (sequenceBufferFilled == false) { + return 0; + } + TreeNode node = root; + // replay updated sequence of bytes represented in the circular + // buffer starting from the tail + int p = nextFreePos; + + // The first tier of nodes are addressed using individual bytes from the + // sequence + node = node.add(sequenceBuffer[p], 0); + p = nextBufferPos(p); + node = node.add(sequenceBuffer[p], 1); + p = nextBufferPos(p); + node = node.add(sequenceBuffer[p], 2); + + // The final 3 bytes in the sequence are represented in an int + // where the 4th byte will contain a hit count. 
+ + + p = nextBufferPos(p); + int sequence = 0xFF & sequenceBuffer[p]; + p = nextBufferPos(p); + sequence = sequence << 8 | (0xFF & sequenceBuffer[p]); + p = nextBufferPos(p); + sequence = sequence << 8 | (0xFF & sequenceBuffer[p]); + return (short) (node.add(sequence << 8) - 1); + } + + private int nextBufferPos(int p) { + p++; + if (p >= sequenceBuffer.length) { + p = 0; + } + return p; + } + + /** + * Base class for nodes in the tree. Subclasses are optimised for use at + * different locations in the tree - speed-optimized nodes represent + * branches near the root while space-optimized nodes are used for deeper + * leaves/branches. + */ + abstract class TreeNode { + + TreeNode(byte key, TreeNode parentNode, int depth) { + nodesAllocatedByDepth[depth]++; + } + + public abstract TreeNode add(byte b, int depth); + + /** + * + * @param byteSequence + * a sequence of bytes encoded as an int + * @return the number of times the full sequence has been seen (counting + * up to a maximum of 32767). + */ + public abstract short add(int byteSequence); + } + + // Node implementation for use at the root of the tree that sacrifices space + // for speed. + class RootTreeNode extends TreeNode { + + // A null-or-256 sized array that can be indexed into using a byte for + // fast access. + // Being near the root of the tree it is expected that this is a + // non-sparse array. + TreeNode[] children; + + RootTreeNode(byte key, TreeNode parentNode, int depth) { + super(key, parentNode, depth); + bytesAllocated += ROOT_TREE_NODE_OBJECT_SIZE; + } + + public TreeNode add(byte b, int depth) { + if (children == null) { + children = new TreeNode[256]; + bytesAllocated += (RamUsageEstimator.NUM_BYTES_OBJECT_REF * 256); + } + int bIndex = 0xFF & b; + TreeNode node = children[bIndex]; + if (node == null) { + if (depth <= 1) { + // Depths 0 and 1 use RootTreeNode impl and create + // RootTreeNodeImpl children + node = new RootTreeNode(b, this, depth); + } else { + // Deeper-level nodes are less visited but more numerous + // so use a more space-friendly data structure + node = new LightweightTreeNode(b, this, depth); + } + children[bIndex] = node; + } + return node; + } + + @Override + public short add(int byteSequence) { + throw new UnsupportedOperationException("Root nodes do not support byte sequences encoded as integers"); + } + + } + + // Node implementation for use by the depth 3 branches of the tree that + // sacrifices speed for space. + final class LightweightTreeNode extends TreeNode { + + // An array dynamically resized but frequently only sized 1 as most + // sequences leading to end leaves are one-off paths. + // It is scanned for matches sequentially and benchmarks showed + // that sorting contents on insertion didn't improve performance. + int[] children = null; + + LightweightTreeNode(byte key, TreeNode parentNode, int depth) { + super(key, parentNode, depth); + bytesAllocated += LIGHTWEIGHT_TREE_NODE_OBJECT_SIZE; + + } + + @Override + public short add(int byteSequence) { + if (children == null) { + // Create array adding new child with the byte sequence combined with hitcount of 1. + // Most nodes at this level we expect to have only 1 child so we start with the + // smallest possible child array. 
+ children = new int[1]; + bytesAllocated += RamUsageEstimator.NUM_BYTES_ARRAY_HEADER + Integer.BYTES; + children[0] = byteSequence + 1; + return 1; + } + // Find existing child and if discovered increment count + for (int i = 0; i < children.length; i++) { + int child = children[i]; + if (byteSequence == (child & 0xFFFFFF00)) { + int hitCount = child & 0xFF; + if (hitCount < MAX_HIT_COUNT) { + children[i]++; + } + return (short) (hitCount + 1); + } + } + // Grow array adding new child + int[] newChildren = new int[children.length + 1]; + bytesAllocated += Integer.BYTES; + + System.arraycopy(children, 0, newChildren, 0, children.length); + children = newChildren; + // Combine the byte sequence with a hit count of 1 into an int. + children[newChildren.length - 1] = byteSequence + 1; + nodesResizedByDepth++; + return 1; + } + + @Override + public TreeNode add(byte b, int depth) { + throw new UnsupportedOperationException("Leaf nodes do not take byte sequences"); + } + + } + + public final long getEstimatedSizeInBytes() { + return bytesAllocated; + } + + /** + * @return Performance info - the number of nodes allocated at each depth + */ + public int[] getNodesAllocatedByDepth() { + return nodesAllocatedByDepth.clone(); + } + + /** + * @return Performance info - the number of resizing of children arrays, at + * each depth + */ + public int getNodesResizedByDepth() { + return nodesResizedByDepth; + } + +} diff --git a/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DuplicateSequenceAttribute.java b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DuplicateSequenceAttribute.java new file mode 100644 index 0000000000000..bd797823d6835 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DuplicateSequenceAttribute.java @@ -0,0 +1,35 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.apache.lucene.analysis.miscellaneous; + +import org.apache.lucene.util.Attribute; + +/** + * Provides statistics useful for detecting duplicate sections of text + */ +public interface DuplicateSequenceAttribute extends Attribute { + /** + * @return The number of times this token has been seen previously as part + * of a sequence (counts to a max of 255) + */ + short getNumPriorUsesInASequence(); + + void setNumPriorUsesInASequence(short len); +} \ No newline at end of file diff --git a/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DuplicateSequenceAttributeImpl.java b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DuplicateSequenceAttributeImpl.java new file mode 100644 index 0000000000000..4c989a5b3cc38 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/analysis/miscellaneous/DuplicateSequenceAttributeImpl.java @@ -0,0 +1,53 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.analysis.miscellaneous; + +import org.apache.lucene.util.AttributeImpl; +import org.apache.lucene.util.AttributeReflector; + +public class DuplicateSequenceAttributeImpl extends AttributeImpl implements DuplicateSequenceAttribute { + protected short numPriorUsesInASequence = 0; + + @Override + public void clear() { + numPriorUsesInASequence = 0; + } + + @Override + public void copyTo(AttributeImpl target) { + DuplicateSequenceAttributeImpl t = (DuplicateSequenceAttributeImpl) target; + t.numPriorUsesInASequence = numPriorUsesInASequence; + } + + @Override + public short getNumPriorUsesInASequence() { + return numPriorUsesInASequence; + } + + @Override + public void setNumPriorUsesInASequence(short len) { + numPriorUsesInASequence = len; + } + + @Override + public void reflectWith(AttributeReflector reflector) { + reflector.reflect(DuplicateSequenceAttribute.class, "sequenceLength", numPriorUsesInASequence); + } +} diff --git a/core/src/main/java/org/apache/lucene/index/OneMergeHelper.java b/core/src/main/java/org/apache/lucene/index/OneMergeHelper.java index 99ef7f4dd7fef..f8b8c6178225b 100644 --- a/core/src/main/java/org/apache/lucene/index/OneMergeHelper.java +++ b/core/src/main/java/org/apache/lucene/index/OneMergeHelper.java @@ -19,6 +19,8 @@ package org.apache.lucene.index; +import java.io.IOException; + /** * Allows pkg private access */ @@ -27,4 +29,33 @@ private OneMergeHelper() {} public static String getSegmentName(MergePolicy.OneMerge merge) { return merge.info != null ? merge.info.info.name : "_na_"; } + + /** + * The current MB per second rate limit for this merge. 
+ **/ + public static double getMbPerSec(Thread thread, MergePolicy.OneMerge merge) { + if (thread instanceof ConcurrentMergeScheduler.MergeThread) { + return ((ConcurrentMergeScheduler.MergeThread) thread).rateLimiter.getMBPerSec(); + } + assert false: "this is not merge thread"; + return Double.POSITIVE_INFINITY; + } + + /** + * Returns total bytes written by this merge. + **/ + public static long getTotalBytesWritten(Thread thread, + MergePolicy.OneMerge merge) throws IOException { + /** + * TODO: The number of bytes written during the merge should be accessible in OneMerge. + */ + if (thread instanceof ConcurrentMergeScheduler.MergeThread) { + return ((ConcurrentMergeScheduler.MergeThread) thread).rateLimiter + .getTotalBytesWritten(); + } + assert false: "this is not merge thread"; + return merge.totalBytesSize(); + } + + } diff --git a/core/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java b/core/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java index a4b94b007fd28..cd5da674b8e71 100644 --- a/core/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java +++ b/core/src/main/java/org/apache/lucene/queries/BlendedTermQuery.java @@ -163,7 +163,7 @@ protected int compare(int i, int j) { if (prev > current) { actualDf++; } - contexts[i] = ctx = adjustDF(ctx, Math.min(maxDoc, actualDf)); + contexts[i] = ctx = adjustDF(reader.getContext(), ctx, Math.min(maxDoc, actualDf)); prev = current; if (sumTTF >= 0 && ctx.totalTermFreq() >= 0) { sumTTF += ctx.totalTermFreq(); @@ -179,16 +179,17 @@ protected int compare(int i, int j) { } // the blended sumTTF can't be greater than the sumTTTF on the field final long fixedTTF = sumTTF == -1 ? -1 : sumTTF; - contexts[i] = adjustTTF(contexts[i], fixedTTF); + contexts[i] = adjustTTF(reader.getContext(), contexts[i], fixedTTF); } } - private TermContext adjustTTF(TermContext termContext, long sumTTF) { + private TermContext adjustTTF(IndexReaderContext readerContext, TermContext termContext, long sumTTF) { + assert termContext.wasBuiltFor(readerContext); if (sumTTF == -1 && termContext.totalTermFreq() == -1) { return termContext; } - TermContext newTermContext = new TermContext(termContext.topReaderContext); - List leaves = termContext.topReaderContext.leaves(); + TermContext newTermContext = new TermContext(readerContext); + List leaves = readerContext.leaves(); final int len; if (leaves == null) { len = 1; @@ -209,7 +210,8 @@ private TermContext adjustTTF(TermContext termContext, long sumTTF) { return newTermContext; } - private static TermContext adjustDF(TermContext ctx, int newDocFreq) { + private static TermContext adjustDF(IndexReaderContext readerContext, TermContext ctx, int newDocFreq) { + assert ctx.wasBuiltFor(readerContext); // Use a value of ttf that is consistent with the doc freq (ie. 
gte) long newTTF; if (ctx.totalTermFreq() < 0) { @@ -217,14 +219,14 @@ private static TermContext adjustDF(TermContext ctx, int newDocFreq) { } else { newTTF = Math.max(ctx.totalTermFreq(), newDocFreq); } - List leaves = ctx.topReaderContext.leaves(); + List leaves = readerContext.leaves(); final int len; if (leaves == null) { len = 1; } else { len = leaves.size(); } - TermContext newCtx = new TermContext(ctx.topReaderContext); + TermContext newCtx = new TermContext(readerContext); for (int i = 0; i < len; ++i) { TermState termState = ctx.get(i); if (termState == null) { @@ -294,36 +296,12 @@ public int hashCode() { return Objects.hash(classHash(), Arrays.hashCode(equalsTerms())); } - public static BlendedTermQuery booleanBlendedQuery(Term[] terms, final boolean disableCoord) { - return booleanBlendedQuery(terms, null, disableCoord); - } - - public static BlendedTermQuery booleanBlendedQuery(Term[] terms, final float[] boosts, final boolean disableCoord) { - return new BlendedTermQuery(terms, boosts) { - @Override - protected Query topLevelQuery(Term[] terms, TermContext[] ctx, int[] docFreqs, int maxDoc) { - BooleanQuery.Builder booleanQueryBuilder = new BooleanQuery.Builder(); - booleanQueryBuilder.setDisableCoord(disableCoord); - for (int i = 0; i < terms.length; i++) { - Query query = new TermQuery(terms[i], ctx[i]); - if (boosts != null && boosts[i] != 1f) { - query = new BoostQuery(query, boosts[i]); - } - booleanQueryBuilder.add(query, BooleanClause.Occur.SHOULD); - } - return booleanQueryBuilder.build(); - } - }; - } - - public static BlendedTermQuery commonTermsBlendedQuery(Term[] terms, final float[] boosts, final boolean disableCoord, final float maxTermFrequency) { + public static BlendedTermQuery commonTermsBlendedQuery(Term[] terms, final float[] boosts, final float maxTermFrequency) { return new BlendedTermQuery(terms, boosts) { @Override protected Query topLevelQuery(Term[] terms, TermContext[] ctx, int[] docFreqs, int maxDoc) { BooleanQuery.Builder highBuilder = new BooleanQuery.Builder(); - highBuilder.setDisableCoord(disableCoord); BooleanQuery.Builder lowBuilder = new BooleanQuery.Builder(); - lowBuilder.setDisableCoord(disableCoord); for (int i = 0; i < terms.length; i++) { Query query = new TermQuery(terms[i], ctx[i]); if (boosts != null && boosts[i] != 1f) { @@ -341,7 +319,6 @@ protected Query topLevelQuery(Term[] terms, TermContext[] ctx, int[] docFreqs, i BooleanQuery low = lowBuilder.build(); if (low.clauses().isEmpty()) { BooleanQuery.Builder queryBuilder = new BooleanQuery.Builder(); - queryBuilder.setDisableCoord(disableCoord); for (BooleanClause booleanClause : high) { queryBuilder.add(booleanClause.getQuery(), Occur.MUST); } @@ -350,7 +327,6 @@ protected Query topLevelQuery(Term[] terms, TermContext[] ctx, int[] docFreqs, i return low; } else { return new BooleanQuery.Builder() - .setDisableCoord(true) .add(high, BooleanClause.Occur.SHOULD) .add(low, BooleanClause.Occur.MUST) .build(); diff --git a/core/src/main/java/org/apache/lucene/queries/ExtendedCommonTermsQuery.java b/core/src/main/java/org/apache/lucene/queries/ExtendedCommonTermsQuery.java index 1889c6e759b11..4580de4cc4a00 100644 --- a/core/src/main/java/org/apache/lucene/queries/ExtendedCommonTermsQuery.java +++ b/core/src/main/java/org/apache/lucene/queries/ExtendedCommonTermsQuery.java @@ -35,8 +35,8 @@ public class ExtendedCommonTermsQuery extends CommonTermsQuery { private final MappedFieldType fieldType; - public ExtendedCommonTermsQuery(Occur highFreqOccur, Occur lowFreqOccur, float 
maxTermFrequency, boolean disableCoord, MappedFieldType fieldType) { - super(highFreqOccur, lowFreqOccur, maxTermFrequency, disableCoord); + public ExtendedCommonTermsQuery(Occur highFreqOccur, Occur lowFreqOccur, float maxTermFrequency, MappedFieldType fieldType) { + super(highFreqOccur, lowFreqOccur, maxTermFrequency); this.fieldType = fieldType; } diff --git a/core/src/main/java/org/apache/lucene/queries/MinDocQuery.java b/core/src/main/java/org/apache/lucene/queries/MinDocQuery.java index a8b7dc9299ff0..d4f9ab729736c 100644 --- a/core/src/main/java/org/apache/lucene/queries/MinDocQuery.java +++ b/core/src/main/java/org/apache/lucene/queries/MinDocQuery.java @@ -57,8 +57,8 @@ public boolean equals(Object obj) { } @Override - public Weight createWeight(IndexSearcher searcher, boolean needsScores) throws IOException { - return new ConstantScoreWeight(this) { + public Weight createWeight(IndexSearcher searcher, boolean needsScores, float boost) throws IOException { + return new ConstantScoreWeight(this, boost) { @Override public Scorer scorer(LeafReaderContext context) throws IOException { final int maxDoc = context.reader().maxDoc(); diff --git a/core/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java b/core/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java index 7c3e8652c072d..b74dbb1184d24 100644 --- a/core/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java +++ b/core/src/main/java/org/apache/lucene/queryparser/classic/ExistsFieldQueryExtension.java @@ -19,8 +19,10 @@ package org.apache.lucene.queryparser.classic; -import org.apache.lucene.search.ConstantScoreQuery; +import org.apache.lucene.index.Term; import org.apache.lucene.search.Query; +import org.apache.lucene.search.WildcardQuery; +import org.elasticsearch.index.mapper.FieldNamesFieldMapper; import org.elasticsearch.index.query.ExistsQueryBuilder; import org.elasticsearch.index.query.QueryShardContext; @@ -30,6 +32,13 @@ public class ExistsFieldQueryExtension implements FieldQueryExtension { @Override public Query query(QueryShardContext context, String queryText) { - return new ConstantScoreQuery(ExistsQueryBuilder.newFilter(context, queryText)); + final FieldNamesFieldMapper.FieldNamesFieldType fieldNamesFieldType = + (FieldNamesFieldMapper.FieldNamesFieldType) context.getMapperService().fullName(FieldNamesFieldMapper.NAME); + if (fieldNamesFieldType.isEnabled() == false) { + // The field_names_field is disabled so we switch to a wildcard query that matches all terms + return new WildcardQuery(new Term(queryText, "*")); + } + + return ExistsQueryBuilder.newFilter(context, queryText); } } diff --git a/core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java b/core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java index 976c4706725ea..07f646a89d1cc 100644 --- a/core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java +++ b/core/src/main/java/org/apache/lucene/queryparser/classic/MapperQueryParser.java @@ -20,11 +20,11 @@ package org.apache.lucene.queryparser.classic; import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.analysis.miscellaneous.DisableGraphAttribute; import org.apache.lucene.analysis.TokenStream; import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; import org.apache.lucene.index.Term; -import 
org.apache.lucene.queryparser.analyzing.AnalyzingQueryParser; import org.apache.lucene.search.BooleanClause; import org.apache.lucene.search.BooleanQuery; import org.apache.lucene.search.BoostQuery; @@ -35,17 +35,22 @@ import org.apache.lucene.search.PhraseQuery; import org.apache.lucene.search.Query; import org.apache.lucene.search.SynonymQuery; +import org.apache.lucene.search.spans.SpanNearQuery; +import org.apache.lucene.search.spans.SpanOrQuery; +import org.apache.lucene.search.spans.SpanQuery; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; import org.apache.lucene.util.automaton.RegExp; import org.elasticsearch.common.lucene.search.Queries; import org.elasticsearch.common.unit.Fuzziness; +import org.elasticsearch.index.mapper.AllFieldMapper; import org.elasticsearch.index.mapper.DateFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.mapper.StringFieldType; import org.elasticsearch.index.query.QueryShardContext; import org.elasticsearch.index.query.support.QueryParsers; +import org.elasticsearch.index.analysis.ShingleTokenFilterFactory; import java.io.IOException; import java.util.ArrayList; @@ -53,8 +58,7 @@ import java.util.HashMap; import java.util.List; import java.util.Map; -import java.util.Objects; - +import java.util.Collections; import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.common.lucene.search.Queries.fixNegativeQueryIfNeeded; @@ -65,7 +69,7 @@ * Also breaks fields with [type].[name] into a boolean query that must include the type * as well as the query on the name. */ -public class MapperQueryParser extends AnalyzingQueryParser { +public class MapperQueryParser extends QueryParser { public static final Map FIELD_QUERY_EXTENSIONS; @@ -88,7 +92,8 @@ public MapperQueryParser(QueryShardContext context) { public void reset(QueryParserSettings settings) { this.settings = settings; - if (settings.fieldsAndWeights().isEmpty()) { + if (settings.fieldsAndWeights() == null) { + // this query has no explicit fields to query so we fallback to the default field this.field = settings.defaultField(); } else if (settings.fieldsAndWeights().size() == 1) { this.field = settings.fieldsAndWeights().keySet().iterator().next(); @@ -98,14 +103,13 @@ public void reset(QueryParserSettings settings) { setAnalyzer(settings.analyzer()); setMultiTermRewriteMethod(settings.rewriteMethod()); setEnablePositionIncrements(settings.enablePositionIncrements()); + setSplitOnWhitespace(settings.splitOnWhitespace()); setAutoGeneratePhraseQueries(settings.autoGeneratePhraseQueries()); setMaxDeterminizedStates(settings.maxDeterminizedStates()); setAllowLeadingWildcard(settings.allowLeadingWildcard()); - setLowercaseExpandedTerms(false); setPhraseSlop(settings.phraseSlop()); setDefaultOperator(settings.defaultOperator()); setFuzzyPrefixLength(settings.fuzzyPrefixLength()); - setSplitOnWhitespace(settings.splitOnWhitespace()); } /** @@ -146,32 +150,26 @@ public Query getFieldQuery(String field, String queryText, boolean quoted) throw if (fields != null) { if (fields.size() == 1) { return getFieldQuerySingle(fields.iterator().next(), queryText, quoted); + } else if (fields.isEmpty()) { + // the requested fields do not match any field in the mapping + // happens for wildcard fields only since we cannot expand to a valid field name + // if there is no match in the mappings. 
+ return new MatchNoDocsQuery("empty fields"); } - if (settings.useDisMax()) { - List queries = new ArrayList<>(); - boolean added = false; - for (String mField : fields) { - Query q = getFieldQuerySingle(mField, queryText, quoted); - if (q != null) { - added = true; - queries.add(applyBoost(mField, q)); - } - } - if (!added) { - return null; - } - return new DisjunctionMaxQuery(queries, settings.tieBreaker()); - } else { - List clauses = new ArrayList<>(); - for (String mField : fields) { - Query q = getFieldQuerySingle(mField, queryText, quoted); - if (q != null) { - clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD)); - } + float tiebreaker = settings.useDisMax() ? settings.tieBreaker() : 1.0f; + List queries = new ArrayList<>(); + boolean added = false; + for (String mField : fields) { + Query q = getFieldQuerySingle(mField, queryText, quoted); + if (q != null) { + added = true; + queries.add(applyBoost(mField, q)); } - if (clauses.isEmpty()) return null; // happens for stopwords - return getBooleanQueryCoordDisabled(clauses); } + if (!added) { + return null; + } + return new DisjunctionMaxQuery(queries, tiebreaker); } else { return getFieldQuerySingle(field, queryText, quoted); } @@ -247,33 +245,21 @@ private Query getFieldQuerySingle(String field, String queryText, boolean quoted protected Query getFieldQuery(String field, String queryText, int slop) throws ParseException { Collection fields = extractMultiFields(field); if (fields != null) { - if (settings.useDisMax()) { - List queries = new ArrayList<>(); - boolean added = false; - for (String mField : fields) { - Query q = super.getFieldQuery(mField, queryText, slop); - if (q != null) { - added = true; - q = applySlop(q, slop); - queries.add(applyBoost(mField, q)); - } - } - if (!added) { - return null; - } - return new DisjunctionMaxQuery(queries, settings.tieBreaker()); - } else { - List clauses = new ArrayList<>(); - for (String mField : fields) { - Query q = super.getFieldQuery(mField, queryText, slop); - if (q != null) { - q = applySlop(q, slop); - clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD)); - } + float tiebreaker = settings.useDisMax() ? settings.tieBreaker() : 1.0f; + List queries = new ArrayList<>(); + boolean added = false; + for (String mField : fields) { + Query q = super.getFieldQuery(mField, queryText, slop); + if (q != null) { + added = true; + q = applySlop(q, slop); + queries.add(applyBoost(mField, q)); } - if (clauses.isEmpty()) return null; // happens for stopwords - return getBooleanQueryCoordDisabled(clauses); } + if (!added) { + return null; + } + return new DisjunctionMaxQuery(queries, tiebreaker); } else { return super.getFieldQuery(field, queryText, slop); } @@ -300,31 +286,20 @@ protected Query getRangeQuery(String field, String part1, String part2, return getRangeQuerySingle(fields.iterator().next(), part1, part2, startInclusive, endInclusive, context); } - if (settings.useDisMax()) { - List queries = new ArrayList<>(); - boolean added = false; - for (String mField : fields) { - Query q = getRangeQuerySingle(mField, part1, part2, startInclusive, endInclusive, context); - if (q != null) { - added = true; - queries.add(applyBoost(mField, q)); - } - } - if (!added) { - return null; + float tiebreaker = settings.useDisMax() ? 
settings.tieBreaker() : 1.0f; + List queries = new ArrayList<>(); + boolean added = false; + for (String mField : fields) { + Query q = getRangeQuerySingle(mField, part1, part2, startInclusive, endInclusive, context); + if (q != null) { + added = true; + queries.add(applyBoost(mField, q)); } - return new DisjunctionMaxQuery(queries, settings.tieBreaker()); - } else { - List clauses = new ArrayList<>(); - for (String mField : fields) { - Query q = getRangeQuerySingle(mField, part1, part2, startInclusive, endInclusive, context); - if (q != null) { - clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD)); - } - } - if (clauses.isEmpty()) return null; // happens for stopwords - return getBooleanQueryCoordDisabled(clauses); } + if (!added) { + return null; + } + return new DisjunctionMaxQuery(queries, tiebreaker); } private Query getRangeQuerySingle(String field, String part1, String part2, @@ -359,30 +334,20 @@ protected Query getFuzzyQuery(String field, String termStr, String minSimilarity if (fields.size() == 1) { return getFuzzyQuerySingle(fields.iterator().next(), termStr, minSimilarity); } - if (settings.useDisMax()) { - List queries = new ArrayList<>(); - boolean added = false; - for (String mField : fields) { - Query q = getFuzzyQuerySingle(mField, termStr, minSimilarity); - if (q != null) { - added = true; - queries.add(applyBoost(mField, q)); - } - } - if (!added) { - return null; - } - return new DisjunctionMaxQuery(queries, settings.tieBreaker()); - } else { - List clauses = new ArrayList<>(); - for (String mField : fields) { - Query q = getFuzzyQuerySingle(mField, termStr, minSimilarity); - if (q != null) { - clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD)); - } + float tiebreaker = settings.useDisMax() ? settings.tieBreaker() : 1.0f; + List queries = new ArrayList<>(); + boolean added = false; + for (String mField : fields) { + Query q = getFuzzyQuerySingle(mField, termStr, minSimilarity); + if (q != null) { + added = true; + queries.add(applyBoost(mField, q)); } - return getBooleanQueryCoordDisabled(clauses); } + if (!added) { + return null; + } + return new DisjunctionMaxQuery(queries, tiebreaker); } else { return getFuzzyQuerySingle(field, termStr, minSimilarity); } @@ -422,31 +387,20 @@ protected Query getPrefixQuery(String field, String termStr) throws ParseExcepti if (fields.size() == 1) { return getPrefixQuerySingle(fields.iterator().next(), termStr); } - if (settings.useDisMax()) { - List queries = new ArrayList<>(); - boolean added = false; - for (String mField : fields) { - Query q = getPrefixQuerySingle(mField, termStr); - if (q != null) { - added = true; - queries.add(applyBoost(mField, q)); - } - } - if (!added) { - return null; - } - return new DisjunctionMaxQuery(queries, settings.tieBreaker()); - } else { - List clauses = new ArrayList<>(); - for (String mField : fields) { - Query q = getPrefixQuerySingle(mField, termStr); - if (q != null) { - clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD)); - } + float tiebreaker = settings.useDisMax() ? 
settings.tieBreaker() : 1.0f; + List queries = new ArrayList<>(); + boolean added = false; + for (String mField : fields) { + Query q = getPrefixQuerySingle(mField, termStr); + if (q != null) { + added = true; + queries.add(applyBoost(mField, q)); } - if (clauses.isEmpty()) return null; // happens for stopwords - return getBooleanQueryCoordDisabled(clauses); } + if (!added) { + return null; + } + return new DisjunctionMaxQuery(queries, tiebreaker); } else { return getPrefixQuerySingle(field, termStr); } @@ -554,7 +508,7 @@ private Query getPossiblyAnalyzedPrefixQuery(String field, String termStr) throw innerClauses.add(new BooleanClause(super.getPrefixQuery(field, token), BooleanClause.Occur.SHOULD)); } - posQuery = getBooleanQueryCoordDisabled(innerClauses); + posQuery = getBooleanQuery(innerClauses); } clauses.add(new BooleanClause(posQuery, getDefaultOperator() == Operator.AND ? BooleanClause.Occur.MUST : BooleanClause.Occur.SHOULD)); @@ -564,59 +518,50 @@ private Query getPossiblyAnalyzedPrefixQuery(String field, String termStr) throw @Override protected Query getWildcardQuery(String field, String termStr) throws ParseException { - if (termStr.equals("*")) { - // we want to optimize for match all query for the "*:*", and "*" cases - if ("*".equals(field) || Objects.equals(field, this.field)) { - String actualField = field; - if (actualField == null) { - actualField = this.field; - } - if (actualField == null) { - return newMatchAllDocsQuery(); - } - if ("*".equals(actualField) || "_all".equals(actualField)) { - return newMatchAllDocsQuery(); - } - // effectively, we check if a field exists or not - return FIELD_QUERY_EXTENSIONS.get(ExistsFieldQueryExtension.NAME).query(context, actualField); + if (termStr.equals("*") && field != null) { + /** + * We rewrite _all:* to a match all query. + * TODO: We can remove this special case when _all is completely removed. + */ + if ("*".equals(field) || AllFieldMapper.NAME.equals(field)) { + return newMatchAllDocsQuery(); } + String actualField = field; + if (actualField == null) { + actualField = this.field; + } + // effectively, we check if a field exists or not + return FIELD_QUERY_EXTENSIONS.get(ExistsFieldQueryExtension.NAME).query(context, actualField); } Collection fields = extractMultiFields(field); if (fields != null) { if (fields.size() == 1) { return getWildcardQuerySingle(fields.iterator().next(), termStr); } - if (settings.useDisMax()) { - List queries = new ArrayList<>(); - boolean added = false; - for (String mField : fields) { - Query q = getWildcardQuerySingle(mField, termStr); - if (q != null) { - added = true; - queries.add(applyBoost(mField, q)); - } - } - if (!added) { - return null; - } - return new DisjunctionMaxQuery(queries, settings.tieBreaker()); - } else { - List clauses = new ArrayList<>(); - for (String mField : fields) { - Query q = getWildcardQuerySingle(mField, termStr); - if (q != null) { - clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD)); - } + float tiebreaker = settings.useDisMax() ? 
settings.tieBreaker() : 1.0f; + List queries = new ArrayList<>(); + boolean added = false; + for (String mField : fields) { + Query q = getWildcardQuerySingle(mField, termStr); + if (q != null) { + added = true; + queries.add(applyBoost(mField, q)); } - if (clauses.isEmpty()) return null; // happens for stopwords - return getBooleanQueryCoordDisabled(clauses); } + if (!added) { + return null; + } + return new DisjunctionMaxQuery(queries, tiebreaker); } else { return getWildcardQuerySingle(field, termStr); } } private Query getWildcardQuerySingle(String field, String termStr) throws ParseException { + if ("*".equals(termStr)) { + // effectively, we check if a field exists or not + return FIELD_QUERY_EXTENSIONS.get(ExistsFieldQueryExtension.NAME).query(context, field); + } String indexedNameField = field; currentFieldType = null; Analyzer oldAnalyzer = getAnalyzer(); @@ -646,31 +591,20 @@ protected Query getRegexpQuery(String field, String termStr) throws ParseExcepti if (fields.size() == 1) { return getRegexpQuerySingle(fields.iterator().next(), termStr); } - if (settings.useDisMax()) { - List queries = new ArrayList<>(); - boolean added = false; - for (String mField : fields) { - Query q = getRegexpQuerySingle(mField, termStr); - if (q != null) { - added = true; - queries.add(applyBoost(mField, q)); - } - } - if (!added) { - return null; - } - return new DisjunctionMaxQuery(queries, settings.tieBreaker()); - } else { - List clauses = new ArrayList<>(); - for (String mField : fields) { - Query q = getRegexpQuerySingle(mField, termStr); - if (q != null) { - clauses.add(new BooleanClause(applyBoost(mField, q), BooleanClause.Occur.SHOULD)); - } + float tiebreaker = settings.useDisMax() ? settings.tieBreaker() : 1.0f; + List queries = new ArrayList<>(); + boolean added = false; + for (String mField : fields) { + Query q = getRegexpQuerySingle(mField, termStr); + if (q != null) { + added = true; + queries.add(applyBoost(mField, q)); } - if (clauses.isEmpty()) return null; // happens for stopwords - return getBooleanQueryCoordDisabled(clauses); } + if (!added) { + return null; + } + return new DisjunctionMaxQuery(queries, tiebreaker); } else { return getRegexpQuerySingle(field, termStr); } @@ -706,19 +640,6 @@ private Query getRegexpQuerySingle(String field, String termStr) throws ParseExc } } - /** - * @deprecated review all use of this, don't rely on coord - */ - @Deprecated - protected Query getBooleanQueryCoordDisabled(List clauses) throws ParseException { - BooleanQuery.Builder builder = new BooleanQuery.Builder(); - builder.setDisableCoord(true); - for (BooleanClause clause : clauses) { - builder.add(clause); - } - return fixNegativeQueryIfNeeded(builder.build()); - } - @Override protected Query getBooleanQuery(List clauses) throws ParseException { @@ -730,7 +651,7 @@ protected Query getBooleanQuery(List clauses) throws ParseExcepti } private Query applyBoost(String field, Query q) { - Float fieldBoost = settings.fieldsAndWeights().get(field); + Float fieldBoost = settings.fieldsAndWeights() == null ? 
null : settings.fieldsAndWeights().get(field); if (fieldBoost != null && fieldBoost != 1f) { return new BoostQuery(q, fieldBoost); } @@ -739,33 +660,58 @@ private Query applyBoost(String field, Query q) { private Query applySlop(Query q, int slop) { if (q instanceof PhraseQuery) { - PhraseQuery pq = (PhraseQuery) q; - PhraseQuery.Builder builder = new PhraseQuery.Builder(); - builder.setSlop(slop); - final Term[] terms = pq.getTerms(); - final int[] positions = pq.getPositions(); - for (int i = 0; i < terms.length; ++i) { - builder.add(terms[i], positions[i]); - } - pq = builder.build(); //make sure that the boost hasn't been set beforehand, otherwise we'd lose it assert q instanceof BoostQuery == false; - return pq; + return addSlopToPhrase((PhraseQuery) q, slop); } else if (q instanceof MultiPhraseQuery) { MultiPhraseQuery.Builder builder = new MultiPhraseQuery.Builder((MultiPhraseQuery) q); builder.setSlop(slop); return builder.build(); + } else if (q instanceof SpanQuery) { + return addSlopToSpan((SpanQuery) q, slop); } else { return q; } } + private Query addSlopToSpan(SpanQuery query, int slop) { + if (query instanceof SpanNearQuery) { + return new SpanNearQuery(((SpanNearQuery) query).getClauses(), slop, + ((SpanNearQuery) query).isInOrder()); + } else if (query instanceof SpanOrQuery) { + SpanQuery[] clauses = new SpanQuery[((SpanOrQuery) query).getClauses().length]; + int pos = 0; + for (SpanQuery clause : ((SpanOrQuery) query).getClauses()) { + clauses[pos++] = (SpanQuery) addSlopToSpan(clause, slop); + } + return new SpanOrQuery(clauses); + } else { + return query; + } + } + + /** + * Rebuild a phrase query with a slop value + */ + private PhraseQuery addSlopToPhrase(PhraseQuery query, int slop) { + PhraseQuery.Builder builder = new PhraseQuery.Builder(); + builder.setSlop(slop); + final Term[] terms = query.getTerms(); + final int[] positions = query.getPositions(); + for (int i = 0; i < terms.length; ++i) { + builder.add(terms[i], positions[i]); + } + + return builder.build(); + } + private Collection extractMultiFields(String field) { Collection fields; if (field != null) { fields = context.simpleMatchToIndexNames(field); } else { - fields = settings.fieldsAndWeights().keySet(); + Map fieldsAndWeights = settings.fieldsAndWeights(); + fields = fieldsAndWeights == null ? Collections.emptyList() : fieldsAndWeights.keySet(); } return fields; } @@ -780,4 +726,30 @@ public Query parse(String query) throws ParseException { } return super.parse(query); } + + /** + * Checks if graph analysis should be enabled for the field depending + * on the provided {@link Analyzer} + */ + protected Query createFieldQuery(Analyzer analyzer, BooleanClause.Occur operator, String field, + String queryText, boolean quoted, int phraseSlop) { + assert operator == BooleanClause.Occur.SHOULD || operator == BooleanClause.Occur.MUST; + + // Use the analyzer to get all the tokens, and then build an appropriate + // query based on the analysis chain. + try (TokenStream source = analyzer.tokenStream(field, queryText)) { + if (source.hasAttribute(DisableGraphAttribute.class)) { + /** + * A {@link TokenFilter} in this {@link TokenStream} disabled the graph analysis to avoid + * paths explosion. See {@link ShingleTokenFilterFactory} for details. 
+ */ + setEnableGraphQueries(false); + } + Query query = super.createFieldQuery(source, operator, field, quoted, phraseSlop); + setEnableGraphQueries(true); + return query; + } catch (IOException e) { + throw new RuntimeException("Error analyzing query text", e); + } + } } diff --git a/core/src/main/java/org/apache/lucene/search/grouping/CollapseTopFieldDocs.java b/core/src/main/java/org/apache/lucene/search/grouping/CollapseTopFieldDocs.java new file mode 100644 index 0000000000000..b4d3c82343957 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/grouping/CollapseTopFieldDocs.java @@ -0,0 +1,242 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.lucene.search.grouping; + +import org.apache.lucene.search.FieldComparator; +import org.apache.lucene.search.FieldDoc; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.TopFieldDocs; +import org.apache.lucene.util.PriorityQueue; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +/** + * Represents hits returned by {@link CollapsingTopDocsCollector#getTopDocs()}. 
+ */ +public final class CollapseTopFieldDocs extends TopFieldDocs { + /** The field used for collapsing **/ + public final String field; + /** The collapse value for each top doc */ + public final Object[] collapseValues; + + public CollapseTopFieldDocs(String field, int totalHits, ScoreDoc[] scoreDocs, + SortField[] sortFields, Object[] values, float maxScore) { + super(totalHits, scoreDocs, sortFields, maxScore); + this.field = field; + this.collapseValues = values; + } + + // Refers to one hit: + private static final class ShardRef { + // Which shard (index into shardHits[]): + final int shardIndex; + + // True if we should use the incoming ScoreDoc.shardIndex for sort order + final boolean useScoreDocIndex; + + // Which hit within the shard: + int hitIndex; + + ShardRef(int shardIndex, boolean useScoreDocIndex) { + this.shardIndex = shardIndex; + this.useScoreDocIndex = useScoreDocIndex; + } + + @Override + public String toString() { + return "ShardRef(shardIndex=" + shardIndex + " hitIndex=" + hitIndex + ")"; + } + + int getShardIndex(ScoreDoc scoreDoc) { + if (useScoreDocIndex) { + if (scoreDoc.shardIndex == -1) { + throw new IllegalArgumentException("setShardIndex is false but TopDocs[" + + shardIndex + "].scoreDocs[" + hitIndex + "] is not set"); + } + return scoreDoc.shardIndex; + } else { + // NOTE: we don't assert that shardIndex is -1 here, because caller could in fact have set it but asked us to ignore it now + return shardIndex; + } + } + } + + /** + * if we need to tie-break since score / sort value are the same we first compare shard index (lower shard wins) + * and then iff shard index is the same we use the hit index. + */ + static boolean tieBreakLessThan(ShardRef first, ScoreDoc firstDoc, ShardRef second, ScoreDoc secondDoc) { + final int firstShardIndex = first.getShardIndex(firstDoc); + final int secondShardIndex = second.getShardIndex(secondDoc); + // Tie break: earlier shard wins + if (firstShardIndex < secondShardIndex) { + return true; + } else if (firstShardIndex > secondShardIndex) { + return false; + } else { + // Tie break in same shard: resolve however the + // shard had resolved it: + assert first.hitIndex != second.hitIndex; + return first.hitIndex < second.hitIndex; + } + } + + private static class MergeSortQueue extends PriorityQueue { + // These are really FieldDoc instances: + final ScoreDoc[][] shardHits; + final FieldComparator[] comparators; + final int[] reverseMul; + + MergeSortQueue(Sort sort, CollapseTopFieldDocs[] shardHits) { + super(shardHits.length); + this.shardHits = new ScoreDoc[shardHits.length][]; + for (int shardIDX = 0; shardIDX < shardHits.length; shardIDX++) { + final ScoreDoc[] shard = shardHits[shardIDX].scoreDocs; + if (shard != null) { + this.shardHits[shardIDX] = shard; + // Fail gracefully if API is misused: + for (int hitIDX = 0; hitIDX < shard.length; hitIDX++) { + final ScoreDoc sd = shard[hitIDX]; + final FieldDoc gd = (FieldDoc) sd; + assert gd.fields != null; + } + } + } + + final SortField[] sortFields = sort.getSort(); + comparators = new FieldComparator[sortFields.length]; + reverseMul = new int[sortFields.length]; + for (int compIDX = 0; compIDX < sortFields.length; compIDX++) { + final SortField sortField = sortFields[compIDX]; + comparators[compIDX] = sortField.getComparator(1, compIDX); + reverseMul[compIDX] = sortField.getReverse() ? 
-1 : 1; + } + } + + // Returns true if first is < second + @Override + public boolean lessThan(ShardRef first, ShardRef second) { + assert first != second; + final FieldDoc firstFD = (FieldDoc) shardHits[first.shardIndex][first.hitIndex]; + final FieldDoc secondFD = (FieldDoc) shardHits[second.shardIndex][second.hitIndex]; + + for (int compIDX = 0; compIDX < comparators.length; compIDX++) { + final FieldComparator comp = comparators[compIDX]; + + final int cmp = + reverseMul[compIDX] * comp.compareValues(firstFD.fields[compIDX], secondFD.fields[compIDX]); + + if (cmp != 0) { + return cmp < 0; + } + } + return tieBreakLessThan(first, firstFD, second, secondFD); + } + } + + /** + * Returns a new CollapseTopDocs, containing topN collapsed results across + * the provided CollapseTopDocs, sorting by score. Each {@link CollapseTopFieldDocs} instance must be sorted. + **/ + public static CollapseTopFieldDocs merge(Sort sort, int start, int size, + CollapseTopFieldDocs[] shardHits, boolean setShardIndex) { + String collapseField = shardHits[0].field; + for (int i = 1; i < shardHits.length; i++) { + if (collapseField.equals(shardHits[i].field) == false) { + throw new IllegalArgumentException("collapse field differ across shards [" + + collapseField + "] != [" + shardHits[i].field + "]"); + } + } + final PriorityQueue queue = new MergeSortQueue(sort, shardHits); + + int totalHitCount = 0; + int availHitCount = 0; + float maxScore = Float.MIN_VALUE; + for(int shardIDX=0;shardIDX 0) { + availHitCount += shard.scoreDocs.length; + queue.add(new ShardRef(shardIDX, setShardIndex == false)); + maxScore = Math.max(maxScore, shard.getMaxScore()); + } + } + + if (availHitCount == 0) { + maxScore = Float.NaN; + } + + final ScoreDoc[] hits; + final Object[] values; + if (availHitCount <= start) { + hits = new ScoreDoc[0]; + values = new Object[0]; + } else { + List hitList = new ArrayList<>(); + List collapseList = new ArrayList<>(); + int requestedResultWindow = start + size; + int numIterOnHits = Math.min(availHitCount, requestedResultWindow); + int hitUpto = 0; + Set seen = new HashSet<>(); + while (hitUpto < numIterOnHits) { + if (queue.size() == 0) { + break; + } + ShardRef ref = queue.top(); + final ScoreDoc hit = shardHits[ref.shardIndex].scoreDocs[ref.hitIndex]; + final Object collapseValue = shardHits[ref.shardIndex].collapseValues[ref.hitIndex++]; + if (seen.contains(collapseValue)) { + if (ref.hitIndex < shardHits[ref.shardIndex].scoreDocs.length) { + queue.updateTop(); + } else { + queue.pop(); + } + continue; + } + seen.add(collapseValue); + if (setShardIndex) { + hit.shardIndex = ref.shardIndex; + } + if (hitUpto >= start) { + hitList.add(hit); + collapseList.add(collapseValue); + } + + hitUpto++; + + if (ref.hitIndex < shardHits[ref.shardIndex].scoreDocs.length) { + // Not done with this these TopDocs yet: + queue.updateTop(); + } else { + queue.pop(); + } + } + hits = hitList.toArray(new ScoreDoc[0]); + values = collapseList.toArray(new Object[0]); + } + return new CollapseTopFieldDocs(collapseField, totalHitCount, hits, sort.getSort(), values, maxScore); + } +} diff --git a/core/src/main/java/org/apache/lucene/search/grouping/CollapsingDocValuesSource.java b/core/src/main/java/org/apache/lucene/search/grouping/CollapsingDocValuesSource.java new file mode 100644 index 0000000000000..cbcd1e3a4117d --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/grouping/CollapsingDocValuesSource.java @@ -0,0 +1,262 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license 
agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.lucene.search.grouping; + +import org.apache.lucene.index.DocValues; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.LeafReader; +import org.apache.lucene.index.LeafReaderContext; +import org.apache.lucene.index.NumericDocValues; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.SortedSetDocValues; +import org.apache.lucene.util.BytesRef; +import org.elasticsearch.index.fielddata.AbstractNumericDocValues; +import org.elasticsearch.index.fielddata.AbstractSortedDocValues; + +import java.io.IOException; +import java.util.Collection; + +/** + * Utility class that ensures that a single collapse key is extracted per document. + */ +abstract class CollapsingDocValuesSource extends GroupSelector { + protected final String field; + + CollapsingDocValuesSource(String field) { + this.field = field; + } + + @Override + public void setGroups(Collection> groups) { + throw new UnsupportedOperationException(); + } + + /** + * Implementation for {@link NumericDocValues} and {@link SortedNumericDocValues}. + * Fails with an {@link IllegalStateException} if a document contains multiple values for the specified field. + */ + static class Numeric extends CollapsingDocValuesSource { + private NumericDocValues values; + private long value; + private boolean hasValue; + + Numeric(String field) { + super(field); + } + + @Override + public State advanceTo(int doc) throws IOException { + if (values.advanceExact(doc)) { + hasValue = true; + value = values.longValue(); + return State.ACCEPT; + } else { + hasValue = false; + return State.SKIP; + } + } + + @Override + public Long currentValue() { + return hasValue ? 
value : null; + } + + @Override + public Long copyValue() { + return currentValue(); + } + + @Override + public void setNextReader(LeafReaderContext readerContext) throws IOException { + LeafReader reader = readerContext.reader(); + DocValuesType type = getDocValuesType(reader, field); + if (type == null || type == DocValuesType.NONE) { + values = DocValues.emptyNumeric(); + return ; + } + switch (type) { + case NUMERIC: + values = DocValues.getNumeric(reader, field); + break; + + case SORTED_NUMERIC: + final SortedNumericDocValues sorted = DocValues.getSortedNumeric(reader, field); + values = DocValues.unwrapSingleton(sorted); + if (values == null) { + values = new AbstractNumericDocValues() { + + private long value; + + @Override + public boolean advanceExact(int target) throws IOException { + if (sorted.advanceExact(target)) { + if (sorted.docValueCount() > 1) { + throw new IllegalStateException("failed to collapse " + target + + ", the collapse field must be single valued"); + } + value = sorted.nextValue(); + return true; + } else { + return false; + } + } + + @Override + public int docID() { + return sorted.docID(); + } + + @Override + public long longValue() throws IOException { + return value; + } + + }; + } + break; + + default: + throw new IllegalStateException("unexpected doc values type " + + type + "` for field `" + field + "`"); + } + } + } + + /** + * Implementation for {@link SortedDocValues} and {@link SortedSetDocValues}. + * Fails with an {@link IllegalStateException} if a document contains multiple values for the specified field. + */ + static class Keyword extends CollapsingDocValuesSource { + private SortedDocValues values; + private int ord; + + Keyword(String field) { + super(field); + } + + @Override + public org.apache.lucene.search.grouping.GroupSelector.State advanceTo(int doc) + throws IOException { + if (values.advanceExact(doc)) { + ord = values.ordValue(); + return State.ACCEPT; + } else { + ord = -1; + return State.SKIP; + } + } + + @Override + public BytesRef currentValue() { + if (ord == -1) { + return null; + } else { + try { + return values.lookupOrd(ord); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + } + + @Override + public BytesRef copyValue() { + BytesRef value = currentValue(); + if (value == null) { + return null; + } else { + return BytesRef.deepCopyOf(value); + } + } + + @Override + public void setNextReader(LeafReaderContext readerContext) throws IOException { + LeafReader reader = readerContext.reader(); + DocValuesType type = getDocValuesType(reader, field); + if (type == null || type == DocValuesType.NONE) { + values = DocValues.emptySorted(); + return ; + } + switch (type) { + case SORTED: + values = DocValues.getSorted(reader, field); + break; + + case SORTED_SET: + final SortedSetDocValues sorted = DocValues.getSortedSet(reader, field); + values = DocValues.unwrapSingleton(sorted); + if (values == null) { + values = new AbstractSortedDocValues() { + + private int ord; + + @Override + public boolean advanceExact(int target) throws IOException { + if (sorted.advanceExact(target)) { + ord = (int) sorted.nextOrd(); + if (sorted.nextOrd() != SortedSetDocValues.NO_MORE_ORDS) { + throw new IllegalStateException("failed to collapse " + target + + ", the collapse field must be single valued"); + } + return true; + } else { + return false; + } + } + + @Override + public int docID() { + return sorted.docID(); + } + + @Override + public int ordValue() { + return ord; + } + + @Override + public BytesRef lookupOrd(int ord) 
throws IOException { + return sorted.lookupOrd(ord); + } + + @Override + public int getValueCount() { + return (int) sorted.getValueCount(); + } + }; + } + break; + + default: + throw new IllegalStateException("unexpected doc values type " + + type + "` for field `" + field + "`"); + } + } + } + + private static DocValuesType getDocValuesType(LeafReader in, String field) { + FieldInfo fi = in.getFieldInfos().fieldInfo(field); + if (fi != null) { + return fi.getDocValuesType(); + } + return null; + } +} diff --git a/core/src/main/java/org/apache/lucene/search/grouping/CollapsingTopDocsCollector.java b/core/src/main/java/org/apache/lucene/search/grouping/CollapsingTopDocsCollector.java new file mode 100644 index 0000000000000..fedda3ead596b --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/grouping/CollapsingTopDocsCollector.java @@ -0,0 +1,160 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.apache.lucene.search.grouping; + +import org.apache.lucene.search.FieldDoc; +import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.search.Scorer; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; + +import java.io.IOException; +import java.util.Collection; +import java.util.Iterator; + +import static org.apache.lucene.search.SortField.Type.SCORE; + +/** + * A collector that groups documents based on field values and returns {@link CollapseTopFieldDocs} + * output. The collapsing is done in a single pass by selecting only the top sorted document per collapse key. + * The value used for the collapse key of each group can be found in {@link CollapseTopFieldDocs#collapseValues}. + */ +public final class CollapsingTopDocsCollector extends FirstPassGroupingCollector { + protected final String collapseField; + + protected final Sort sort; + protected Scorer scorer; + + private int totalHitCount; + private float maxScore; + private final boolean trackMaxScore; + + CollapsingTopDocsCollector(GroupSelector groupSelector, String collapseField, Sort sort, + int topN, boolean trackMaxScore) { + super(groupSelector, sort, topN); + this.collapseField = collapseField; + this.trackMaxScore = trackMaxScore; + if (trackMaxScore) { + maxScore = Float.NEGATIVE_INFINITY; + } else { + maxScore = Float.NaN; + } + this.sort = sort; + } + + /** + * Transform {@link FirstPassGroupingCollector#getTopGroups(int, boolean)} output in + * {@link CollapseTopFieldDocs}. The collapsing needs only one pass so we can get the final top docs at the end + * of the first pass. 
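Because the collapse is resolved in that single pass, a caller only needs to run the collector and read the result back. A minimal single-shard sketch, assuming a keyword doc-values field named `user_id` and a match-all query (both placeholders, not part of this patch):

```java
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.grouping.CollapseTopFieldDocs;
import org.apache.lucene.search.grouping.CollapsingTopDocsCollector;

import java.io.IOException;

final class CollapseSearchExample {
    /** Returns the top 10 hits, keeping only the best-sorted hit per distinct "user_id" value. */
    static CollapseTopFieldDocs topHitPerUser(IndexSearcher searcher) throws IOException {
        CollapsingTopDocsCollector<?> collector =
                CollapsingTopDocsCollector.createKeyword("user_id", Sort.RELEVANCE, 10, true);
        searcher.search(new MatchAllDocsQuery(), collector);
        return collector.getTopDocs(); // single pass: no second phase is needed
    }
}
```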
+ */ + public CollapseTopFieldDocs getTopDocs() throws IOException { + Collection> groups = super.getTopGroups(0, true); + if (groups == null) { + return new CollapseTopFieldDocs(collapseField, totalHitCount, new ScoreDoc[0], + sort.getSort(), new Object[0], Float.NaN); + } + FieldDoc[] docs = new FieldDoc[groups.size()]; + Object[] collapseValues = new Object[groups.size()]; + int scorePos = -1; + for (int index = 0; index < sort.getSort().length; index++) { + SortField sortField = sort.getSort()[index]; + if (sortField.getType() == SCORE) { + scorePos = index; + break; + } + } + int pos = 0; + Iterator> it = orderedGroups.iterator(); + for (SearchGroup group : groups) { + assert it.hasNext(); + CollectedSearchGroup col = it.next(); + float score = Float.NaN; + if (scorePos != -1) { + score = (float) group.sortValues[scorePos]; + } + docs[pos] = new FieldDoc(col.topDoc, score, group.sortValues); + collapseValues[pos] = group.groupValue; + pos++; + } + return new CollapseTopFieldDocs(collapseField, totalHitCount, docs, sort.getSort(), + collapseValues, maxScore); + } + + @Override + public boolean needsScores() { + if (super.needsScores() == false) { + return trackMaxScore; + } + return true; + } + + @Override + public void setScorer(Scorer scorer) throws IOException { + super.setScorer(scorer); + this.scorer = scorer; + } + + @Override + public void collect(int doc) throws IOException { + super.collect(doc); + if (trackMaxScore) { + maxScore = Math.max(maxScore, scorer.score()); + } + totalHitCount++; + } + + /** + * Create a collapsing top docs collector on a {@link org.apache.lucene.index.NumericDocValues} field. + * It accepts also {@link org.apache.lucene.index.SortedNumericDocValues} field but + * the collect will fail with an {@link IllegalStateException} if a document contains more than one value for the + * field. + * + * @param collapseField The sort field used to group + * documents. + * @param sort The {@link Sort} used to sort the collapsed hits. + * The collapsing keeps only the top sorted document per collapsed key. + * This must be non-null, ie, if you want to groupSort by relevance + * use Sort.RELEVANCE. + * @param topN How many top groups to keep. + */ + public static CollapsingTopDocsCollector createNumeric(String collapseField, Sort sort, + int topN, boolean trackMaxScore) { + return new CollapsingTopDocsCollector<>(new CollapsingDocValuesSource.Numeric(collapseField), + collapseField, sort, topN, trackMaxScore); + } + + /** + * Create a collapsing top docs collector on a {@link org.apache.lucene.index.SortedDocValues} field. + * It accepts also {@link org.apache.lucene.index.SortedSetDocValues} field but + * the collect will fail with an {@link IllegalStateException} if a document contains more than one value for the + * field. + * + * @param collapseField The sort field used to group + * documents. + * @param sort The {@link Sort} used to sort the collapsed hits. The collapsing keeps only the top sorted + * document per collapsed key. + * This must be non-null, ie, if you want to groupSort by relevance use Sort.RELEVANCE. + * @param topN How many top groups to keep. 
+ */ + public static CollapsingTopDocsCollector createKeyword(String collapseField, Sort sort, + int topN, boolean trackMaxScore) { + return new CollapsingTopDocsCollector<>(new CollapsingDocValuesSource.Keyword(collapseField), + collapseField, sort, topN, trackMaxScore); + } +} diff --git a/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java b/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java deleted file mode 100644 index 30f57b2626c4b..0000000000000 --- a/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPostingsHighlighter.java +++ /dev/null @@ -1,137 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.apache.lucene.search.postingshighlight; - -import org.apache.lucene.analysis.Analyzer; -import org.apache.lucene.search.IndexSearcher; -import org.apache.lucene.search.Query; - -import java.io.IOException; -import java.text.BreakIterator; -import java.util.Map; - -/** - * Subclass of the {@link PostingsHighlighter} that works for a single field in a single document. - * Uses a custom {@link PassageFormatter}. Accepts field content as a constructor argument, given that loading - * is custom and can be done reading from _source field. Supports using different {@link BreakIterator} to break - * the text into fragments. Considers every distinct field value as a discrete passage for highlighting (unless - * the whole content needs to be highlighted). Supports both returning empty snippets and non highlighted snippets - * when no highlighting can be performed. - * - * The use that we make of the postings highlighter is not optimal. It would be much better to highlight - * multiple docs in a single call, as we actually lose its sequential IO. That would require to - * refactor the elasticsearch highlight api which currently works per hit. - */ -public final class CustomPostingsHighlighter extends PostingsHighlighter { - - private static final Snippet[] EMPTY_SNIPPET = new Snippet[0]; - private static final Passage[] EMPTY_PASSAGE = new Passage[0]; - - private final Analyzer analyzer; - private final CustomPassageFormatter passageFormatter; - private final BreakIterator breakIterator; - private final boolean returnNonHighlightedSnippets; - private final String fieldValue; - - /** - * Creates a new instance of {@link CustomPostingsHighlighter} - * - * @param analyzer the analyzer used for the field at index time, used for multi term queries internally - * @param passageFormatter our own {@link PassageFormatter} which generates snippets in forms of {@link Snippet} objects - * @param fieldValue the original field values as constructor argument, loaded from te _source field or the relevant stored field. 
- * @param returnNonHighlightedSnippets whether non highlighted snippets should be returned rather than empty snippets when - * no highlighting can be performed - */ - public CustomPostingsHighlighter(Analyzer analyzer, CustomPassageFormatter passageFormatter, String fieldValue, boolean returnNonHighlightedSnippets) { - this(analyzer, passageFormatter, null, fieldValue, returnNonHighlightedSnippets); - } - - /** - * Creates a new instance of {@link CustomPostingsHighlighter} - * - * @param analyzer the analyzer used for the field at index time, used for multi term queries internally - * @param passageFormatter our own {@link PassageFormatter} which generates snippets in forms of {@link Snippet} objects - * @param breakIterator an instance {@link BreakIterator} selected depending on the highlighting options - * @param fieldValue the original field values as constructor argument, loaded from te _source field or the relevant stored field. - * @param returnNonHighlightedSnippets whether non highlighted snippets should be returned rather than empty snippets when - * no highlighting can be performed - */ - public CustomPostingsHighlighter(Analyzer analyzer, CustomPassageFormatter passageFormatter, BreakIterator breakIterator, String fieldValue, boolean returnNonHighlightedSnippets) { - this.analyzer = analyzer; - this.passageFormatter = passageFormatter; - this.breakIterator = breakIterator; - this.returnNonHighlightedSnippets = returnNonHighlightedSnippets; - this.fieldValue = fieldValue; - } - - /** - * Highlights terms extracted from the provided query within the content of the provided field name - */ - public Snippet[] highlightField(String field, Query query, IndexSearcher searcher, int docId, int maxPassages) throws IOException { - Map fieldsAsObjects = super.highlightFieldsAsObjects(new String[]{field}, query, searcher, new int[]{docId}, new int[]{maxPassages}); - Object[] snippetObjects = fieldsAsObjects.get(field); - if (snippetObjects != null) { - //one single document at a time - assert snippetObjects.length == 1; - Object snippetObject = snippetObjects[0]; - if (snippetObject != null && snippetObject instanceof Snippet[]) { - return (Snippet[]) snippetObject; - } - } - return EMPTY_SNIPPET; - } - - @Override - protected PassageFormatter getFormatter(String field) { - return passageFormatter; - } - - @Override - protected BreakIterator getBreakIterator(String field) { - if (breakIterator == null) { - return super.getBreakIterator(field); - } - return breakIterator; - } - - /* - By default the postings highlighter returns non highlighted snippet when there are no matches. 
- We want to return no snippets by default, unless no_match_size is greater than 0 - */ - @Override - protected Passage[] getEmptyHighlight(String fieldName, BreakIterator bi, int maxPassages) { - if (returnNonHighlightedSnippets) { - //we want to return the first sentence of the first snippet only - return super.getEmptyHighlight(fieldName, bi, 1); - } - return EMPTY_PASSAGE; - } - - @Override - protected Analyzer getIndexAnalyzer(String field) { - return analyzer; - } - - @Override - protected String[][] loadFieldValues(IndexSearcher searcher, String[] fields, int[] docids, int maxLength) throws IOException { - //we only highlight one field, one document at a time - return new String[][]{new String[]{fieldValue}}; - } -} diff --git a/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java b/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java index c65f962dbb8b0..312b4b3dd0b34 100644 --- a/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java +++ b/core/src/main/java/org/apache/lucene/search/suggest/analyzing/XAnalyzingSuggester.java @@ -392,7 +392,7 @@ private static final class EscapingTokenStreamToAutomaton extends TokenStreamTo final BytesRefBuilder spare = new BytesRefBuilder(); private char sepLabel; - public EscapingTokenStreamToAutomaton(char sepLabel) { + EscapingTokenStreamToAutomaton(char sepLabel) { this.sepLabel = sepLabel; } @@ -432,7 +432,7 @@ private static class AnalyzingComparator implements Comparator { private final boolean hasPayloads; - public AnalyzingComparator(boolean hasPayloads) { + AnalyzingComparator(boolean hasPayloads) { this.hasPayloads = hasPayloads; } @@ -486,7 +486,7 @@ public int compare(BytesRef a, BytesRef b) { } } - /** Non-null if this sugggester created a temp dir, needed only during build */ + /** Non-null if this suggester created a temp dir, needed only during build */ private static FSDirectory tmpBuildDir; @SuppressForbidden(reason = "access temp directory for building index") @@ -1114,7 +1114,7 @@ private static final class SurfaceFormAndPayload implements Comparable windowStart && offset < windowEnd) { + innerStart = innerEnd; + innerEnd = windowEnd; + } else { + windowStart = innerStart = mainBreak.preceding(offset); + windowEnd = innerEnd = mainBreak.following(offset-1); + } + + if (innerEnd - innerStart > maxLen) { + // the current split is too big, + // so starting from the current term we try to find boundaries on the left first + if (offset - maxLen > innerStart) { + innerStart = Math.max(innerStart, + innerBreak.preceding(offset - maxLen)); + } + // and then we try to expand the passage to the right with the remaining size + int remaining = Math.max(0, maxLen - (offset - innerStart)); + if (offset + remaining < windowEnd) { + innerEnd = Math.min(windowEnd, + innerBreak.following(offset + remaining)); + } + } + lastPrecedingOffset = offset - 1; + return innerStart; + } + + /** + * Can be invoked only after a call to preceding(offset+1). + * See {@link FieldHighlighter} for usage. + */ + @Override + public int following(int offset) { + if (offset != lastPrecedingOffset || innerEnd == -1) { + throw new IllegalArgumentException("offset != lastPrecedingOffset: " + + "usage doesn't look like UnifiedHighlighter"); + } + return innerEnd; + } + + /** + * Returns a {@link BreakIterator#getSentenceInstance(Locale)} bounded to maxLen. + * Secondary boundaries are found using a {@link BreakIterator#getWordInstance(Locale)}. 
+ */ + public static BreakIterator getSentence(Locale locale, int maxLen) { + final BreakIterator sBreak = BreakIterator.getSentenceInstance(locale); + final BreakIterator wBreak = BreakIterator.getWordInstance(locale); + return new BoundedBreakIteratorScanner(sBreak, wBreak, maxLen); + } + + + @Override + public int current() { + // Returns the last offset of the current split + return this.innerEnd; + } + + @Override + public int first() { + throw new IllegalStateException("first() should not be called in this context"); + } + + @Override + public int next() { + throw new IllegalStateException("next() should not be called in this context"); + } + + @Override + public int last() { + throw new IllegalStateException("last() should not be called in this context"); + } + + @Override + public int next(int n) { + throw new IllegalStateException("next(n) should not be called in this context"); + } + + @Override + public int previous() { + throw new IllegalStateException("previous() should not be called in this context"); + } +} diff --git a/core/src/main/java/org/apache/lucene/search/uhighlight/CustomFieldHighlighter.java b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomFieldHighlighter.java new file mode 100644 index 0000000000000..915e7cc153128 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomFieldHighlighter.java @@ -0,0 +1,79 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.search.uhighlight; + +import java.text.BreakIterator; +import java.util.Locale; + +import static org.apache.lucene.search.uhighlight.CustomUnifiedHighlighter.MULTIVAL_SEP_CHAR; + +/** + * Custom {@link FieldHighlighter} that creates a single passage bounded to {@code noMatchSize} when + * no highlights were found. 
+ */ +class CustomFieldHighlighter extends FieldHighlighter { + private static final Passage[] EMPTY_PASSAGE = new Passage[0]; + + private final Locale breakIteratorLocale; + private final int noMatchSize; + private final String fieldValue; + + CustomFieldHighlighter(String field, FieldOffsetStrategy fieldOffsetStrategy, + Locale breakIteratorLocale, BreakIterator breakIterator, + PassageScorer passageScorer, int maxPassages, int maxNoHighlightPassages, + PassageFormatter passageFormatter, int noMatchSize, String fieldValue) { + super(field, fieldOffsetStrategy, breakIterator, passageScorer, maxPassages, + maxNoHighlightPassages, passageFormatter); + this.breakIteratorLocale = breakIteratorLocale; + this.noMatchSize = noMatchSize; + this.fieldValue = fieldValue; + } + + @Override + protected Passage[] getSummaryPassagesNoHighlight(int maxPassages) { + if (noMatchSize > 0) { + int pos = 0; + while (pos < fieldValue.length() && fieldValue.charAt(pos) == MULTIVAL_SEP_CHAR) { + pos ++; + } + if (pos < fieldValue.length()) { + int end = fieldValue.indexOf(MULTIVAL_SEP_CHAR, pos); + if (end == -1) { + end = fieldValue.length(); + } + if (noMatchSize+pos < end) { + BreakIterator bi = BreakIterator.getWordInstance(breakIteratorLocale); + bi.setText(fieldValue); + // Finds the next word boundary **after** noMatchSize. + end = bi.following(noMatchSize + pos); + if (end == BreakIterator.DONE) { + end = fieldValue.length(); + } + } + Passage passage = new Passage(); + passage.setScore(Float.NaN); + passage.setStartOffset(pos); + passage.setEndOffset(end); + return new Passage[]{passage}; + } + } + return EMPTY_PASSAGE; + } +} diff --git a/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPassageFormatter.java b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomPassageFormatter.java similarity index 78% rename from core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPassageFormatter.java rename to core/src/main/java/org/apache/lucene/search/uhighlight/CustomPassageFormatter.java index 889e7f741ed80..52eee559c6888 100644 --- a/core/src/main/java/org/apache/lucene/search/postingshighlight/CustomPassageFormatter.java +++ b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomPassageFormatter.java @@ -17,15 +17,15 @@ * under the License. 
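The `noMatchSize` handling above leans on `java.text.BreakIterator` so the leading snippet is never cut in the middle of a word. The same idea in isolation (a sketch, not code from this patch):

```java
import java.text.BreakIterator;
import java.util.Locale;

final class LeadingSnippetExample {
    /**
     * Returns roughly the first {@code noMatchSize} characters of {@code text},
     * extended to the next word boundary so words are never split.
     */
    static String leadingSnippet(String text, int noMatchSize, Locale locale) {
        if (noMatchSize <= 0 || text.isEmpty()) {
            return "";
        }
        int end = text.length();
        if (noMatchSize < end) {
            BreakIterator words = BreakIterator.getWordInstance(locale);
            words.setText(text);
            end = words.following(noMatchSize);   // first word boundary after noMatchSize
            if (end == BreakIterator.DONE) {
                end = text.length();
            }
        }
        return text.substring(0, end);
    }

    public static void main(String[] args) {
        // prints "The quick brown": the cut lands on the boundary following offset 10
        System.out.println(leadingSnippet("The quick brown fox jumps over the lazy dog", 10, Locale.ROOT));
    }
}
```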
*/ -package org.apache.lucene.search.postingshighlight; +package org.apache.lucene.search.uhighlight; import org.apache.lucene.search.highlight.Encoder; import org.elasticsearch.search.fetch.subphase.highlight.HighlightUtils; /** -Custom passage formatter that allows us to: -1) extract different snippets (instead of a single big string) together with their scores ({@link Snippet}) -2) use the {@link Encoder} implementations that are already used with the other highlighters + * Custom passage formatter that allows us to: + * 1) extract different snippets (instead of a single big string) together with their scores ({@link Snippet}) + * 2) use the {@link Encoder} implementations that are already used with the other highlighters */ public class CustomPassageFormatter extends PassageFormatter { @@ -46,10 +46,10 @@ public Snippet[] format(Passage[] passages, String content) { for (int j = 0; j < passages.length; j++) { Passage passage = passages[j]; StringBuilder sb = new StringBuilder(); - pos = passage.startOffset; - for (int i = 0; i < passage.numMatches; i++) { - int start = passage.matchStarts[i]; - int end = passage.matchEnds[i]; + pos = passage.getStartOffset(); + for (int i = 0; i < passage.getNumMatches(); i++) { + int start = passage.getMatchStarts()[i]; + int end = passage.getMatchEnds()[i]; // its possible to have overlapping terms if (start > pos) { append(sb, content, pos, start); @@ -62,7 +62,7 @@ public Snippet[] format(Passage[] passages, String content) { } } // its possible a "term" from the analyzer could span a sentence boundary. - append(sb, content, pos, Math.max(pos, passage.endOffset)); + append(sb, content, pos, Math.max(pos, passage.getEndOffset())); //we remove the paragraph separator if present at the end of the snippet (we used it as separator between values) if (sb.charAt(sb.length() - 1) == HighlightUtils.PARAGRAPH_SEPARATOR) { sb.deleteCharAt(sb.length() - 1); @@ -70,12 +70,12 @@ public Snippet[] format(Passage[] passages, String content) { sb.deleteCharAt(sb.length() - 1); } //and we trim the snippets too - snippets[j] = new Snippet(sb.toString().trim(), passage.score, passage.numMatches > 0); + snippets[j] = new Snippet(sb.toString().trim(), passage.getScore(), passage.getNumMatches() > 0); } return snippets; } - protected void append(StringBuilder dest, String content, int start, int end) { + private void append(StringBuilder dest, String content, int start, int end) { dest.append(encoder.encodeText(content.substring(start, end))); } } diff --git a/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java new file mode 100644 index 0000000000000..ebc13298202a6 --- /dev/null +++ b/core/src/main/java/org/apache/lucene/search/uhighlight/CustomUnifiedHighlighter.java @@ -0,0 +1,216 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.apache.lucene.search.uhighlight; + +import org.apache.lucene.analysis.Analyzer; +import org.apache.lucene.index.Term; +import org.apache.lucene.queries.CommonTermsQuery; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.search.IndexSearcher; +import org.apache.lucene.search.PrefixQuery; +import org.apache.lucene.search.Query; +import org.apache.lucene.search.TermQuery; +import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper; +import org.apache.lucene.search.spans.SpanNearQuery; +import org.apache.lucene.search.spans.SpanOrQuery; +import org.apache.lucene.search.spans.SpanQuery; +import org.apache.lucene.search.spans.SpanTermQuery; +import org.apache.lucene.util.BytesRef; +import org.apache.lucene.util.automaton.CharacterRunAutomaton; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.lucene.all.AllTermQuery; +import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; +import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery; +import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; + +import java.io.IOException; +import java.text.BreakIterator; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.Set; + +/** + * Subclass of the {@link UnifiedHighlighter} that works for a single field in a single document. + * Uses a custom {@link PassageFormatter}. Accepts field content as a constructor + * argument, given that loadings field value can be done reading from _source field. + * Supports using different {@link BreakIterator} to break the text into fragments. Considers every distinct field + * value as a discrete passage for highlighting (unless the whole content needs to be highlighted). + * Supports both returning empty snippets and non highlighted snippets when no highlighting can be performed. + */ +public class CustomUnifiedHighlighter extends UnifiedHighlighter { + public static final char MULTIVAL_SEP_CHAR = (char) 0; + private static final Snippet[] EMPTY_SNIPPET = new Snippet[0]; + + private final String fieldValue; + private final PassageFormatter passageFormatter; + private final BreakIterator breakIterator; + private final Locale breakIteratorLocale; + private final int noMatchSize; + + /** + * Creates a new instance of {@link CustomUnifiedHighlighter} + * + * @param analyzer the analyzer used for the field at index time, used for multi term queries internally + * @param passageFormatter our own {@link CustomPassageFormatter} + * which generates snippets in forms of {@link Snippet} objects + * @param breakIteratorLocale the {@link Locale} to use for dividing text into passages. + * If null {@link Locale#ROOT} is used + * @param breakIterator the {@link BreakIterator} to use for dividing text into passages. + * If null {@link BreakIterator#getSentenceInstance(Locale)} is used. 
+ * @param fieldValue the original field values delimited by MULTIVAL_SEP_CHAR + * @param noMatchSize The size of the text that should be returned when no highlighting can be performed + */ + public CustomUnifiedHighlighter(IndexSearcher searcher, + Analyzer analyzer, + PassageFormatter passageFormatter, + @Nullable Locale breakIteratorLocale, + @Nullable BreakIterator breakIterator, + String fieldValue, + int noMatchSize) { + super(searcher, analyzer); + this.breakIterator = breakIterator; + this.breakIteratorLocale = breakIteratorLocale == null ? Locale.ROOT : breakIteratorLocale; + this.passageFormatter = passageFormatter; + this.fieldValue = fieldValue; + this.noMatchSize = noMatchSize; + } + + /** + * Highlights terms extracted from the provided query within the content of the provided field name + */ + public Snippet[] highlightField(String field, Query query, int docId, int maxPassages) throws IOException { + Map fieldsAsObjects = super.highlightFieldsAsObjects(new String[]{field}, query, + new int[]{docId}, new int[]{maxPassages}); + Object[] snippetObjects = fieldsAsObjects.get(field); + if (snippetObjects != null) { + //one single document at a time + assert snippetObjects.length == 1; + Object snippetObject = snippetObjects[0]; + if (snippetObject != null && snippetObject instanceof Snippet[]) { + return (Snippet[]) snippetObject; + } + } + return EMPTY_SNIPPET; + } + + @Override + protected List loadFieldValues(String[] fields, DocIdSetIterator docIter, + int cacheCharsThreshold) throws IOException { + // we only highlight one field, one document at a time + return Collections.singletonList(new String[]{fieldValue}); + } + + @Override + protected BreakIterator getBreakIterator(String field) { + return breakIterator; + } + + @Override + protected PassageFormatter getFormatter(String field) { + return passageFormatter; + } + + @Override + protected FieldHighlighter getFieldHighlighter(String field, Query query, Set allTerms, int maxPassages) { + BytesRef[] terms = filterExtractedTerms(getFieldMatcher(field), allTerms); + Set highlightFlags = getFlags(field); + PhraseHelper phraseHelper = getPhraseHelper(field, query, highlightFlags); + CharacterRunAutomaton[] automata = getAutomata(field, query, highlightFlags); + OffsetSource offsetSource = getOptimizedOffsetSource(field, terms, phraseHelper, automata); + BreakIterator breakIterator = new SplittingBreakIterator(getBreakIterator(field), + UnifiedHighlighter.MULTIVAL_SEP_CHAR); + FieldOffsetStrategy strategy = + getOffsetStrategy(offsetSource, field, terms, phraseHelper, automata, highlightFlags); + return new CustomFieldHighlighter(field, strategy, breakIteratorLocale, breakIterator, + getScorer(field), maxPassages, (noMatchSize > 0 ? 1 : 0), getFormatter(field), noMatchSize, fieldValue); + } + + @Override + protected Collection preMultiTermQueryRewrite(Query query) { + return rewriteCustomQuery(query); + } + + @Override + protected Collection preSpanQueryRewrite(Query query) { + return rewriteCustomQuery(query); + } + + /** + * Translate custom queries in queries that are supported by the unified highlighter. 
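The rewrite below turns a multi-term prefix phrase such as `quick bro*` into span queries the unified highlighter can consume. For reference, a hand-built equivalent of that output using only stock Lucene span classes (field name and terms are illustrative):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.spans.SpanMultiTermQueryWrapper;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

final class PrefixPhraseAsSpansExample {
    static Query quickBroPrefixPhrase() {
        SpanQuery[] clauses = new SpanQuery[] {
            new SpanTermQuery(new Term("body", "quick")),                             // exact term at position 0
            new SpanMultiTermQueryWrapper<>(new PrefixQuery(new Term("body", "bro"))) // prefix expansion at the last position
        };
        return new SpanNearQuery(clauses, 0 /* slop */, true /* inOrder, since the original slop is 0 */);
    }
}
```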
+ */ + private Collection rewriteCustomQuery(Query query) { + if (query instanceof MultiPhrasePrefixQuery) { + MultiPhrasePrefixQuery mpq = (MultiPhrasePrefixQuery) query; + Term[][] terms = mpq.getTerms(); + int[] positions = mpq.getPositions(); + SpanQuery[] positionSpanQueries = new SpanQuery[positions.length]; + int sizeMinus1 = terms.length - 1; + for (int i = 0; i < positions.length; i++) { + SpanQuery[] innerQueries = new SpanQuery[terms[i].length]; + for (int j = 0; j < terms[i].length; j++) { + if (i == sizeMinus1) { + innerQueries[j] = new SpanMultiTermQueryWrapper(new PrefixQuery(terms[i][j])); + } else { + innerQueries[j] = new SpanTermQuery(terms[i][j]); + } + } + if (innerQueries.length > 1) { + positionSpanQueries[i] = new SpanOrQuery(innerQueries); + } else { + positionSpanQueries[i] = innerQueries[0]; + } + } + + if (positionSpanQueries.length == 1) { + return Collections.singletonList(positionSpanQueries[0]); + } + // sum position increments beyond 1 + int positionGaps = 0; + if (positions.length >= 2) { + // positions are in increasing order. max(0,...) is just a safeguard. + positionGaps = Math.max(0, positions[positions.length - 1] - positions[0] - positions.length + 1); + } + //if original slop is 0 then require inOrder + boolean inorder = (mpq.getSlop() == 0); + return Collections.singletonList(new SpanNearQuery(positionSpanQueries, + mpq.getSlop() + positionGaps, inorder)); + } else if (query instanceof CommonTermsQuery) { + CommonTermsQuery ctq = (CommonTermsQuery) query; + List tqs = new ArrayList<> (); + for (Term term : ctq.getTerms()) { + tqs.add(new TermQuery(term)); + } + return tqs; + } else if (query instanceof AllTermQuery) { + AllTermQuery atq = (AllTermQuery) query; + return Collections.singletonList(new TermQuery(atq.getTerm())); + } else if (query instanceof FunctionScoreQuery) { + return Collections.singletonList(((FunctionScoreQuery) query).getSubQuery()); + } else if (query instanceof FiltersFunctionScoreQuery) { + return Collections.singletonList(((FiltersFunctionScoreQuery) query).getSubQuery()); + } else { + return null; + } + } +} diff --git a/core/src/main/java/org/apache/lucene/search/postingshighlight/Snippet.java b/core/src/main/java/org/apache/lucene/search/uhighlight/Snippet.java similarity index 90% rename from core/src/main/java/org/apache/lucene/search/postingshighlight/Snippet.java rename to core/src/main/java/org/apache/lucene/search/uhighlight/Snippet.java index f3bfa1b9c652a..b7490c55feffa 100644 --- a/core/src/main/java/org/apache/lucene/search/postingshighlight/Snippet.java +++ b/core/src/main/java/org/apache/lucene/search/uhighlight/Snippet.java @@ -17,11 +17,11 @@ * under the License. */ -package org.apache.lucene.search.postingshighlight; +package org.apache.lucene.search.uhighlight; /** * Represents a scored highlighted snippet. - * It's our own arbitrary object that we get back from the postings highlighter when highlighting a document. + * It's our own arbitrary object that we get back from the unified highlighter when highlighting a document. * Every snippet contains its formatted text and its score. * The score is needed in case we want to sort snippets by score, they get sorted by position in the text by default. 
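End to end, these snippets come out of `CustomUnifiedHighlighter#highlightField`. A hedged sketch of wiring it up for a single document; the `CustomPassageFormatter` constructor arguments (pre/post tags plus an `Encoder`) are an assumption, since that constructor is not part of the hunks shown here:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.DefaultEncoder;
import org.apache.lucene.search.uhighlight.BoundedBreakIteratorScanner;
import org.apache.lucene.search.uhighlight.CustomPassageFormatter;
import org.apache.lucene.search.uhighlight.CustomUnifiedHighlighter;
import org.apache.lucene.search.uhighlight.Snippet;

import java.io.IOException;
import java.util.Locale;

final class HighlightOneFieldExample {
    static Snippet[] highlight(IndexSearcher searcher, Analyzer analyzer, Query query,
                               int docId, String fieldValue) throws IOException {
        CustomUnifiedHighlighter highlighter = new CustomUnifiedHighlighter(
                searcher,
                analyzer,
                new CustomPassageFormatter("<em>", "</em>", new DefaultEncoder()),  // assumed pre/post tag + encoder ctor
                Locale.ROOT,                                                        // locale for passage break iterators
                BoundedBreakIteratorScanner.getSentence(Locale.ROOT, 100),          // sentence passages capped at ~100 chars
                fieldValue,                                                         // field content, values joined by MULTIVAL_SEP_CHAR
                0);                                                                 // noMatchSize: return nothing when there is no match
        return highlighter.highlightField("body", query, docId, 5);                 // up to 5 passages for the "body" field
    }
}
```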
*/ diff --git a/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java b/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java index f58d4b4742485..d6fb9b808ebd4 100644 --- a/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java +++ b/core/src/main/java/org/apache/lucene/search/vectorhighlight/CustomFieldQuery.java @@ -30,11 +30,11 @@ import org.apache.lucene.search.Query; import org.apache.lucene.search.SynonymQuery; import org.apache.lucene.search.TermQuery; -import org.apache.lucene.search.join.ToParentBlockJoinQuery; import org.apache.lucene.search.spans.SpanTermQuery; import org.elasticsearch.common.lucene.search.MultiPhrasePrefixQuery; import org.elasticsearch.common.lucene.search.function.FiltersFunctionScoreQuery; import org.elasticsearch.common.lucene.search.function.FunctionScoreQuery; +import org.elasticsearch.index.search.ESToParentBlockJoinQuery; import java.io.IOException; import java.util.Collection; @@ -77,8 +77,8 @@ void flatten(Query sourceQuery, IndexReader reader, Collection flatQuerie } else if (sourceQuery instanceof BlendedTermQuery) { final BlendedTermQuery blendedTermQuery = (BlendedTermQuery) sourceQuery; flatten(blendedTermQuery.rewrite(reader), reader, flatQueries, boost); - } else if (sourceQuery instanceof ToParentBlockJoinQuery) { - ToParentBlockJoinQuery blockJoinQuery = (ToParentBlockJoinQuery) sourceQuery; + } else if (sourceQuery instanceof ESToParentBlockJoinQuery) { + ESToParentBlockJoinQuery blockJoinQuery = (ESToParentBlockJoinQuery) sourceQuery; flatten(blockJoinQuery.getChildQuery(), reader, flatQueries, boost); } else if (sourceQuery instanceof BoostingQuery) { BoostingQuery boostingQuery = (BoostingQuery) sourceQuery; diff --git a/core/src/main/java/org/apache/lucene/spatial/geopoint/search/XGeoPointDistanceRangeQuery.java b/core/src/main/java/org/apache/lucene/spatial/geopoint/search/XGeoPointDistanceRangeQuery.java deleted file mode 100644 index 3cf290e035ea8..0000000000000 --- a/core/src/main/java/org/apache/lucene/spatial/geopoint/search/XGeoPointDistanceRangeQuery.java +++ /dev/null @@ -1,124 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.lucene.spatial.geopoint.search; - -import org.apache.lucene.index.IndexReader; -import org.apache.lucene.search.BooleanClause; -import org.apache.lucene.search.BooleanQuery; -import org.apache.lucene.search.Query; -import org.apache.lucene.spatial.geopoint.document.GeoPointField.TermEncoding; - -/** Implements a point distance range query on a GeoPoint field. 
This is based on - * {@code org.apache.lucene.spatial.geopoint.search.GeoPointDistanceQuery} and is implemented using a - * {@code org.apache.lucene.search.BooleanClause.MUST_NOT} clause to exclude any points that fall within - * minRadiusMeters from the provided point. - *

- * NOTE: this query does not correctly support multi-value docs (see: https://issues.apache.org/jira/browse/LUCENE-7126) - *
- * TODO: remove this per ISSUE #17658 - **/ -public final class XGeoPointDistanceRangeQuery extends GeoPointDistanceQuery { - /** minimum distance range (in meters) from lat, lon center location, maximum is inherited */ - protected final double minRadiusMeters; - - /** - * Constructs a query for all {@link org.apache.lucene.spatial.geopoint.document.GeoPointField} types within a minimum / maximum - * distance (in meters) range from a given point - */ - public XGeoPointDistanceRangeQuery(final String field, final double centerLat, final double centerLon, - final double minRadiusMeters, final double maxRadiusMeters) { - this(field, TermEncoding.PREFIX, centerLat, centerLon, minRadiusMeters, maxRadiusMeters); - } - - /** - * Constructs a query for all {@link org.apache.lucene.spatial.geopoint.document.GeoPointField} types within a minimum / maximum - * distance (in meters) range from a given point. Accepts an optional - * {@link org.apache.lucene.spatial.geopoint.document.GeoPointField.TermEncoding} - */ - public XGeoPointDistanceRangeQuery(final String field, final TermEncoding termEncoding, final double centerLat, final double centerLon, - final double minRadiusMeters, final double maxRadius) { - super(field, termEncoding, centerLat, centerLon, maxRadius); - this.minRadiusMeters = minRadiusMeters; - } - - @Override - public Query rewrite(IndexReader reader) { - Query q = super.rewrite(reader); - if (minRadiusMeters == 0.0) { - return q; - } - - // add an exclusion query - BooleanQuery.Builder bqb = new BooleanQuery.Builder(); - - // create a new exclusion query - GeoPointDistanceQuery exclude = new GeoPointDistanceQuery(field, termEncoding, centerLat, centerLon, minRadiusMeters); - // full map search -// if (radiusMeters >= GeoProjectionUtils.SEMIMINOR_AXIS) { -// bqb.add(new BooleanClause(new GeoPointInBBoxQuery(this.field, -180.0, -90.0, 180.0, 90.0), BooleanClause.Occur.MUST)); -// } else { - bqb.add(new BooleanClause(q, BooleanClause.Occur.MUST)); -// } - bqb.add(new BooleanClause(exclude, BooleanClause.Occur.MUST_NOT)); - - return bqb.build(); - } - - @Override - public String toString(String field) { - final StringBuilder sb = new StringBuilder(); - sb.append(getClass().getSimpleName()); - sb.append(':'); - if (!this.field.equals(field)) { - sb.append(" field="); - sb.append(this.field); - sb.append(':'); - } - return sb.append( " Center: [") - .append(centerLat) - .append(',') - .append(centerLon) - .append(']') - .append(" From Distance: ") - .append(minRadiusMeters) - .append(" m") - .append(" To Distance: ") - .append(radiusMeters) - .append(" m") - .append(" Lower Left: [") - .append(minLat) - .append(',') - .append(minLon) - .append(']') - .append(" Upper Right: [") - .append(maxLat) - .append(',') - .append(maxLon) - .append("]") - .toString(); - } - - /** getter method for minimum distance */ - public double getMinRadiusMeters() { - return this.minRadiusMeters; - } - - /** getter method for maximum distance */ - public double getMaxRadiusMeters() { - return this.radiusMeters; - } -} diff --git a/core/src/main/java/org/elasticsearch/Assertions.java b/core/src/main/java/org/elasticsearch/Assertions.java new file mode 100644 index 0000000000000..8783101db0a88 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/Assertions.java @@ -0,0 +1,47 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch; + +/** + * Provides a static final field that can be used to check if assertions are enabled. Since this field might be used elsewhere to check if + * assertions are enabled, if you are running with assertions enabled for specific packages or classes, you should enable assertions on this + * class too (e.g., {@code -ea org.elasticsearch.Assertions -ea org.elasticsearch.cluster.service.MasterService}). + */ +public final class Assertions { + + private Assertions() { + + } + + public static final boolean ENABLED; + + static { + boolean enabled = false; + /* + * If assertions are enabled, the following line will be evaluated and enabled will have the value true, otherwise when assertions + * are disabled enabled will have the value false. + */ + // noinspection ConstantConditions,AssertWithSideEffects + assert enabled = true; + // noinspection ConstantConditions + ENABLED = enabled; + } + +} diff --git a/core/src/main/java/org/elasticsearch/Build.java b/core/src/main/java/org/elasticsearch/Build.java index 25da5f281665f..bef9fafe3ca70 100644 --- a/core/src/main/java/org/elasticsearch/Build.java +++ b/core/src/main/java/org/elasticsearch/Build.java @@ -19,6 +19,7 @@ package org.elasticsearch; +import org.elasticsearch.common.io.FileSystemUtils; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -42,9 +43,11 @@ public class Build { final String date; final boolean isSnapshot; + final String esPrefix = "elasticsearch-" + Version.CURRENT; final URL url = getElasticsearchCodebase(); - if (url.toString().endsWith(".jar")) { - try (JarInputStream jar = new JarInputStream(url.openStream())) { + final String urlStr = url.toString(); + if (urlStr.startsWith("file:/") && (urlStr.endsWith(esPrefix + ".jar") || urlStr.endsWith(esPrefix + "-SNAPSHOT.jar"))) { + try (JarInputStream jar = new JarInputStream(FileSystemUtils.openFileURLStream(url))) { Manifest manifest = jar.getManifest(); shortHash = manifest.getMainAttributes().getValue("Change"); date = manifest.getMainAttributes().getValue("Build-Date"); @@ -53,7 +56,7 @@ public class Build { throw new RuntimeException(e); } } else { - // not running from a jar (unit tests, IDE) + // not running from the official elasticsearch jar file (unit tests, IDE, uber client jar, shadiness) shortHash = "Unknown"; date = "Unknown"; isSnapshot = true; @@ -79,10 +82,10 @@ static URL getElasticsearchCodebase() { return Build.class.getProtectionDomain().getCodeSource().getLocation(); } - private String shortHash; - private String date; + private final String shortHash; + private final String date; - Build(String shortHash, String date, boolean isSnapshot) { + public Build(String shortHash, String date, boolean isSnapshot) { this.shortHash = shortHash; this.date = date; this.isSnapshot = isSnapshot; diff --git a/core/src/main/java/org/elasticsearch/ElasticsearchException.java 
b/core/src/main/java/org/elasticsearch/ElasticsearchException.java index bd3ea6797dbc5..7c20ed7d2c482 100644 --- a/core/src/main/java/org/elasticsearch/ElasticsearchException.java +++ b/core/src/main/java/org/elasticsearch/ElasticsearchException.java @@ -21,6 +21,9 @@ import org.elasticsearch.action.support.replication.ReplicationOperation; import org.elasticsearch.cluster.action.shard.ShardStateAction; +import org.elasticsearch.common.CheckedFunction; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -34,45 +37,49 @@ import org.elasticsearch.transport.TcpTransport; import java.io.IOException; +import java.util.ArrayList; import java.util.Arrays; import java.util.Collections; import java.util.HashMap; +import java.util.Iterator; import java.util.List; import java.util.Map; import java.util.Set; import java.util.stream.Collectors; +import static java.util.Collections.emptyMap; +import static java.util.Collections.singletonMap; import static java.util.Collections.unmodifiableMap; import static org.elasticsearch.cluster.metadata.IndexMetaData.INDEX_UUID_NA_VALUE; import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; -import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownField; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureFieldName; /** * A base class for all elasticsearch exceptions. */ public class ElasticsearchException extends RuntimeException implements ToXContent, Writeable { - static final Version UNKNOWN_VERSION_ADDED = Version.fromId(0); + private static final Version UNKNOWN_VERSION_ADDED = Version.fromId(0); /** - * Passed in the {@link Params} of {@link #toXContent(XContentBuilder, org.elasticsearch.common.xcontent.ToXContent.Params, Throwable)} + * Passed in the {@link Params} of {@link #generateThrowableXContent(XContentBuilder, Params, Throwable)} * to control if the {@code caused_by} element should render. Unlike most parameters to {@code toXContent} methods this parameter is * internal only and not available as a URL parameter. */ - public static final String REST_EXCEPTION_SKIP_CAUSE = "rest.exception.cause.skip"; + private static final String REST_EXCEPTION_SKIP_CAUSE = "rest.exception.cause.skip"; /** - * Passed in the {@link Params} of {@link #toXContent(XContentBuilder, org.elasticsearch.common.xcontent.ToXContent.Params, Throwable)} + * Passed in the {@link Params} of {@link #generateThrowableXContent(XContentBuilder, Params, Throwable)} * to control if the {@code stack_trace} element should render. Unlike most parameters to {@code toXContent} methods this parameter is * internal only and not available as a URL parameter. Use the {@code error_trace} parameter instead. 
*/ public static final String REST_EXCEPTION_SKIP_STACK_TRACE = "rest.exception.stacktrace.skip"; public static final boolean REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT = true; - public static final boolean REST_EXCEPTION_SKIP_CAUSE_DEFAULT = false; - private static final String INDEX_HEADER_KEY = "es.index"; - private static final String INDEX_HEADER_KEY_UUID = "es.index_uuid"; - private static final String SHARD_HEADER_KEY = "es.shard"; - private static final String RESOURCE_HEADER_TYPE_KEY = "es.resource.type"; - private static final String RESOURCE_HEADER_ID_KEY = "es.resource.id"; + private static final boolean REST_EXCEPTION_SKIP_CAUSE_DEFAULT = false; + private static final String INDEX_METADATA_KEY = "es.index"; + private static final String INDEX_METADATA_KEY_UUID = "es.index_uuid"; + private static final String SHARD_METADATA_KEY = "es.shard"; + private static final String RESOURCE_METADATA_TYPE_KEY = "es.resource.type"; + private static final String RESOURCE_METADATA_ID_KEY = "es.resource.id"; private static final String TYPE = "type"; private static final String REASON = "reason"; @@ -82,8 +89,9 @@ public class ElasticsearchException extends RuntimeException implements ToXConte private static final String ERROR = "error"; private static final String ROOT_CAUSE = "root_cause"; - private static final Map> ID_TO_SUPPLIER; + private static final Map> ID_TO_SUPPLIER; private static final Map, ElasticsearchExceptionHandle> CLASS_TO_ELASTICSEARCH_EXCEPTION_HANDLE; + private final Map> metadata = new HashMap<>(); private final Map> headers = new HashMap<>(); /** @@ -125,14 +133,56 @@ public ElasticsearchException(StreamInput in) throws IOException { super(in.readOptionalString(), in.readException()); readStackTrace(this, in); headers.putAll(in.readMapOfLists(StreamInput::readString, StreamInput::readString)); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + metadata.putAll(in.readMapOfLists(StreamInput::readString, StreamInput::readString)); + } else { + for (Iterator>> iterator = headers.entrySet().iterator(); iterator.hasNext(); ) { + Map.Entry> header = iterator.next(); + if (header.getKey().startsWith("es.")) { + metadata.put(header.getKey(), header.getValue()); + iterator.remove(); + } + } + } } /** - * Adds a new header with the given key. - * This method will replace existing header if a header with the same key already exists + * Adds a new piece of metadata with the given key. + * If the provided key is already present, the corresponding metadata will be replaced */ - public void addHeader(String key, String... value) { - this.headers.put(key, Arrays.asList(value)); + public void addMetadata(String key, String... values) { + addMetadata(key, Arrays.asList(values)); + } + + /** + * Adds a new piece of metadata with the given key. + * If the provided key is already present, the corresponding metadata will be replaced + */ + public void addMetadata(String key, List values) { + //we need to enforce this otherwise bw comp doesn't work properly, as "es." was the previous criteria to split headers in two sets + if (key.startsWith("es.") == false) { + throw new IllegalArgumentException("exception metadata must start with [es.], found [" + key + "] instead"); + } + this.metadata.put(key, values); + } + + /** + * Returns a set of all metadata keys on this exception + */ + public Set getMetadataKeys() { + return metadata.keySet(); + } + + /** + * Returns the list of metadata values for the given key or {@code null} if no metadata for the + * given key exists. 
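The hunks above split what used to be a single catch-all headers map into two maps with an explicit key contract: addMetadata only accepts keys that start with "es." (the old prefix that previously distinguished the two kinds of entries, kept for wire backwards compatibility), while addHeader now rejects such keys. A hedged usage sketch of that contract, assuming an ElasticsearchException instance built with the methods shown in this hunk:

ElasticsearchException e = new ElasticsearchException("something went wrong");
e.addMetadata("es.index", "my_index");     // ok: metadata keys must start with "es."
e.addHeader("X-Request-Id", "abc123");     // ok: header keys must not start with "es."
// e.addMetadata("index", "my_index");     // would throw IllegalArgumentException
// e.addHeader("es.index", "my_index");    // would throw IllegalArgumentException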
+ */ + public List getMetadata(String key) { + return metadata.get(key); + } + + protected Map> getMetadata() { + return metadata; } /** @@ -140,9 +190,20 @@ public void addHeader(String key, String... value) { * This method will replace existing header if a header with the same key already exists */ public void addHeader(String key, List value) { + //we need to enforce this otherwise bw comp doesn't work properly, as "es." was the previous criteria to split headers in two sets + if (key.startsWith("es.")) { + throw new IllegalArgumentException("exception headers must not start with [es.], found [" + key + "] instead"); + } this.headers.put(key, value); } + /** + * Adds a new header with the given key. + * This method will replace existing header if a header with the same key already exists + */ + public void addHeader(String key, String... value) { + addHeader(key, Arrays.asList(value)); + } /** * Returns a set of all header keys on this exception @@ -152,13 +213,17 @@ public Set getHeaderKeys() { } /** - * Returns the list of header values for the given key or {@code null} if not header for the + * Returns the list of header values for the given key or {@code null} if no header for the * given key exists. */ public List getHeader(String key) { return headers.get(key); } + protected Map> getHeaders() { + return headers; + } + /** * Returns the rest status code associated with this exception. */ @@ -219,11 +284,19 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(this.getMessage()); out.writeException(this.getCause()); writeStackTraces(this, out); - out.writeMapOfLists(headers, StreamOutput::writeString, StreamOutput::writeString); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + out.writeMapOfLists(headers, StreamOutput::writeString, StreamOutput::writeString); + out.writeMapOfLists(metadata, StreamOutput::writeString, StreamOutput::writeString); + } else { + HashMap> finalHeaders = new HashMap<>(headers.size() + metadata.size()); + finalHeaders.putAll(headers); + finalHeaders.putAll(metadata); + out.writeMapOfLists(finalHeaders, StreamOutput::writeString, StreamOutput::writeString); + } } public static ElasticsearchException readException(StreamInput input, int id) throws IOException { - FunctionThatThrowsIOException elasticsearchException = ID_TO_SUPPLIER.get(id); + CheckedFunction elasticsearchException = ID_TO_SUPPLIER.get(id); if (elasticsearchException == null) { throw new IllegalStateException("unknown exception for id: " + id); } @@ -256,64 +329,51 @@ public static int getId(Class exception) { public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { Throwable ex = ExceptionsHelper.unwrapCause(this); if (ex != this) { - toXContent(builder, params, this); + generateThrowableXContent(builder, params, this); } else { - builder.field(TYPE, getExceptionName()); - builder.field(REASON, getMessage()); - for (String key : headers.keySet()) { - if (key.startsWith("es.")) { - List values = headers.get(key); - xContentHeader(builder, key.substring("es.".length()), values); - } - } - innerToXContent(builder, params); - renderHeader(builder, params); - if (params.paramAsBoolean(REST_EXCEPTION_SKIP_STACK_TRACE, REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT) == false) { - builder.field(STACK_TRACE, ExceptionsHelper.stackTrace(this)); - } + innerToXContent(builder, params, this, getExceptionName(), getMessage(), headers, metadata, getCause()); } return builder; } - /** - * Renders additional per exception information into 
the xcontent - */ - protected void innerToXContent(XContentBuilder builder, Params params) throws IOException { - causeToXContent(builder, params); - } + protected static void innerToXContent(XContentBuilder builder, Params params, + Throwable throwable, String type, String message, Map> headers, + Map> metadata, Throwable cause) throws IOException { + builder.field(TYPE, type); + builder.field(REASON, message); - /** - * Renders a cause exception as xcontent - */ - protected void causeToXContent(XContentBuilder builder, Params params) throws IOException { - final Throwable cause = getCause(); - if (cause != null && params.paramAsBoolean(REST_EXCEPTION_SKIP_CAUSE, REST_EXCEPTION_SKIP_CAUSE_DEFAULT) == false) { - builder.field(CAUSED_BY); - builder.startObject(); - toXContent(builder, params, cause); - builder.endObject(); + for (Map.Entry> entry : metadata.entrySet()) { + headerToXContent(builder, entry.getKey().substring("es.".length()), entry.getValue()); } - } - protected final void renderHeader(XContentBuilder builder, Params params) throws IOException { - boolean hasHeader = false; - for (String key : headers.keySet()) { - if (key.startsWith("es.")) { - continue; - } - if (hasHeader == false) { - builder.startObject(HEADER); - hasHeader = true; + if (throwable instanceof ElasticsearchException) { + ElasticsearchException exception = (ElasticsearchException) throwable; + exception.metadataToXContent(builder, params); + } + + if (params.paramAsBoolean(REST_EXCEPTION_SKIP_CAUSE, REST_EXCEPTION_SKIP_CAUSE_DEFAULT) == false) { + if (cause != null) { + builder.field(CAUSED_BY); + builder.startObject(); + generateThrowableXContent(builder, params, cause); + builder.endObject(); } - List values = headers.get(key); - xContentHeader(builder, key, values); } - if (hasHeader) { + + if (headers.isEmpty() == false) { + builder.startObject(HEADER); + for (Map.Entry> entry : headers.entrySet()) { + headerToXContent(builder, entry.getKey(), entry.getValue()); + } builder.endObject(); } + + if (params.paramAsBoolean(REST_EXCEPTION_SKIP_STACK_TRACE, REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT) == false) { + builder.field(STACK_TRACE, ExceptionsHelper.stackTrace(throwable)); + } } - private void xContentHeader(XContentBuilder builder, String key, List values) throws IOException { + private static void headerToXContent(XContentBuilder builder, String key, List values) throws IOException { if (values != null && values.isEmpty() == false) { if (values.size() == 1) { builder.field(key, values.get(0)); @@ -328,25 +388,9 @@ private void xContentHeader(XContentBuilder builder, String key, List va } /** - * Static toXContent helper method that also renders non {@link org.elasticsearch.ElasticsearchException} instances as XContent. 
+ * Renders additional per exception information into the XContent */ - public static void toXContent(XContentBuilder builder, Params params, Throwable ex) throws IOException { - ex = ExceptionsHelper.unwrapCause(ex); - if (ex instanceof ElasticsearchException) { - ((ElasticsearchException) ex).toXContent(builder, params); - } else { - builder.field(TYPE, getExceptionName(ex)); - builder.field(REASON, ex.getMessage()); - if (ex.getCause() != null) { - builder.field(CAUSED_BY); - builder.startObject(); - toXContent(builder, params, ex.getCause()); - builder.endObject(); - } - if (params.paramAsBoolean(REST_EXCEPTION_SKIP_STACK_TRACE, REST_EXCEPTION_SKIP_STACK_TRACE_DEFAULT) == false) { - builder.field(STACK_TRACE, ExceptionsHelper.stackTrace(ex)); - } - } + protected void metadataToXContent(XContentBuilder builder, Params params) throws IOException { } /** @@ -359,14 +403,23 @@ public static void toXContent(XContentBuilder builder, Params params, Throwable public static ElasticsearchException fromXContent(XContentParser parser) throws IOException { XContentParser.Token token = parser.nextToken(); ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + return innerFromXContent(parser, false); + } + + private static ElasticsearchException innerFromXContent(XContentParser parser, boolean parseRootCauses) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); String type = null, reason = null, stack = null; ElasticsearchException cause = null; - Map headers = new HashMap<>(); + Map> metadata = new HashMap<>(); + Map> headers = new HashMap<>(); + List rootCauses = new ArrayList<>(); - do { + for (; token == XContentParser.Token.FIELD_NAME; token = parser.nextToken()) { String currentFieldName = parser.currentName(); token = parser.nextToken(); + if (token.isValue()) { if (TYPE.equals(currentFieldName)) { type = parser.text(); @@ -374,36 +427,173 @@ public static ElasticsearchException fromXContent(XContentParser parser) throws reason = parser.text(); } else if (STACK_TRACE.equals(currentFieldName)) { stack = parser.text(); - } else { - // Everything else is considered as a header - headers.put(currentFieldName, parser.text()); + } else if (token == XContentParser.Token.VALUE_STRING) { + metadata.put(currentFieldName, Collections.singletonList(parser.text())); } } else if (token == XContentParser.Token.START_OBJECT) { if (CAUSED_BY.equals(currentFieldName)) { cause = fromXContent(parser); } else if (HEADER.equals(currentFieldName)) { - headers.putAll(parser.map()); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else { + List values = headers.getOrDefault(currentFieldName, new ArrayList<>()); + if (token == XContentParser.Token.VALUE_STRING) { + values.add(parser.text()); + } else if (token == XContentParser.Token.START_ARRAY) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token == XContentParser.Token.VALUE_STRING) { + values.add(parser.text()); + } else { + parser.skipChildren(); + } + } + } else if (token == XContentParser.Token.START_OBJECT) { + parser.skipChildren(); + } + headers.put(currentFieldName, values); + } + } } else { - throwUnknownField(currentFieldName, parser.getTokenLocation()); + // Any additional metadata object added by the metadataToXContent method is ignored + // and skipped, 
so that the parser does not fail on unknown fields. The parser only + // support metadata key-pairs and metadata arrays of values. + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (parseRootCauses && ROOT_CAUSE.equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + rootCauses.add(fromXContent(parser)); + } + } else { + // Parse the array and add each item to the corresponding list of metadata. + // Arrays of objects are not supported yet and just ignored and skipped. + List values = new ArrayList<>(); + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token == XContentParser.Token.VALUE_STRING) { + values.add(parser.text()); + } else { + parser.skipChildren(); + } + } + if (values.size() > 0) { + if (metadata.containsKey(currentFieldName)) { + values.addAll(metadata.get(currentFieldName)); + } + metadata.put(currentFieldName, values); + } } } - } while ((token = parser.nextToken()) == XContentParser.Token.FIELD_NAME); + } - StringBuilder message = new StringBuilder("Elasticsearch exception ["); - message.append(TYPE).append('=').append(type).append(", "); - message.append(REASON).append('=').append(reason); - if (stack != null) { - message.append(", ").append(STACK_TRACE).append('=').append(stack); + ElasticsearchException e = new ElasticsearchException(buildMessage(type, reason, stack), cause); + for (Map.Entry> entry : metadata.entrySet()) { + //subclasses can print out additional metadata through the metadataToXContent method. Simple key-value pairs will be + //parsed back and become part of this metadata set, while objects and arrays are not supported when parsing back. + //Those key-value pairs become part of the metadata set and inherit the "es." prefix as that is currently required + //by addMetadata. The prefix will get stripped out when printing metadata out so it will be effectively invisible. + //TODO move subclasses that print out simple metadata to using addMetadata directly and support also numbers and booleans. + //TODO rename metadataToXContent and have only SearchPhaseExecutionException use it, which prints out complex objects + e.addMetadata("es." + entry.getKey(), entry.getValue()); + } + for (Map.Entry> header : headers.entrySet()) { + e.addHeader(header.getKey(), header.getValue()); } - message.append(']'); - ElasticsearchException e = new ElasticsearchException(message.toString(), cause); - for (Map.Entry header : headers.entrySet()) { - e.addHeader(header.getKey(), String.valueOf(header.getValue())); + // Adds root causes as suppressed exception. This way they are not lost + // after parsing and can be retrieved using getSuppressed() method. + for (ElasticsearchException rootCause : rootCauses) { + e.addSuppressed(rootCause); } return e; } + /** + * Static toXContent helper method that renders {@link org.elasticsearch.ElasticsearchException} or {@link Throwable} instances + * as XContent, delegating the rendering to {@link #toXContent(XContentBuilder, Params)} + * or {@link #innerToXContent(XContentBuilder, Params, Throwable, String, String, Map, Map, Throwable)}. + * + * This method is usually used when the {@link Throwable} is rendered as a part of another XContent object, and its result can + * be parsed back using the {@link #fromXContent(XContentParser)} method. 
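Subclasses can still render extra, exception-specific fields through the metadataToXContent hook added above; as the parsing comments note, only simple key/value pairs and string arrays survive a fromXContent round trip, while nested objects are skipped. A hypothetical subclass illustrating that extension point (the class name and field are invented for illustration):

import java.io.IOException;

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;

public class ShardTimeoutException extends ElasticsearchException {
    private final long waitedMillis;

    public ShardTimeoutException(String msg, long waitedMillis) {
        super(msg);
        this.waitedMillis = waitedMillis;
    }

    @Override
    protected void metadataToXContent(XContentBuilder builder, ToXContent.Params params) throws IOException {
        // rendered inside the exception body next to "type" and "reason"
        builder.field("waited_millis", waitedMillis);
    }
}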
+ */ + public static void generateThrowableXContent(XContentBuilder builder, Params params, Throwable t) throws IOException { + t = ExceptionsHelper.unwrapCause(t); + + if (t instanceof ElasticsearchException) { + ((ElasticsearchException) t).toXContent(builder, params); + } else { + innerToXContent(builder, params, t, getExceptionName(t), t.getMessage(), emptyMap(), emptyMap(), t.getCause()); + } + } + + /** + * Render any exception as a xcontent, encapsulated within a field or object named "error". The level of details that are rendered + * depends on the value of the "detailed" parameter: when it's false only a simple message based on the type and message of the + * exception is rendered. When it's true all detail are provided including guesses root causes, cause and potentially stack + * trace. + * + * This method is usually used when the {@link Exception} is rendered as a full XContent object, and its output can be parsed + * by the {@link #failureFromXContent(XContentParser)} method. + */ + public static void generateFailureXContent(XContentBuilder builder, Params params, @Nullable Exception e, boolean detailed) + throws IOException { + // No exception to render as an error + if (e == null) { + builder.field(ERROR, "unknown"); + return; + } + + // Render the exception with a simple message + if (detailed == false) { + String message = "No ElasticsearchException found"; + Throwable t = e; + for (int counter = 0; counter < 10 && t != null; counter++) { + if (t instanceof ElasticsearchException) { + message = t.getClass().getSimpleName() + "[" + t.getMessage() + "]"; + break; + } + t = t.getCause(); + } + builder.field(ERROR, message); + return; + } + + // Render the exception with all details + final ElasticsearchException[] rootCauses = ElasticsearchException.guessRootCauses(e); + builder.startObject(ERROR); + { + builder.startArray(ROOT_CAUSE); + for (ElasticsearchException rootCause : rootCauses) { + builder.startObject(); + rootCause.toXContent(builder, new DelegatingMapParams(singletonMap(REST_EXCEPTION_SKIP_CAUSE, "true"), params)); + builder.endObject(); + } + builder.endArray(); + } + generateThrowableXContent(builder, params, e); + builder.endObject(); + } + + /** + * Parses the output of {@link #generateFailureXContent(XContentBuilder, Params, Exception, boolean)} + */ + public static ElasticsearchException failureFromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureFieldName(parser, token, ERROR); + + token = parser.nextToken(); + if (token.isValue()) { + return new ElasticsearchException(buildMessage("exception", parser.text(), null)); + } + + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + token = parser.nextToken(); + + // Root causes are parsed in the innerFromXContent() and are added as suppressed exceptions. 
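generateFailureXContent above becomes the single entry point for rendering an exception as the body of an error response: with detailed=false it emits a one-line error string, with detailed=true it emits an error object containing a root_cause array (rendered with the cause skipped) followed by the full type/reason/caused_by tree. A hedged sketch of driving it directly; the jsonBuilder helper and the shape shown in the trailing comment are assumptions for illustration:

import java.io.IOException;

import org.elasticsearch.ElasticsearchException;
import org.elasticsearch.common.xcontent.ToXContent;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

class FailureRenderingSketch {
    static XContentBuilder render(Exception e) throws IOException {
        XContentBuilder builder = XContentFactory.jsonBuilder();
        builder.startObject();
        // detailed=true: root_cause array plus the full exception tree
        ElasticsearchException.generateFailureXContent(builder, ToXContent.EMPTY_PARAMS, e, true);
        builder.endObject();
        // roughly: {"error":{"root_cause":[{"type":"...","reason":"..."}],"type":"...","reason":"...","caused_by":{...}}}
        return builder;
    }
}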
+ return innerFromXContent(parser, true); + } + /** * Returns the root cause of this exception or multiple if different shards caused different exceptions */ @@ -449,12 +639,23 @@ public static String getExceptionName(Throwable ex) { return toUnderscoreCase(simpleName); } + static String buildMessage(String type, String reason, String stack) { + StringBuilder message = new StringBuilder("Elasticsearch exception ["); + message.append(TYPE).append('=').append(type).append(", "); + message.append(REASON).append('=').append(reason); + if (stack != null) { + message.append(", ").append(STACK_TRACE).append('=').append(stack); + } + message.append(']'); + return message.toString(); + } + @Override public String toString() { StringBuilder builder = new StringBuilder(); - if (headers.containsKey(INDEX_HEADER_KEY)) { + if (metadata.containsKey(INDEX_METADATA_KEY)) { builder.append(getIndex()); - if (headers.containsKey(SHARD_HEADER_KEY)) { + if (metadata.containsKey(SHARD_METADATA_KEY)) { builder.append('[').append(getShardId()).append(']'); } builder.append(' '); @@ -512,7 +713,7 @@ public static T writeStackTraces(T throwable, StreamOutput * in id order below. If you want to remove an exception leave a tombstone comment and mark the id as null in * ExceptionSerializationTests.testIds.ids. */ - enum ElasticsearchExceptionHandle { + private enum ElasticsearchExceptionHandle { INDEX_SHARD_SNAPSHOT_FAILED_EXCEPTION(org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException.class, org.elasticsearch.index.snapshots.IndexShardSnapshotFailedException::new, 0, UNKNOWN_VERSION_ADDED), DFS_PHASE_EXECUTION_EXCEPTION(org.elasticsearch.search.dfs.DfsPhaseExecutionException.class, @@ -564,8 +765,7 @@ enum ElasticsearchExceptionHandle { org.elasticsearch.search.SearchContextMissingException::new, 24, UNKNOWN_VERSION_ADDED), GENERAL_SCRIPT_EXCEPTION(org.elasticsearch.script.GeneralScriptException.class, org.elasticsearch.script.GeneralScriptException::new, 25, UNKNOWN_VERSION_ADDED), - BATCH_OPERATION_EXCEPTION(org.elasticsearch.index.shard.TranslogRecoveryPerformer.BatchOperationException.class, - org.elasticsearch.index.shard.TranslogRecoveryPerformer.BatchOperationException::new, 26, UNKNOWN_VERSION_ADDED), + // 26 was BatchOperationException SNAPSHOT_CREATION_EXCEPTION(org.elasticsearch.snapshots.SnapshotCreationException.class, org.elasticsearch.snapshots.SnapshotCreationException::new, 27, UNKNOWN_VERSION_ADDED), DELETE_FAILED_ENGINE_EXCEPTION(org.elasticsearch.index.engine.DeleteFailedEngineException.class, // deprecated in 6.0, remove in 7.0 @@ -629,8 +829,7 @@ enum ElasticsearchExceptionHandle { org.elasticsearch.transport.SendRequestTransportException::new, 58, UNKNOWN_VERSION_ADDED), ES_REJECTED_EXECUTION_EXCEPTION(org.elasticsearch.common.util.concurrent.EsRejectedExecutionException.class, org.elasticsearch.common.util.concurrent.EsRejectedExecutionException::new, 59, UNKNOWN_VERSION_ADDED), - EARLY_TERMINATION_EXCEPTION(org.elasticsearch.common.lucene.Lucene.EarlyTerminationException.class, - org.elasticsearch.common.lucene.Lucene.EarlyTerminationException::new, 60, UNKNOWN_VERSION_ADDED), + // 60 used to be for EarlyTerminationException // 61 used to be for RoutingValidationException NOT_SERIALIZABLE_EXCEPTION_WRAPPER(org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper.class, org.elasticsearch.common.io.stream.NotSerializableExceptionWrapper::new, 62, UNKNOWN_VERSION_ADDED), @@ -676,8 +875,7 @@ enum ElasticsearchExceptionHandle { 
org.elasticsearch.transport.ReceiveTimeoutTransportException::new, 83, UNKNOWN_VERSION_ADDED), NODE_DISCONNECTED_EXCEPTION(org.elasticsearch.transport.NodeDisconnectedException.class, org.elasticsearch.transport.NodeDisconnectedException::new, 84, UNKNOWN_VERSION_ADDED), - ALREADY_EXPIRED_EXCEPTION(org.elasticsearch.index.AlreadyExpiredException.class, - org.elasticsearch.index.AlreadyExpiredException::new, 85, UNKNOWN_VERSION_ADDED), + // 85 used to be for AlreadyExpiredException AGGREGATION_EXECUTION_EXCEPTION(org.elasticsearch.search.aggregations.AggregationExecutionException.class, org.elasticsearch.search.aggregations.AggregationExecutionException::new, 86, UNKNOWN_VERSION_ADDED), // 87 used to be for MergeMappingException @@ -753,8 +951,7 @@ enum ElasticsearchExceptionHandle { org.elasticsearch.search.SearchContextException::new, 127, UNKNOWN_VERSION_ADDED), SEARCH_SOURCE_BUILDER_EXCEPTION(org.elasticsearch.search.builder.SearchSourceBuilderException.class, org.elasticsearch.search.builder.SearchSourceBuilderException::new, 128, UNKNOWN_VERSION_ADDED), - ENGINE_CLOSED_EXCEPTION(org.elasticsearch.index.engine.EngineClosedException.class, - org.elasticsearch.index.engine.EngineClosedException::new, 129, UNKNOWN_VERSION_ADDED), + // 129 was EngineClosedException NO_SHARD_AVAILABLE_ACTION_EXCEPTION(org.elasticsearch.action.NoShardAvailableActionException.class, org.elasticsearch.action.NoShardAvailableActionException::new, 130, UNKNOWN_VERSION_ADDED), UNAVAILABLE_SHARDS_EXCEPTION(org.elasticsearch.action.UnavailableShardsException.class, @@ -785,19 +982,19 @@ enum ElasticsearchExceptionHandle { STATUS_EXCEPTION(org.elasticsearch.ElasticsearchStatusException.class, org.elasticsearch.ElasticsearchStatusException::new, 145, UNKNOWN_VERSION_ADDED), TASK_CANCELLED_EXCEPTION(org.elasticsearch.tasks.TaskCancelledException.class, - org.elasticsearch.tasks.TaskCancelledException::new, 146, Version.V_5_1_1_UNRELEASED), + org.elasticsearch.tasks.TaskCancelledException::new, 146, Version.V_5_1_1), SHARD_LOCK_OBTAIN_FAILED_EXCEPTION(org.elasticsearch.env.ShardLockObtainFailedException.class, org.elasticsearch.env.ShardLockObtainFailedException::new, 147, Version.V_5_0_2), UNKNOWN_NAMED_OBJECT_EXCEPTION(org.elasticsearch.common.xcontent.NamedXContentRegistry.UnknownNamedObjectException.class, - org.elasticsearch.common.xcontent.NamedXContentRegistry.UnknownNamedObjectException::new, 148, Version.V_5_2_0_UNRELEASED); + org.elasticsearch.common.xcontent.NamedXContentRegistry.UnknownNamedObjectException::new, 148, Version.V_5_2_0); final Class exceptionClass; - final FunctionThatThrowsIOException constructor; + final CheckedFunction constructor; final int id; final Version versionAdded; ElasticsearchExceptionHandle(Class exceptionClass, - FunctionThatThrowsIOException constructor, int id, + CheckedFunction constructor, int id, Version versionAdded) { // We need the exceptionClass because you can't dig it out of the constructor reliably. this.exceptionClass = exceptionClass; @@ -807,6 +1004,30 @@ ElasticsearchExceptionHandle(Class excepti } } + /** + * Returns an array of all registered handle IDs. These are the IDs for every registered + * exception. + * + * @return an array of all registered handle IDs + */ + static int[] ids() { + return Arrays.stream(ElasticsearchExceptionHandle.values()).mapToInt(h -> h.id).toArray(); + } + + /** + * Returns an array of all registered pairs of handle IDs and exception classes. These pairs are + * provided for every registered exception. 
+ * + * @return an array of all registered pairs of handle IDs and exception classes + */ + static Tuple>[] classes() { + @SuppressWarnings("unchecked") + final Tuple>[] ts = + Arrays.stream(ElasticsearchExceptionHandle.values()) + .map(h -> Tuple.tuple(h.id, h.exceptionClass)).toArray(Tuple[]::new); + return ts; + } + static { ID_TO_SUPPLIER = unmodifiableMap(Arrays .stream(ElasticsearchExceptionHandle.values()).collect(Collectors.toMap(e -> e.id, e -> e.constructor))); @@ -815,9 +1036,9 @@ ElasticsearchExceptionHandle(Class excepti } public Index getIndex() { - List index = getHeader(INDEX_HEADER_KEY); + List index = getMetadata(INDEX_METADATA_KEY); if (index != null && index.isEmpty() == false) { - List index_uuid = getHeader(INDEX_HEADER_KEY_UUID); + List index_uuid = getMetadata(INDEX_METADATA_KEY_UUID); return new Index(index.get(0), index_uuid.get(0)); } @@ -825,7 +1046,7 @@ public Index getIndex() { } public ShardId getShardId() { - List shard = getHeader(SHARD_HEADER_KEY); + List shard = getMetadata(SHARD_METADATA_KEY); if (shard != null && shard.isEmpty() == false) { return new ShardId(getIndex(), Integer.parseInt(shard.get(0))); } @@ -834,8 +1055,8 @@ public ShardId getShardId() { public void setIndex(Index index) { if (index != null) { - addHeader(INDEX_HEADER_KEY, index.getName()); - addHeader(INDEX_HEADER_KEY_UUID, index.getUUID()); + addMetadata(INDEX_METADATA_KEY, index.getName()); + addMetadata(INDEX_METADATA_KEY_UUID, index.getUUID()); } } @@ -848,27 +1069,22 @@ public void setIndex(String index) { public void setShard(ShardId shardId) { if (shardId != null) { setIndex(shardId.getIndex()); - addHeader(SHARD_HEADER_KEY, Integer.toString(shardId.id())); + addMetadata(SHARD_METADATA_KEY, Integer.toString(shardId.id())); } } - public void setShard(String index, int shardId) { - setIndex(index); - addHeader(SHARD_HEADER_KEY, Integer.toString(shardId)); - } - public void setResources(String type, String... 
id) { assert type != null; - addHeader(RESOURCE_HEADER_ID_KEY, id); - addHeader(RESOURCE_HEADER_TYPE_KEY, type); + addMetadata(RESOURCE_METADATA_ID_KEY, id); + addMetadata(RESOURCE_METADATA_TYPE_KEY, type); } public List getResourceId() { - return getHeader(RESOURCE_HEADER_ID_KEY); + return getMetadata(RESOURCE_METADATA_ID_KEY); } public String getResourceType() { - List header = getHeader(RESOURCE_HEADER_TYPE_KEY); + List header = getMetadata(RESOURCE_METADATA_TYPE_KEY); if (header != null && header.isEmpty() == false) { assert header.size() == 1; return header.get(0); @@ -876,26 +1092,6 @@ public String getResourceType() { return null; } - public static void renderException(XContentBuilder builder, Params params, Exception e) throws IOException { - builder.startObject(ERROR); - final ElasticsearchException[] rootCauses = ElasticsearchException.guessRootCauses(e); - builder.field(ROOT_CAUSE); - builder.startArray(); - for (ElasticsearchException rootCause : rootCauses) { - builder.startObject(); - rootCause.toXContent(builder, new ToXContent.DelegatingMapParams( - Collections.singletonMap(ElasticsearchException.REST_EXCEPTION_SKIP_CAUSE, "true"), params)); - builder.endObject(); - } - builder.endArray(); - ElasticsearchException.toXContent(builder, params, e); - builder.endObject(); - } - - interface FunctionThatThrowsIOException { - R apply(T t) throws IOException; - } - // lower cases and adds underscores to transitions in a name private static String toUnderscoreCase(String value) { StringBuilder sb = new StringBuilder(); diff --git a/core/src/main/java/org/elasticsearch/ExceptionsHelper.java b/core/src/main/java/org/elasticsearch/ExceptionsHelper.java index c30662a093479..e89e04a301da1 100644 --- a/core/src/main/java/org/elasticsearch/ExceptionsHelper.java +++ b/core/src/main/java/org/elasticsearch/ExceptionsHelper.java @@ -214,7 +214,7 @@ static class GroupBy { final String index; final Class causeType; - public GroupBy(Throwable t) { + GroupBy(Throwable t) { if (t instanceof ElasticsearchException) { final Index index = ((ElasticsearchException) t).getIndex(); if (index != null) { diff --git a/core/src/main/java/org/elasticsearch/SpecialPermission.java b/core/src/main/java/org/elasticsearch/SpecialPermission.java index 7d796346c6472..9e5571a5b0af9 100644 --- a/core/src/main/java/org/elasticsearch/SpecialPermission.java +++ b/core/src/main/java/org/elasticsearch/SpecialPermission.java @@ -57,6 +57,9 @@ * */ public final class SpecialPermission extends BasicPermission { + + public static final SpecialPermission INSTANCE = new SpecialPermission(); + /** * Creates a new SpecialPermision object. */ @@ -76,4 +79,14 @@ public SpecialPermission() { public SpecialPermission(String name, String actions) { this(); } + + /** + * Check that the current stack has {@link SpecialPermission} access according to the {@link SecurityManager}. 
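The new SpecialPermission.check() above centralizes the SecurityManager check that security-sensitive code performs before escalating privileges. A hedged sketch of the usual call pattern; the property lookup is an arbitrary stand-in for whatever privileged operation follows:

import java.security.AccessController;
import java.security.PrivilegedAction;

import org.elasticsearch.SpecialPermission;

class PrivilegedLookupSketch {
    static String readProperty(String name) {
        // fail fast if the calling code was not granted SpecialPermission
        SpecialPermission.check();
        return AccessController.doPrivileged((PrivilegedAction<String>) () -> System.getProperty(name));
    }
}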
+ */ + public static void check() { + SecurityManager sm = System.getSecurityManager(); + if (sm != null) { + sm.checkPermission(INSTANCE); + } + } } diff --git a/core/src/main/java/org/elasticsearch/Version.java b/core/src/main/java/org/elasticsearch/Version.java index 1ee42adaee3db..288a52a0a1fcf 100644 --- a/core/src/main/java/org/elasticsearch/Version.java +++ b/core/src/main/java/org/elasticsearch/Version.java @@ -28,7 +28,6 @@ import org.elasticsearch.monitor.jvm.JvmInfo; import java.io.IOException; -import java.util.Comparator; public class Version implements Comparable { /* @@ -36,44 +35,6 @@ public class Version implements Comparable { * values below 25 are for alpha builder (since 5.0), and above 25 and below 50 are beta builds, and below 99 are RC builds, with 99 * indicating a release the (internal) format of the id is there so we can easily do after/before checks on the id */ - public static final int V_2_0_0_ID = 2000099; - public static final Version V_2_0_0 = new Version(V_2_0_0_ID, org.apache.lucene.util.Version.LUCENE_5_2_1); - public static final int V_2_0_1_ID = 2000199; - public static final Version V_2_0_1 = new Version(V_2_0_1_ID, org.apache.lucene.util.Version.LUCENE_5_2_1); - public static final int V_2_0_2_ID = 2000299; - public static final Version V_2_0_2 = new Version(V_2_0_2_ID, org.apache.lucene.util.Version.LUCENE_5_2_1); - public static final int V_2_1_0_ID = 2010099; - public static final Version V_2_1_0 = new Version(V_2_1_0_ID, org.apache.lucene.util.Version.LUCENE_5_3_1); - public static final int V_2_1_1_ID = 2010199; - public static final Version V_2_1_1 = new Version(V_2_1_1_ID, org.apache.lucene.util.Version.LUCENE_5_3_1); - public static final int V_2_1_2_ID = 2010299; - public static final Version V_2_1_2 = new Version(V_2_1_2_ID, org.apache.lucene.util.Version.LUCENE_5_3_1); - public static final int V_2_2_0_ID = 2020099; - public static final Version V_2_2_0 = new Version(V_2_2_0_ID, org.apache.lucene.util.Version.LUCENE_5_4_1); - public static final int V_2_2_1_ID = 2020199; - public static final Version V_2_2_1 = new Version(V_2_2_1_ID, org.apache.lucene.util.Version.LUCENE_5_4_1); - public static final int V_2_2_2_ID = 2020299; - public static final Version V_2_2_2 = new Version(V_2_2_2_ID, org.apache.lucene.util.Version.LUCENE_5_4_1); - public static final int V_2_3_0_ID = 2030099; - public static final Version V_2_3_0 = new Version(V_2_3_0_ID, org.apache.lucene.util.Version.LUCENE_5_5_0); - public static final int V_2_3_1_ID = 2030199; - public static final Version V_2_3_1 = new Version(V_2_3_1_ID, org.apache.lucene.util.Version.LUCENE_5_5_0); - public static final int V_2_3_2_ID = 2030299; - public static final Version V_2_3_2 = new Version(V_2_3_2_ID, org.apache.lucene.util.Version.LUCENE_5_5_0); - public static final int V_2_3_3_ID = 2030399; - public static final Version V_2_3_3 = new Version(V_2_3_3_ID, org.apache.lucene.util.Version.LUCENE_5_5_0); - public static final int V_2_3_4_ID = 2030499; - public static final Version V_2_3_4 = new Version(V_2_3_4_ID, org.apache.lucene.util.Version.LUCENE_5_5_0); - public static final int V_2_3_5_ID = 2030599; - public static final Version V_2_3_5 = new Version(V_2_3_5_ID, org.apache.lucene.util.Version.LUCENE_5_5_0); - public static final int V_2_4_0_ID = 2040099; - public static final Version V_2_4_0 = new Version(V_2_4_0_ID, org.apache.lucene.util.Version.LUCENE_5_5_2); - public static final int V_2_4_1_ID = 2040199; - public static final Version V_2_4_1 = new Version(V_2_4_1_ID, 
org.apache.lucene.util.Version.LUCENE_5_5_2); - public static final int V_2_4_2_ID = 2040299; - public static final Version V_2_4_2 = new Version(V_2_4_2_ID, org.apache.lucene.util.Version.LUCENE_5_5_2); - public static final int V_2_4_3_ID = 2040399; - public static final Version V_2_4_3 = new Version(V_2_4_3_ID, org.apache.lucene.util.Version.LUCENE_5_5_2); public static final int V_5_0_0_alpha1_ID = 5000001; public static final Version V_5_0_0_alpha1 = new Version(V_5_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_6_0_0); public static final int V_5_0_0_alpha2_ID = 5000002; @@ -94,21 +55,43 @@ public class Version implements Comparable { public static final Version V_5_0_1 = new Version(V_5_0_1_ID, org.apache.lucene.util.Version.LUCENE_6_2_1); public static final int V_5_0_2_ID = 5000299; public static final Version V_5_0_2 = new Version(V_5_0_2_ID, org.apache.lucene.util.Version.LUCENE_6_2_1); - public static final int V_5_0_3_ID_UNRELEASED = 5000399; - public static final Version V_5_0_3_UNRELEASED = new Version(V_5_0_3_ID_UNRELEASED, org.apache.lucene.util.Version.LUCENE_6_3_0); // no version constant for 5.1.0 due to inadvertent release - public static final int V_5_1_1_ID_UNRELEASED = 5010199; - public static final Version V_5_1_1_UNRELEASED = new Version(V_5_1_1_ID_UNRELEASED, org.apache.lucene.util.Version.LUCENE_6_3_0); - public static final int V_5_1_2_ID_UNRELEASED = 5010299; - public static final Version V_5_1_2_UNRELEASED = new Version(V_5_1_2_ID_UNRELEASED, org.apache.lucene.util.Version.LUCENE_6_3_0); - public static final int V_5_2_0_ID_UNRELEASED = 5020099; - public static final Version V_5_2_0_UNRELEASED = new Version(V_5_2_0_ID_UNRELEASED, org.apache.lucene.util.Version.LUCENE_6_3_0); - public static final int V_5_3_0_ID_UNRELEASED = 5030099; - public static final Version V_5_3_0_UNRELEASED = new Version(V_5_3_0_ID_UNRELEASED, org.apache.lucene.util.Version.LUCENE_6_4_0); - public static final int V_6_0_0_alpha1_ID_UNRELEASED = 6000001; - public static final Version V_6_0_0_alpha1_UNRELEASED = - new Version(V_6_0_0_alpha1_ID_UNRELEASED, org.apache.lucene.util.Version.LUCENE_6_4_0); - public static final Version CURRENT = V_6_0_0_alpha1_UNRELEASED; + public static final int V_5_1_1_ID = 5010199; + public static final Version V_5_1_1 = new Version(V_5_1_1_ID, org.apache.lucene.util.Version.LUCENE_6_3_0); + public static final int V_5_1_2_ID = 5010299; + public static final Version V_5_1_2 = new Version(V_5_1_2_ID, org.apache.lucene.util.Version.LUCENE_6_3_0); + public static final int V_5_2_0_ID = 5020099; + public static final Version V_5_2_0 = new Version(V_5_2_0_ID, org.apache.lucene.util.Version.LUCENE_6_4_0); + public static final int V_5_2_1_ID = 5020199; + public static final Version V_5_2_1 = new Version(V_5_2_1_ID, org.apache.lucene.util.Version.LUCENE_6_4_1); + public static final int V_5_2_2_ID = 5020299; + public static final Version V_5_2_2 = new Version(V_5_2_2_ID, org.apache.lucene.util.Version.LUCENE_6_4_1); + public static final int V_5_3_0_ID = 5030099; + public static final Version V_5_3_0 = new Version(V_5_3_0_ID, org.apache.lucene.util.Version.LUCENE_6_4_1); + public static final int V_5_3_1_ID = 5030199; + public static final Version V_5_3_1 = new Version(V_5_3_1_ID, org.apache.lucene.util.Version.LUCENE_6_4_2); + public static final int V_5_3_2_ID = 5030299; + public static final Version V_5_3_2 = new Version(V_5_3_2_ID, org.apache.lucene.util.Version.LUCENE_6_4_2); + public static final int V_5_3_3_ID = 5030399; + public static final Version 
V_5_3_3 = new Version(V_5_3_3_ID, org.apache.lucene.util.Version.LUCENE_6_4_2); + public static final int V_5_4_0_ID = 5040099; + public static final Version V_5_4_0 = new Version(V_5_4_0_ID, org.apache.lucene.util.Version.LUCENE_6_5_0); + public static final int V_5_4_1_ID = 5040199; + public static final Version V_5_4_1 = new Version(V_5_4_1_ID, org.apache.lucene.util.Version.LUCENE_6_5_1); + public static final int V_5_5_0_ID = 5050099; + public static final Version V_5_5_0 = new Version(V_5_5_0_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); + public static final int V_5_6_0_ID = 5060099; + public static final Version V_5_6_0 = new Version(V_5_6_0_ID, org.apache.lucene.util.Version.LUCENE_6_6_0); + public static final int V_6_0_0_alpha1_ID = 6000001; + public static final Version V_6_0_0_alpha1 = + new Version(V_6_0_0_alpha1_ID, org.apache.lucene.util.Version.LUCENE_7_0_0); + public static final int V_6_0_0_alpha2_ID = 6000002; + public static final Version V_6_0_0_alpha2 = + new Version(V_6_0_0_alpha2_ID, org.apache.lucene.util.Version.LUCENE_7_0_0); + public static final int V_6_0_0_alpha3_ID = 6000003; + public static final Version V_6_0_0_alpha3 = + new Version(V_6_0_0_alpha3_ID, org.apache.lucene.util.Version.LUCENE_7_0_0); + public static final Version CURRENT = V_6_0_0_alpha3; // unreleased versions must be added to the above list with the suffix _UNRELEASED (with the exception of CURRENT) @@ -123,18 +106,38 @@ public static Version readVersion(StreamInput in) throws IOException { public static Version fromId(int id) { switch (id) { - case V_6_0_0_alpha1_ID_UNRELEASED: - return V_6_0_0_alpha1_UNRELEASED; - case V_5_3_0_ID_UNRELEASED: - return V_5_3_0_UNRELEASED; - case V_5_2_0_ID_UNRELEASED: - return V_5_2_0_UNRELEASED; - case V_5_1_2_ID_UNRELEASED: - return V_5_1_2_UNRELEASED; - case V_5_1_1_ID_UNRELEASED: - return V_5_1_1_UNRELEASED; - case V_5_0_3_ID_UNRELEASED: - return V_5_0_3_UNRELEASED; + case V_6_0_0_alpha3_ID: + return V_6_0_0_alpha3; + case V_6_0_0_alpha2_ID: + return V_6_0_0_alpha2; + case V_6_0_0_alpha1_ID: + return V_6_0_0_alpha1; + case V_5_6_0_ID: + return V_5_6_0; + case V_5_5_0_ID: + return V_5_5_0; + case V_5_4_1_ID: + return V_5_4_1; + case V_5_4_0_ID: + return V_5_4_0; + case V_5_3_3_ID: + return V_5_3_3; + case V_5_3_2_ID: + return V_5_3_2; + case V_5_3_1_ID: + return V_5_3_1; + case V_5_3_0_ID: + return V_5_3_0; + case V_5_2_2_ID: + return V_5_2_2; + case V_5_2_1_ID: + return V_5_2_1; + case V_5_2_0_ID: + return V_5_2_0; + case V_5_1_2_ID: + return V_5_1_2; + case V_5_1_1_ID: + return V_5_1_1; case V_5_0_2_ID: return V_5_0_2; case V_5_0_1_ID: @@ -155,44 +158,6 @@ public static Version fromId(int id) { return V_5_0_0_alpha2; case V_5_0_0_alpha1_ID: return V_5_0_0_alpha1; - case V_2_4_3_ID: - return V_2_4_3; - case V_2_4_2_ID: - return V_2_4_2; - case V_2_4_1_ID: - return V_2_4_1; - case V_2_4_0_ID: - return V_2_4_0; - case V_2_3_5_ID: - return V_2_3_5; - case V_2_3_4_ID: - return V_2_3_4; - case V_2_3_3_ID: - return V_2_3_3; - case V_2_3_2_ID: - return V_2_3_2; - case V_2_3_1_ID: - return V_2_3_1; - case V_2_3_0_ID: - return V_2_3_0; - case V_2_2_2_ID: - return V_2_2_2; - case V_2_2_1_ID: - return V_2_2_1; - case V_2_2_0_ID: - return V_2_2_0; - case V_2_1_2_ID: - return V_2_1_2; - case V_2_1_1_ID: - return V_2_1_1; - case V_2_1_0_ID: - return V_2_1_0; - case V_2_0_2_ID: - return V_2_0_2; - case V_2_0_1_ID: - return V_2_0_1; - case V_2_0_0_ID: - return V_2_0_0; default: return new Version(id, org.apache.lucene.util.Version.LATEST); } @@ -241,7 +206,7 @@ 
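The version constants and the fromId switch above encode each release in a single integer: major × 1,000,000 + minor × 10,000 + revision × 100 + build, where a build suffix of 99 marks a GA release (so 5040099 is 5.4.0). A small sketch round-tripping through the helpers in this class; the assertions only restate constants visible in this hunk:

// id = major * 1000000 + minor * 10000 + revision * 100 + build, build 99 = GA release
Version v = Version.fromString("5.4.0");
assert v.id == Version.V_5_4_0_ID;                    // 5040099
assert Version.fromId(Version.V_5_4_0_ID) == Version.V_5_4_0;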
public static Version fromString(String version) { if (snapshot = version.endsWith("-SNAPSHOT")) { version = version.substring(0, version.length() - 9); } - String[] parts = version.split("\\.|\\-"); + String[] parts = version.split("[.-]"); if (parts.length < 3 || parts.length > 4) { throw new IllegalArgumentException( "the version needs to contain major, minor, and revision, and optionally the build: " + version); @@ -330,9 +295,12 @@ public int compareTo(Version other) { public Version minimumCompatibilityVersion() { final int bwcMajor; final int bwcMinor; - if (this.onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { - bwcMajor = major - 1; - bwcMinor = 0; // TODO we have to move this to the latest released minor of the last major but for now we just keep + if (major == 6) { // we only specialize for current major here + bwcMajor = Version.V_5_4_0.major; + bwcMinor = Version.V_5_4_0.minor; + } else if (major > 6) { // all the future versions are compatible with first minor... + bwcMajor = major -1; + bwcMinor = 0; } else { bwcMajor = major; bwcMinor = 0; @@ -342,7 +310,8 @@ public Version minimumCompatibilityVersion() { /** * Returns the minimum created index version that this version supports. Indices created with lower versions - * can't be used with this version. + * can't be used with this version. This should also be used for file based serialization backwards compatibility ie. on serialization + * code that is used to read / write file formats like transaction logs, cluster state, and index metadata. */ public Version minimumIndexCompatibilityVersion() { final int bwcMajor; diff --git a/core/src/main/java/org/elasticsearch/action/ActionListener.java b/core/src/main/java/org/elasticsearch/action/ActionListener.java index ef26867600e69..e0d91a9036437 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionListener.java +++ b/core/src/main/java/org/elasticsearch/action/ActionListener.java @@ -19,8 +19,12 @@ package org.elasticsearch.action; +import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.common.CheckedConsumer; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.atomic.AtomicBoolean; import java.util.function.Consumer; /** @@ -65,4 +69,41 @@ public void onFailure(Exception e) { } }; } + + /** + * Notifies every given listener with the response passed to {@link #onResponse(Object)}. If a listener itself throws an exception + * the exception is forwarded to {@link #onFailure(Exception)}. If in turn {@link #onFailure(Exception)} fails all remaining + * listeners will be processed and the caught exception will be re-thrown. + */ + static void onResponse(Iterable> listeners, Response response) { + List exceptionList = new ArrayList<>(); + for (ActionListener listener : listeners) { + try { + listener.onResponse(response); + } catch (Exception ex) { + try { + listener.onFailure(ex); + } catch (Exception ex1) { + exceptionList.add(ex1); + } + } + } + ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptionList); + } + + /** + * Notifies every given listener with the failure passed to {@link #onFailure(Exception)}. If a listener itself throws an exception + * all remaining listeners will be processed and the caught exception will be re-thrown. 
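The two static helpers added to ActionListener above fan a single result or failure out to many listeners while guaranteeing that one misbehaving listener cannot stop the others from being notified: exceptions are collected and re-thrown at the end via maybeThrowRuntimeAndSuppress. A hedged usage sketch with placeholder listener bodies:

import java.util.List;

import org.elasticsearch.action.ActionListener;

class FanOutSketch {
    static void notifyListeners(List<ActionListener<String>> listeners) {
        // every listener is notified; a listener that throws from onResponse is
        // routed to its own onFailure, and remaining failures are suppressed and re-thrown
        ActionListener.onResponse(listeners, "done");
    }

    static ActionListener<String> printingListener() {
        return new ActionListener<String>() {
            @Override
            public void onResponse(String response) {
                System.out.println("got " + response);
            }

            @Override
            public void onFailure(Exception e) {
                e.printStackTrace();
            }
        };
    }
}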
+ */ + static void onFailure(Iterable> listeners, Exception failure) { + List exceptionList = new ArrayList<>(); + for (ActionListener listener : listeners) { + try { + listener.onFailure(failure); + } catch (Exception ex) { + exceptionList.add(ex); + } + } + ExceptionsHelper.maybeThrowRuntimeAndSuppress(exceptionList); + } } diff --git a/core/src/main/java/org/elasticsearch/action/ActionModule.java b/core/src/main/java/org/elasticsearch/action/ActionModule.java index a24ed5f808399..89994dc30f3d1 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionModule.java +++ b/core/src/main/java/org/elasticsearch/action/ActionModule.java @@ -19,14 +19,6 @@ package org.elasticsearch.action; -import java.util.ArrayList; -import java.util.HashSet; -import java.util.List; -import java.util.Map; -import java.util.Set; -import java.util.function.UnaryOperator; -import java.util.stream.Collectors; - import org.apache.logging.log4j.Logger; import org.elasticsearch.action.admin.cluster.allocation.ClusterAllocationExplainAction; import org.elasticsearch.action.admin.cluster.allocation.TransportClusterAllocationExplainAction; @@ -45,6 +37,10 @@ import org.elasticsearch.action.admin.cluster.node.tasks.get.TransportGetTaskAction; import org.elasticsearch.action.admin.cluster.node.tasks.list.ListTasksAction; import org.elasticsearch.action.admin.cluster.node.tasks.list.TransportListTasksAction; +import org.elasticsearch.action.admin.cluster.node.usage.NodesUsageAction; +import org.elasticsearch.action.admin.cluster.node.usage.TransportNodesUsageAction; +import org.elasticsearch.action.admin.cluster.remote.RemoteInfoAction; +import org.elasticsearch.action.admin.cluster.remote.TransportRemoteInfoAction; import org.elasticsearch.action.admin.cluster.repositories.delete.DeleteRepositoryAction; import org.elasticsearch.action.admin.cluster.repositories.delete.TransportDeleteRepositoryAction; import org.elasticsearch.action.admin.cluster.repositories.get.GetRepositoriesAction; @@ -157,6 +153,9 @@ import org.elasticsearch.action.delete.TransportDeleteAction; import org.elasticsearch.action.explain.ExplainAction; import org.elasticsearch.action.explain.TransportExplainAction; +import org.elasticsearch.action.fieldcaps.FieldCapabilitiesAction; +import org.elasticsearch.action.fieldcaps.TransportFieldCapabilitiesAction; +import org.elasticsearch.action.fieldcaps.TransportFieldCapabilitiesIndexAction; import org.elasticsearch.action.fieldstats.FieldStatsAction; import org.elasticsearch.action.fieldstats.TransportFieldStatsAction; import org.elasticsearch.action.get.GetAction; @@ -196,19 +195,24 @@ import org.elasticsearch.action.termvectors.TransportTermVectorsAction; import org.elasticsearch.action.update.TransportUpdateAction; import org.elasticsearch.action.update.UpdateAction; +import org.elasticsearch.client.node.NodeClient; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.common.NamedRegistry; import org.elasticsearch.common.inject.AbstractModule; import org.elasticsearch.common.inject.multibindings.MapBinder; import org.elasticsearch.common.inject.multibindings.Multibinder; import org.elasticsearch.common.logging.ESLoggerFactory; -import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.ClusterSettings; +import org.elasticsearch.common.settings.IndexScopedSettings; import org.elasticsearch.common.settings.Settings; +import 
org.elasticsearch.common.settings.SettingsFilter; +import org.elasticsearch.indices.breaker.CircuitBreakerService; import org.elasticsearch.plugins.ActionPlugin; import org.elasticsearch.plugins.ActionPlugin.ActionHandler; import org.elasticsearch.rest.RestController; import org.elasticsearch.rest.RestHandler; +import org.elasticsearch.rest.action.RestFieldCapabilitiesAction; import org.elasticsearch.rest.action.RestFieldStatsAction; import org.elasticsearch.rest.action.RestMainAction; import org.elasticsearch.rest.action.admin.cluster.RestCancelTasksAction; @@ -232,13 +236,14 @@ import org.elasticsearch.rest.action.admin.cluster.RestNodesHotThreadsAction; import org.elasticsearch.rest.action.admin.cluster.RestNodesInfoAction; import org.elasticsearch.rest.action.admin.cluster.RestNodesStatsAction; +import org.elasticsearch.rest.action.admin.cluster.RestNodesUsageAction; import org.elasticsearch.rest.action.admin.cluster.RestPendingClusterTasksAction; import org.elasticsearch.rest.action.admin.cluster.RestPutRepositoryAction; import org.elasticsearch.rest.action.admin.cluster.RestPutStoredScriptAction; +import org.elasticsearch.rest.action.admin.cluster.RestRemoteClusterInfoAction; import org.elasticsearch.rest.action.admin.cluster.RestRestoreSnapshotAction; import org.elasticsearch.rest.action.admin.cluster.RestSnapshotsStatusAction; import org.elasticsearch.rest.action.admin.cluster.RestVerifyRepositoryAction; -import org.elasticsearch.rest.action.admin.indices.RestAliasesExistAction; import org.elasticsearch.rest.action.admin.indices.RestAnalyzeAction; import org.elasticsearch.rest.action.admin.indices.RestClearIndicesCacheAction; import org.elasticsearch.rest.action.admin.indices.RestCloseIndexAction; @@ -248,16 +253,17 @@ import org.elasticsearch.rest.action.admin.indices.RestFlushAction; import org.elasticsearch.rest.action.admin.indices.RestForceMergeAction; import org.elasticsearch.rest.action.admin.indices.RestGetAliasesAction; +import org.elasticsearch.rest.action.admin.indices.RestGetAllAliasesAction; +import org.elasticsearch.rest.action.admin.indices.RestGetAllMappingsAction; +import org.elasticsearch.rest.action.admin.indices.RestGetAllSettingsAction; import org.elasticsearch.rest.action.admin.indices.RestGetFieldMappingAction; import org.elasticsearch.rest.action.admin.indices.RestGetIndexTemplateAction; import org.elasticsearch.rest.action.admin.indices.RestGetIndicesAction; import org.elasticsearch.rest.action.admin.indices.RestGetMappingAction; import org.elasticsearch.rest.action.admin.indices.RestGetSettingsAction; -import org.elasticsearch.rest.action.admin.indices.RestHeadIndexTemplateAction; import org.elasticsearch.rest.action.admin.indices.RestIndexDeleteAliasesAction; import org.elasticsearch.rest.action.admin.indices.RestIndexPutAliasAction; import org.elasticsearch.rest.action.admin.indices.RestIndicesAliasesAction; -import org.elasticsearch.rest.action.admin.indices.RestIndicesExistsAction; import org.elasticsearch.rest.action.admin.indices.RestIndicesSegmentsAction; import org.elasticsearch.rest.action.admin.indices.RestIndicesShardStoresAction; import org.elasticsearch.rest.action.admin.indices.RestIndicesStatsAction; @@ -295,7 +301,6 @@ import org.elasticsearch.rest.action.document.RestDeleteAction; import org.elasticsearch.rest.action.document.RestGetAction; import org.elasticsearch.rest.action.document.RestGetSourceAction; -import org.elasticsearch.rest.action.document.RestHeadAction; import org.elasticsearch.rest.action.document.RestIndexAction; import 
org.elasticsearch.rest.action.document.RestMultiGetAction; import org.elasticsearch.rest.action.document.RestMultiTermVectorsAction; @@ -311,6 +316,16 @@ import org.elasticsearch.rest.action.search.RestSearchAction; import org.elasticsearch.rest.action.search.RestSearchScrollAction; import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.usage.UsageService; + +import java.util.ArrayList; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.function.Consumer; +import java.util.function.Supplier; +import java.util.function.UnaryOperator; +import java.util.stream.Collectors; import static java.util.Collections.unmodifiableList; import static java.util.Collections.unmodifiableMap; @@ -324,6 +339,10 @@ public class ActionModule extends AbstractModule { private final boolean transportClient; private final Settings settings; + private final IndexNameExpressionResolver indexNameExpressionResolver; + private final IndexScopedSettings indexScopedSettings; + private final ClusterSettings clusterSettings; + private final SettingsFilter settingsFilter; private final List actionPlugins; private final Map> actions; private final List> actionFilters; @@ -331,14 +350,20 @@ public class ActionModule extends AbstractModule { private final DestructiveOperations destructiveOperations; private final RestController restController; - public ActionModule(boolean transportClient, Settings settings, IndexNameExpressionResolver resolver, - ClusterSettings clusterSettings, ThreadPool threadPool, List actionPlugins) { + public ActionModule(boolean transportClient, Settings settings, IndexNameExpressionResolver indexNameExpressionResolver, + IndexScopedSettings indexScopedSettings, ClusterSettings clusterSettings, SettingsFilter settingsFilter, + ThreadPool threadPool, List actionPlugins, NodeClient nodeClient, + CircuitBreakerService circuitBreakerService, UsageService usageService) { this.transportClient = transportClient; this.settings = settings; + this.indexNameExpressionResolver = indexNameExpressionResolver; + this.indexScopedSettings = indexScopedSettings; + this.clusterSettings = clusterSettings; + this.settingsFilter = settingsFilter; this.actionPlugins = actionPlugins; actions = setupActions(actionPlugins); actionFilters = setupActionFilters(actionPlugins); - autoCreateIndex = transportClient ? null : new AutoCreateIndex(settings, clusterSettings, resolver); + autoCreateIndex = transportClient ? 
null : new AutoCreateIndex(settings, clusterSettings, indexNameExpressionResolver); destructiveOperations = new DestructiveOperations(settings, clusterSettings); Set headers = actionPlugins.stream().flatMap(p -> p.getRestHeaders().stream()).collect(Collectors.toSet()); UnaryOperator restWrapper = null; @@ -352,9 +377,14 @@ public ActionModule(boolean transportClient, Settings settings, IndexNameExpress restWrapper = newRestWrapper; } } - restController = new RestController(settings, headers, restWrapper); + if (transportClient) { + restController = null; + } else { + restController = new RestController(settings, headers, restWrapper, nodeClient, circuitBreakerService, usageService); + } } + public Map> getActions() { return actions; } @@ -362,7 +392,7 @@ public ActionModule(boolean transportClient, Settings settings, IndexNameExpress static Map> setupActions(List actionPlugins) { // Subclass NamedRegistry for easy registration class ActionRegistry extends NamedRegistry> { - public ActionRegistry() { + ActionRegistry() { super("action"); } @@ -380,7 +410,9 @@ public void reg actions.register(MainAction.INSTANCE, TransportMainAction.class); actions.register(NodesInfoAction.INSTANCE, TransportNodesInfoAction.class); + actions.register(RemoteInfoAction.INSTANCE, TransportRemoteInfoAction.class); actions.register(NodesStatsAction.INSTANCE, TransportNodesStatsAction.class); + actions.register(NodesUsageAction.INSTANCE, TransportNodesUsageAction.class); actions.register(NodesHotThreadsAction.INSTANCE, TransportNodesHotThreadsAction.class); actions.register(ListTasksAction.INSTANCE, TransportListTasksAction.class); actions.register(GetTaskAction.INSTANCE, TransportGetTaskAction.class); @@ -463,6 +495,8 @@ public void reg actions.register(DeleteStoredScriptAction.INSTANCE, TransportDeleteStoredScriptAction.class); actions.register(FieldStatsAction.INSTANCE, TransportFieldStatsAction.class); + actions.register(FieldCapabilitiesAction.INSTANCE, TransportFieldCapabilitiesAction.class, + TransportFieldCapabilitiesIndexAction.class); actions.register(PutPipelineAction.INSTANCE, PutPipelineTransportAction.class); actions.register(GetPipelineAction.INSTANCE, GetPipelineTransportAction.class); @@ -478,147 +512,146 @@ private List> setupActionFilters(List p.getActionFilters().stream()).collect(Collectors.toList())); } - static Set> setupRestHandlers(List actionPlugins) { - Set> handlers = new HashSet<>(); - registerRestHandler(handlers, RestMainAction.class); - registerRestHandler(handlers, RestNodesInfoAction.class); - registerRestHandler(handlers, RestNodesStatsAction.class); - registerRestHandler(handlers, RestNodesHotThreadsAction.class); - registerRestHandler(handlers, RestClusterAllocationExplainAction.class); - registerRestHandler(handlers, RestClusterStatsAction.class); - registerRestHandler(handlers, RestClusterStateAction.class); - registerRestHandler(handlers, RestClusterHealthAction.class); - registerRestHandler(handlers, RestClusterUpdateSettingsAction.class); - registerRestHandler(handlers, RestClusterGetSettingsAction.class); - registerRestHandler(handlers, RestClusterRerouteAction.class); - registerRestHandler(handlers, RestClusterSearchShardsAction.class); - registerRestHandler(handlers, RestPendingClusterTasksAction.class); - registerRestHandler(handlers, RestPutRepositoryAction.class); - registerRestHandler(handlers, RestGetRepositoriesAction.class); - registerRestHandler(handlers, RestDeleteRepositoryAction.class); - registerRestHandler(handlers, RestVerifyRepositoryAction.class); - 
registerRestHandler(handlers, RestGetSnapshotsAction.class); - registerRestHandler(handlers, RestCreateSnapshotAction.class); - registerRestHandler(handlers, RestRestoreSnapshotAction.class); - registerRestHandler(handlers, RestDeleteSnapshotAction.class); - registerRestHandler(handlers, RestSnapshotsStatusAction.class); - - registerRestHandler(handlers, RestIndicesExistsAction.class); - registerRestHandler(handlers, RestTypesExistsAction.class); - registerRestHandler(handlers, RestGetIndicesAction.class); - registerRestHandler(handlers, RestIndicesStatsAction.class); - registerRestHandler(handlers, RestIndicesSegmentsAction.class); - registerRestHandler(handlers, RestIndicesShardStoresAction.class); - registerRestHandler(handlers, RestGetAliasesAction.class); - registerRestHandler(handlers, RestAliasesExistAction.class); - registerRestHandler(handlers, RestIndexDeleteAliasesAction.class); - registerRestHandler(handlers, RestIndexPutAliasAction.class); - registerRestHandler(handlers, RestIndicesAliasesAction.class); - registerRestHandler(handlers, RestCreateIndexAction.class); - registerRestHandler(handlers, RestShrinkIndexAction.class); - registerRestHandler(handlers, RestRolloverIndexAction.class); - registerRestHandler(handlers, RestDeleteIndexAction.class); - registerRestHandler(handlers, RestCloseIndexAction.class); - registerRestHandler(handlers, RestOpenIndexAction.class); - - registerRestHandler(handlers, RestUpdateSettingsAction.class); - registerRestHandler(handlers, RestGetSettingsAction.class); - - registerRestHandler(handlers, RestAnalyzeAction.class); - registerRestHandler(handlers, RestGetIndexTemplateAction.class); - registerRestHandler(handlers, RestPutIndexTemplateAction.class); - registerRestHandler(handlers, RestDeleteIndexTemplateAction.class); - registerRestHandler(handlers, RestHeadIndexTemplateAction.class); - - registerRestHandler(handlers, RestPutMappingAction.class); - registerRestHandler(handlers, RestGetMappingAction.class); - registerRestHandler(handlers, RestGetFieldMappingAction.class); - - registerRestHandler(handlers, RestRefreshAction.class); - registerRestHandler(handlers, RestFlushAction.class); - registerRestHandler(handlers, RestSyncedFlushAction.class); - registerRestHandler(handlers, RestForceMergeAction.class); - registerRestHandler(handlers, RestUpgradeAction.class); - registerRestHandler(handlers, RestClearIndicesCacheAction.class); - - registerRestHandler(handlers, RestIndexAction.class); - registerRestHandler(handlers, RestGetAction.class); - registerRestHandler(handlers, RestGetSourceAction.class); - registerRestHandler(handlers, RestHeadAction.Document.class); - registerRestHandler(handlers, RestHeadAction.Source.class); - registerRestHandler(handlers, RestMultiGetAction.class); - registerRestHandler(handlers, RestDeleteAction.class); - registerRestHandler(handlers, org.elasticsearch.rest.action.document.RestCountAction.class); - registerRestHandler(handlers, RestTermVectorsAction.class); - registerRestHandler(handlers, RestMultiTermVectorsAction.class); - registerRestHandler(handlers, RestBulkAction.class); - registerRestHandler(handlers, RestUpdateAction.class); - - registerRestHandler(handlers, RestSearchAction.class); - registerRestHandler(handlers, RestSearchScrollAction.class); - registerRestHandler(handlers, RestClearScrollAction.class); - registerRestHandler(handlers, RestMultiSearchAction.class); - - registerRestHandler(handlers, RestValidateQueryAction.class); - - registerRestHandler(handlers, RestExplainAction.class); - - 
registerRestHandler(handlers, RestRecoveryAction.class); + public void initRestHandlers(Supplier nodesInCluster) { + List catActions = new ArrayList<>(); + Consumer registerHandler = a -> { + if (a instanceof AbstractCatAction) { + catActions.add((AbstractCatAction) a); + } + }; + registerHandler.accept(new RestMainAction(settings, restController)); + registerHandler.accept(new RestNodesInfoAction(settings, restController, settingsFilter)); + registerHandler.accept(new RestRemoteClusterInfoAction(settings, restController)); + registerHandler.accept(new RestNodesStatsAction(settings, restController)); + registerHandler.accept(new RestNodesUsageAction(settings, restController)); + registerHandler.accept(new RestNodesHotThreadsAction(settings, restController)); + registerHandler.accept(new RestClusterAllocationExplainAction(settings, restController)); + registerHandler.accept(new RestClusterStatsAction(settings, restController)); + registerHandler.accept(new RestClusterStateAction(settings, restController, settingsFilter)); + registerHandler.accept(new RestClusterHealthAction(settings, restController)); + registerHandler.accept(new RestClusterUpdateSettingsAction(settings, restController)); + registerHandler.accept(new RestClusterGetSettingsAction(settings, restController, clusterSettings, settingsFilter)); + registerHandler.accept(new RestClusterRerouteAction(settings, restController, settingsFilter)); + registerHandler.accept(new RestClusterSearchShardsAction(settings, restController)); + registerHandler.accept(new RestPendingClusterTasksAction(settings, restController)); + registerHandler.accept(new RestPutRepositoryAction(settings, restController)); + registerHandler.accept(new RestGetRepositoriesAction(settings, restController, settingsFilter)); + registerHandler.accept(new RestDeleteRepositoryAction(settings, restController)); + registerHandler.accept(new RestVerifyRepositoryAction(settings, restController)); + registerHandler.accept(new RestGetSnapshotsAction(settings, restController)); + registerHandler.accept(new RestCreateSnapshotAction(settings, restController)); + registerHandler.accept(new RestRestoreSnapshotAction(settings, restController)); + registerHandler.accept(new RestDeleteSnapshotAction(settings, restController)); + registerHandler.accept(new RestSnapshotsStatusAction(settings, restController)); + + registerHandler.accept(new RestGetAllAliasesAction(settings, restController)); + registerHandler.accept(new RestGetAllMappingsAction(settings, restController)); + registerHandler.accept(new RestGetAllSettingsAction(settings, restController, indexScopedSettings, settingsFilter)); + registerHandler.accept(new RestTypesExistsAction(settings, restController)); + registerHandler.accept(new RestGetIndicesAction(settings, restController, indexScopedSettings, settingsFilter)); + registerHandler.accept(new RestIndicesStatsAction(settings, restController)); + registerHandler.accept(new RestIndicesSegmentsAction(settings, restController)); + registerHandler.accept(new RestIndicesShardStoresAction(settings, restController)); + registerHandler.accept(new RestGetAliasesAction(settings, restController)); + registerHandler.accept(new RestIndexDeleteAliasesAction(settings, restController)); + registerHandler.accept(new RestIndexPutAliasAction(settings, restController)); + registerHandler.accept(new RestIndicesAliasesAction(settings, restController)); + registerHandler.accept(new RestCreateIndexAction(settings, restController)); + registerHandler.accept(new RestShrinkIndexAction(settings, 
restController)); + registerHandler.accept(new RestRolloverIndexAction(settings, restController)); + registerHandler.accept(new RestDeleteIndexAction(settings, restController)); + registerHandler.accept(new RestCloseIndexAction(settings, restController)); + registerHandler.accept(new RestOpenIndexAction(settings, restController)); + + registerHandler.accept(new RestUpdateSettingsAction(settings, restController)); + registerHandler.accept(new RestGetSettingsAction(settings, restController, indexScopedSettings, settingsFilter)); + + registerHandler.accept(new RestAnalyzeAction(settings, restController)); + registerHandler.accept(new RestGetIndexTemplateAction(settings, restController)); + registerHandler.accept(new RestPutIndexTemplateAction(settings, restController)); + registerHandler.accept(new RestDeleteIndexTemplateAction(settings, restController)); + + registerHandler.accept(new RestPutMappingAction(settings, restController)); + registerHandler.accept(new RestGetMappingAction(settings, restController)); + registerHandler.accept(new RestGetFieldMappingAction(settings, restController)); + + registerHandler.accept(new RestRefreshAction(settings, restController)); + registerHandler.accept(new RestFlushAction(settings, restController)); + registerHandler.accept(new RestSyncedFlushAction(settings, restController)); + registerHandler.accept(new RestForceMergeAction(settings, restController)); + registerHandler.accept(new RestUpgradeAction(settings, restController)); + registerHandler.accept(new RestClearIndicesCacheAction(settings, restController)); + + registerHandler.accept(new RestIndexAction(settings, restController)); + registerHandler.accept(new RestGetAction(settings, restController)); + registerHandler.accept(new RestGetSourceAction(settings, restController)); + registerHandler.accept(new RestMultiGetAction(settings, restController)); + registerHandler.accept(new RestDeleteAction(settings, restController)); + registerHandler.accept(new org.elasticsearch.rest.action.document.RestCountAction(settings, restController)); + registerHandler.accept(new RestTermVectorsAction(settings, restController)); + registerHandler.accept(new RestMultiTermVectorsAction(settings, restController)); + registerHandler.accept(new RestBulkAction(settings, restController)); + registerHandler.accept(new RestUpdateAction(settings, restController)); + + registerHandler.accept(new RestSearchAction(settings, restController)); + registerHandler.accept(new RestSearchScrollAction(settings, restController)); + registerHandler.accept(new RestClearScrollAction(settings, restController)); + registerHandler.accept(new RestMultiSearchAction(settings, restController)); + + registerHandler.accept(new RestValidateQueryAction(settings, restController)); + + registerHandler.accept(new RestExplainAction(settings, restController)); + + registerHandler.accept(new RestRecoveryAction(settings, restController)); // Scripts API - registerRestHandler(handlers, RestGetStoredScriptAction.class); - registerRestHandler(handlers, RestPutStoredScriptAction.class); - registerRestHandler(handlers, RestDeleteStoredScriptAction.class); + registerHandler.accept(new RestGetStoredScriptAction(settings, restController)); + registerHandler.accept(new RestPutStoredScriptAction(settings, restController)); + registerHandler.accept(new RestDeleteStoredScriptAction(settings, restController)); - registerRestHandler(handlers, RestFieldStatsAction.class); + registerHandler.accept(new RestFieldStatsAction(settings, restController)); + registerHandler.accept(new 
RestFieldCapabilitiesAction(settings, restController)); // Tasks API - registerRestHandler(handlers, RestListTasksAction.class); - registerRestHandler(handlers, RestGetTaskAction.class); - registerRestHandler(handlers, RestCancelTasksAction.class); + registerHandler.accept(new RestListTasksAction(settings, restController, nodesInCluster)); + registerHandler.accept(new RestGetTaskAction(settings, restController)); + registerHandler.accept(new RestCancelTasksAction(settings, restController, nodesInCluster)); // Ingest API - registerRestHandler(handlers, RestPutPipelineAction.class); - registerRestHandler(handlers, RestGetPipelineAction.class); - registerRestHandler(handlers, RestDeletePipelineAction.class); - registerRestHandler(handlers, RestSimulatePipelineAction.class); + registerHandler.accept(new RestPutPipelineAction(settings, restController)); + registerHandler.accept(new RestGetPipelineAction(settings, restController)); + registerHandler.accept(new RestDeletePipelineAction(settings, restController)); + registerHandler.accept(new RestSimulatePipelineAction(settings, restController)); // CAT API - registerRestHandler(handlers, RestCatAction.class); - registerRestHandler(handlers, RestAllocationAction.class); - registerRestHandler(handlers, RestShardsAction.class); - registerRestHandler(handlers, RestMasterAction.class); - registerRestHandler(handlers, RestNodesAction.class); - registerRestHandler(handlers, RestTasksAction.class); - registerRestHandler(handlers, RestIndicesAction.class); - registerRestHandler(handlers, RestSegmentsAction.class); + registerHandler.accept(new RestAllocationAction(settings, restController)); + registerHandler.accept(new RestShardsAction(settings, restController)); + registerHandler.accept(new RestMasterAction(settings, restController)); + registerHandler.accept(new RestNodesAction(settings, restController)); + registerHandler.accept(new RestTasksAction(settings, restController, nodesInCluster)); + registerHandler.accept(new RestIndicesAction(settings, restController, indexNameExpressionResolver)); + registerHandler.accept(new RestSegmentsAction(settings, restController)); // Fully qualified to prevent interference with rest.action.count.RestCountAction - registerRestHandler(handlers, org.elasticsearch.rest.action.cat.RestCountAction.class); + registerHandler.accept(new org.elasticsearch.rest.action.cat.RestCountAction(settings, restController)); // Fully qualified to prevent interference with rest.action.indices.RestRecoveryAction - registerRestHandler(handlers, org.elasticsearch.rest.action.cat.RestRecoveryAction.class); - registerRestHandler(handlers, RestHealthAction.class); - registerRestHandler(handlers, org.elasticsearch.rest.action.cat.RestPendingClusterTasksAction.class); - registerRestHandler(handlers, RestAliasAction.class); - registerRestHandler(handlers, RestThreadPoolAction.class); - registerRestHandler(handlers, RestPluginsAction.class); - registerRestHandler(handlers, RestFielddataAction.class); - registerRestHandler(handlers, RestNodeAttrsAction.class); - registerRestHandler(handlers, RestRepositoriesAction.class); - registerRestHandler(handlers, RestSnapshotAction.class); - registerRestHandler(handlers, RestTemplatesAction.class); + registerHandler.accept(new org.elasticsearch.rest.action.cat.RestRecoveryAction(settings, restController)); + registerHandler.accept(new RestHealthAction(settings, restController)); + registerHandler.accept(new org.elasticsearch.rest.action.cat.RestPendingClusterTasksAction(settings, restController)); + 
registerHandler.accept(new RestAliasAction(settings, restController)); + registerHandler.accept(new RestThreadPoolAction(settings, restController)); + registerHandler.accept(new RestPluginsAction(settings, restController)); + registerHandler.accept(new RestFielddataAction(settings, restController)); + registerHandler.accept(new RestNodeAttrsAction(settings, restController)); + registerHandler.accept(new RestRepositoriesAction(settings, restController)); + registerHandler.accept(new RestSnapshotAction(settings, restController)); + registerHandler.accept(new RestTemplatesAction(settings, restController)); for (ActionPlugin plugin : actionPlugins) { - for (Class handler : plugin.getRestHandlers()) { - registerRestHandler(handlers, handler); + for (RestHandler handler : plugin.getRestHandlers(settings, restController, clusterSettings, indexScopedSettings, + settingsFilter, indexNameExpressionResolver, nodesInCluster)) { + registerHandler.accept(handler); } } - return handlers; - } - - private static void registerRestHandler(Set> handlers, Class handler) { - if (handlers.contains(handler)) { - throw new IllegalArgumentException("can't register the same [rest_handler] more than once for [" + handler.getName() + "]"); - } - handlers.add(handler); + registerHandler.accept(new RestCatAction(settings, restController, catActions)); } @Override @@ -647,23 +680,6 @@ protected void configure() { bind(supportAction).asEagerSingleton(); } } - - // Bind the RestController which is required (by Node) even if rest isn't enabled. - bind(RestController.class).toInstance(restController); - - // Setup the RestHandlers - if (NetworkModule.HTTP_ENABLED.get(settings)) { - Multibinder restHandlers = Multibinder.newSetBinder(binder(), RestHandler.class); - Multibinder catHandlers = Multibinder.newSetBinder(binder(), AbstractCatAction.class); - for (Class handler : setupRestHandlers(actionPlugins)) { - bind(handler).asEagerSingleton(); - if (AbstractCatAction.class.isAssignableFrom(handler)) { - catHandlers.addBinding().to(handler.asSubclass(AbstractCatAction.class)); - } else { - restHandlers.addBinding().to(handler); - } - } - } } } diff --git a/core/src/main/java/org/elasticsearch/action/ActionRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/ActionRequestBuilder.java index 076d4ae67f6c3..964568fc472fd 100644 --- a/core/src/main/java/org/elasticsearch/action/ActionRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/ActionRequestBuilder.java @@ -19,18 +19,16 @@ package org.elasticsearch.action; -import org.elasticsearch.action.support.PlainListenableActionFuture; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.threadpool.ThreadPool; import java.util.Objects; -public abstract class ActionRequestBuilder> { +public abstract class ActionRequestBuilder> { protected final Action action; protected final Request request; - private final ThreadPool threadPool; protected final ElasticsearchClient client; protected ActionRequestBuilder(ElasticsearchClient client, Action action, Request request) { @@ -38,18 +36,14 @@ protected ActionRequestBuilder(ElasticsearchClient client, Action execute() { - PlainListenableActionFuture future = new PlainListenableActionFuture<>(threadPool); - execute(future); - return future; + public ActionFuture execute() { + return client.execute(action, request); } /** @@ -74,13 +68,6 @@ public Response get(String timeout) { } public void execute(ActionListener listener) { - 
client.execute(action, beforeExecute(request), listener); - } - - /** - * A callback to additionally process the request before its executed - */ - protected Request beforeExecute(Request request) { - return request; + client.execute(action, request, listener); } } diff --git a/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java b/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java index f96dfcf0f7c13..64f63025279c3 100644 --- a/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java +++ b/core/src/main/java/org/elasticsearch/action/DocWriteResponse.java @@ -23,26 +23,43 @@ import org.elasticsearch.action.support.WriteRequest.RefreshPolicy; import org.elasticsearch.action.support.WriteResponse; import org.elasticsearch.action.support.replication.ReplicationResponse; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexSettings; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import java.io.IOException; -import java.net.URI; -import java.net.URISyntaxException; +import java.io.UnsupportedEncodingException; +import java.net.URLEncoder; import java.util.Locale; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; +import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownField; +import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownToken; + /** * A base class for the response of a write operation that involves a single doc */ -public abstract class DocWriteResponse extends ReplicationResponse implements WriteResponse, StatusToXContent { +public abstract class DocWriteResponse extends ReplicationResponse implements WriteResponse, StatusToXContentObject { + + private static final String _SHARDS = "_shards"; + private static final String _INDEX = "_index"; + private static final String _TYPE = "_type"; + private static final String _ID = "_id"; + private static final String _VERSION = "_version"; + private static final String _SEQ_NO = "_seq_no"; + private static final String _PRIMARY_TERM = "_primary_term"; + private static final String RESULT = "result"; + private static final String FORCED_REFRESH = "forced_refresh"; /** * An enum that represents the the results of CRUD operations, primarily used to communicate the type of @@ -100,14 +117,16 @@ public void writeTo(StreamOutput out) throws IOException { private String type; private long version; private long seqNo; + private long primaryTerm; private boolean forcedRefresh; protected Result result; - public DocWriteResponse(ShardId shardId, String type, String id, long seqNo, long version, Result result) { + public DocWriteResponse(ShardId shardId, String type, String id, long seqNo, long primaryTerm, long version, Result result) { this.shardId = shardId; this.type = type; this.id = id; this.seqNo = seqNo; + this.primaryTerm = primaryTerm; this.version = version; this.result = result; } @@ -166,6 +185,15 @@ 
public long getSeqNo() { return seqNo; } + /** + * The primary term for this change. + * + * @return the primary term + */ + public long getPrimaryTerm() { + return primaryTerm; + } + /** * Did this request force a refresh? Requests that set {@link WriteRequest#setRefreshPolicy(RefreshPolicy)} to * {@link RefreshPolicy#IMMEDIATE} will always return true for this. Requests that set it to {@link RefreshPolicy#WAIT_UNTIL} will @@ -181,36 +209,49 @@ public void setForcedRefresh(boolean forcedRefresh) { } /** returns the rest status for this response (based on {@link ShardInfo#status()} */ + @Override public RestStatus status() { return getShardInfo().status(); } /** - * Gets the location of the written document as a string suitable for a {@code Location} header. - * @param routing any routing used in the request. If null the location doesn't include routing information. + * Return the relative URI for the location of the document suitable for use in the {@code Location} header. The use of relative URIs is + * permitted as of HTTP/1.1 (cf. https://tools.ietf.org/html/rfc7231#section-7.1.2). * + * @param routing custom routing or {@code null} if custom routing is not used + * @return the relative URI for the location of the document */ - public String getLocation(@Nullable String routing) throws URISyntaxException { - // Absolute path for the location of the document. This should be allowed as of HTTP/1.1: - // https://tools.ietf.org/html/rfc7231#section-7.1.2 - String index = getIndex(); - String type = getType(); - String id = getId(); - String routingStart = "?routing="; - int bufferSize = 3 + index.length() + type.length() + id.length(); - if (routing != null) { - bufferSize += routingStart.length() + routing.length(); + public String getLocation(@Nullable String routing) { + final String encodedIndex; + final String encodedType; + final String encodedId; + final String encodedRouting; + try { + // encode the path components separately otherwise the path separators will be encoded + encodedIndex = URLEncoder.encode(getIndex(), "UTF-8"); + encodedType = URLEncoder.encode(getType(), "UTF-8"); + encodedId = URLEncoder.encode(getId(), "UTF-8"); + encodedRouting = routing == null ? 
null : URLEncoder.encode(routing, "UTF-8"); + } catch (final UnsupportedEncodingException e) { + throw new AssertionError(e); + } + final String routingStart = "?routing="; + final int bufferSizeExcludingRouting = 3 + encodedIndex.length() + encodedType.length() + encodedId.length(); + final int bufferSize; + if (encodedRouting == null) { + bufferSize = bufferSizeExcludingRouting; + } else { + bufferSize = bufferSizeExcludingRouting + routingStart.length() + encodedRouting.length(); } - StringBuilder location = new StringBuilder(bufferSize); - location.append('/').append(index); - location.append('/').append(type); - location.append('/').append(id); - if (routing != null) { - location.append(routingStart).append(routing); + final StringBuilder location = new StringBuilder(bufferSize); + location.append('/').append(encodedIndex); + location.append('/').append(encodedType); + location.append('/').append(encodedId); + if (encodedRouting != null) { + location.append(routingStart).append(encodedRouting); } - URI uri = new URI(location.toString()); - return uri.toASCIIString(); + return location.toString(); } @Override @@ -220,10 +261,12 @@ public void readFrom(StreamInput in) throws IOException { type = in.readString(); id = in.readString(); version = in.readZLong(); - if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { seqNo = in.readZLong(); + primaryTerm = in.readVLong(); } else { seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + primaryTerm = 0; } forcedRefresh = in.readBoolean(); result = Result.readFrom(in); @@ -236,28 +279,157 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(type); out.writeString(id); out.writeZLong(version); - if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { out.writeZLong(seqNo); + out.writeVLong(primaryTerm); } out.writeBoolean(forcedRefresh); result.writeTo(out); } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + innerToXContent(builder, params); + builder.endObject(); + return builder; + } + + public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { ReplicationResponse.ShardInfo shardInfo = getShardInfo(); - builder.field("_index", shardId.getIndexName()) - .field("_type", type) - .field("_id", id) - .field("_version", version) - .field("result", getResult().getLowercase()); + builder.field(_INDEX, shardId.getIndexName()) + .field(_TYPE, type) + .field(_ID, id) + .field(_VERSION, version) + .field(RESULT, getResult().getLowercase()); if (forcedRefresh) { - builder.field("forced_refresh", forcedRefresh); + builder.field(FORCED_REFRESH, true); } - shardInfo.toXContent(builder, params); + builder.field(_SHARDS, shardInfo); if (getSeqNo() >= 0) { - builder.field("_seq_no", getSeqNo()); + builder.field(_SEQ_NO, getSeqNo()); + builder.field(_PRIMARY_TERM, getPrimaryTerm()); } return builder; } + + /** + * Parse the output of the {@link #innerToXContent(XContentBuilder, Params)} method. + * + * This method is intended to be called by subclasses and must be called multiple times to parse all the information concerning + * {@link DocWriteResponse} objects. 
It always parses the current token, updates the given parsing context accordingly + * if needed and then immediately returns. + */ + protected static void parseInnerToXContent(XContentParser parser, Builder context) throws IOException { + XContentParser.Token token = parser.currentToken(); + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + + String currentFieldName = parser.currentName(); + token = parser.nextToken(); + + if (token.isValue()) { + if (_INDEX.equals(currentFieldName)) { + // index uuid and shard id are unknown and can't be parsed back for now. + context.setShardId(new ShardId(new Index(parser.text(), IndexMetaData.INDEX_UUID_NA_VALUE), -1)); + } else if (_TYPE.equals(currentFieldName)) { + context.setType(parser.text()); + } else if (_ID.equals(currentFieldName)) { + context.setId(parser.text()); + } else if (_VERSION.equals(currentFieldName)) { + context.setVersion(parser.longValue()); + } else if (RESULT.equals(currentFieldName)) { + String result = parser.text(); + for (Result r : Result.values()) { + if (r.getLowercase().equals(result)) { + context.setResult(r); + break; + } + } + } else if (FORCED_REFRESH.equals(currentFieldName)) { + context.setForcedRefresh(parser.booleanValue()); + } else if (_SEQ_NO.equals(currentFieldName)) { + context.setSeqNo(parser.longValue()); + } else if (_PRIMARY_TERM.equals(currentFieldName)) { + context.setPrimaryTerm(parser.longValue()); + } else { + throwUnknownField(currentFieldName, parser.getTokenLocation()); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (_SHARDS.equals(currentFieldName)) { + context.setShardInfo(ShardInfo.fromXContent(parser)); + } else { + throwUnknownField(currentFieldName, parser.getTokenLocation()); + } + } else { + throwUnknownToken(token, parser.getTokenLocation()); + } + } + + /** + * Base class of all {@link DocWriteResponse} builders. These {@link DocWriteResponse.Builder} are used during + * xcontent parsing to temporarily store the parsed values, then the {@link Builder#build()} method is called to + * instantiate the appropriate {@link DocWriteResponse} with the parsed values. 
+ */ + public abstract static class Builder { + + protected ShardId shardId = null; + protected String type = null; + protected String id = null; + protected Long version = null; + protected Result result = null; + protected boolean forcedRefresh; + protected ShardInfo shardInfo = null; + protected Long seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + protected Long primaryTerm = 0L; + + public ShardId getShardId() { + return shardId; + } + + public void setShardId(ShardId shardId) { + this.shardId = shardId; + } + + public String getType() { + return type; + } + + public void setType(String type) { + this.type = type; + } + + public String getId() { + return id; + } + + public void setId(String id) { + this.id = id; + } + + public void setVersion(Long version) { + this.version = version; + } + + public void setResult(Result result) { + this.result = result; + } + + public void setForcedRefresh(boolean forcedRefresh) { + this.forcedRefresh = forcedRefresh; + } + + public void setShardInfo(ShardInfo shardInfo) { + this.shardInfo = shardInfo; + } + + public void setSeqNo(Long seqNo) { + this.seqNo = seqNo; + } + + public void setPrimaryTerm(Long primaryTerm) { + this.primaryTerm = primaryTerm; + } + + public abstract DocWriteResponse build(); + } } diff --git a/core/src/main/java/org/elasticsearch/action/ListenableActionFuture.java b/core/src/main/java/org/elasticsearch/action/ListenableActionFuture.java index 29b5a2a877495..87e4df3bc7975 100644 --- a/core/src/main/java/org/elasticsearch/action/ListenableActionFuture.java +++ b/core/src/main/java/org/elasticsearch/action/ListenableActionFuture.java @@ -29,5 +29,5 @@ public interface ListenableActionFuture extends ActionFuture { /** * Add an action listener to be invoked when a response has received. */ - void addListener(final ActionListener listener); + void addListener(ActionListener listener); } diff --git a/core/src/main/java/org/elasticsearch/action/NotifyOnceListener.java b/core/src/main/java/org/elasticsearch/action/NotifyOnceListener.java new file mode 100644 index 0000000000000..1b717dcc6c05a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/NotifyOnceListener.java @@ -0,0 +1,50 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action; + +import java.util.concurrent.atomic.AtomicBoolean; + +/** + * A listener that ensures that only one of onResponse or onFailure is called. And the method + * that is called is only called once. Subclasses should implement notification logic with + * innerOnResponse and innerOnFailure. 
+ */ +public abstract class NotifyOnceListener implements ActionListener { + + private final AtomicBoolean hasBeenCalled = new AtomicBoolean(false); + + protected abstract void innerOnResponse(Response response); + + protected abstract void innerOnFailure(Exception e); + + @Override + public final void onResponse(Response response) { + if (hasBeenCalled.compareAndSet(false, true)) { + innerOnResponse(response); + } + } + + @Override + public final void onFailure(Exception e) { + if (hasBeenCalled.compareAndSet(false, true)) { + innerOnFailure(e); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/OriginalIndices.java b/core/src/main/java/org/elasticsearch/action/OriginalIndices.java index cc299f544b38f..0642326d2b48e 100644 --- a/core/src/main/java/org/elasticsearch/action/OriginalIndices.java +++ b/core/src/main/java/org/elasticsearch/action/OriginalIndices.java @@ -24,11 +24,15 @@ import org.elasticsearch.common.io.stream.StreamOutput; import java.io.IOException; +import java.util.Arrays; /** * Used to keep track of original indices within internal (e.g. shard level) requests */ -public class OriginalIndices implements IndicesRequest { +public final class OriginalIndices implements IndicesRequest { + + //constant to use when original indices are not applicable and will not be serialized across the wire + public static final OriginalIndices NONE = new OriginalIndices(null, null); private final String[] indices; private final IndicesOptions indicesOptions; @@ -39,7 +43,6 @@ public OriginalIndices(IndicesRequest indicesRequest) { public OriginalIndices(String[] indices, IndicesOptions indicesOptions) { this.indices = indices; - assert indicesOptions != null; this.indicesOptions = indicesOptions; } @@ -57,9 +60,17 @@ public static OriginalIndices readOriginalIndices(StreamInput in) throws IOExcep return new OriginalIndices(in.readStringArray(), IndicesOptions.readIndicesOptions(in)); } - public static void writeOriginalIndices(OriginalIndices originalIndices, StreamOutput out) throws IOException { + assert originalIndices != NONE; out.writeStringArrayNullable(originalIndices.indices); originalIndices.indicesOptions.writeIndicesOptions(out); } + + @Override + public String toString() { + return "OriginalIndices{" + + "indices=" + Arrays.toString(indices) + + ", indicesOptions=" + indicesOptions + + '}'; + } } diff --git a/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java b/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java index 6704f610ec0aa..8c8f263c34db0 100644 --- a/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java +++ b/core/src/main/java/org/elasticsearch/action/TaskOperationFailure.java @@ -88,7 +88,7 @@ public RestStatus getStatus() { return status; } - public Throwable getCause() { + public Exception getCause() { return reason; } @@ -105,7 +105,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (reason != null) { builder.field("reason"); builder.startObject(); - ElasticsearchException.toXContent(builder, params, reason); + ElasticsearchException.generateThrowableXContent(builder, params, reason); builder.endObject(); } return builder; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java index ff09c23207fd7..aea1ee57dca87 100644 --- 
a/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/allocation/ClusterAllocationExplainRequest.java @@ -19,14 +19,11 @@ package org.elasticsearch.action.admin.cluster.allocation; -import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.support.master.MasterNodeRequest; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcher; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ObjectParser; @@ -41,8 +38,7 @@ */ public class ClusterAllocationExplainRequest extends MasterNodeRequest { - private static ObjectParser PARSER = new ObjectParser( - "cluster/allocation/explain"); + private static ObjectParser PARSER = new ObjectParser<>("cluster/allocation/explain"); static { PARSER.declareString(ClusterAllocationExplainRequest::setIndex, new ParseField("index")); PARSER.declareInt(ClusterAllocationExplainRequest::setShard, new ParseField("shard")); @@ -225,12 +221,7 @@ public String toString() { } public static ClusterAllocationExplainRequest parse(XContentParser parser) throws IOException { - ClusterAllocationExplainRequest req = PARSER.parse(parser, new ClusterAllocationExplainRequest(), () -> ParseFieldMatcher.STRICT); - Exception e = req.validate(); - if (e != null) { - throw new ElasticsearchParseException("'index', 'shard', and 'primary' must be specified in allocation explain request", e); - } - return req; + return PARSER.parse(parser, new ClusterAllocationExplainRequest(), null); } @Override @@ -258,8 +249,8 @@ public void writeTo(StreamOutput out) throws IOException { } private void checkVersion(Version version) { - if (version.before(Version.V_5_2_0_UNRELEASED)) { - throw new IllegalArgumentException("cannot explain shards in a mixed-cluster with pre-" + Version.V_5_2_0_UNRELEASED + + if (version.before(Version.V_5_2_0)) { + throw new IllegalArgumentException("cannot explain shards in a mixed-cluster with pre-" + Version.V_5_2_0 + " nodes, node version [" + version + "]"); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthResponse.java index e4a575dcf79d5..a9a2c36970ee4 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/ClusterHealthResponse.java @@ -24,19 +24,19 @@ import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.cluster.health.ClusterIndexHealth; import org.elasticsearch.cluster.health.ClusterStateHealth; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.rest.RestStatus; import java.io.IOException; 
import java.util.Locale; import java.util.Map; -public class ClusterHealthResponse extends ActionResponse implements StatusToXContent { +public class ClusterHealthResponse extends ActionResponse implements StatusToXContentObject { private String clusterName; private int numberOfPendingTasks = 0; private int numberOfInFlightFetch = 0; @@ -200,18 +200,9 @@ public void writeTo(StreamOutput out) throws IOException { taskMaxWaitingTime.writeTo(out); } - @Override public String toString() { - try { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - return "{ \"error\" : \"" + e.getMessage() + "\"}"; - } + return Strings.toString(this); } @Override @@ -240,6 +231,7 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.field(CLUSTER_NAME, getClusterName()); builder.field(STATUS, getStatus().name().toLowerCase(Locale.ROOT)); builder.field(TIMED_OUT, isTimedOut()); @@ -268,6 +260,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } builder.endObject(); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java index 44c604dc8b845..8924f81a86cea 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/health/TransportClusterHealthAction.java @@ -194,14 +194,14 @@ public void onTimeout(TimeValue timeout) { } private boolean validateRequest(final ClusterHealthRequest request, ClusterState clusterState, final int waitFor) { - ClusterHealthResponse response = clusterHealth(request, clusterState, clusterService.numberOfPendingTasks(), - gatewayAllocator.getNumberOfInFlightFetch(), clusterService.getMaxTaskWaitTime()); + ClusterHealthResponse response = clusterHealth(request, clusterState, clusterService.getMasterService().numberOfPendingTasks(), + gatewayAllocator.getNumberOfInFlightFetch(), clusterService.getMasterService().getMaxTaskWaitTime()); return prepareResponse(request, response, clusterState, waitFor); } private ClusterHealthResponse getResponse(final ClusterHealthRequest request, ClusterState clusterState, final int waitFor, boolean timedOut) { - ClusterHealthResponse response = clusterHealth(request, clusterState, clusterService.numberOfPendingTasks(), - gatewayAllocator.getNumberOfInFlightFetch(), clusterService.getMaxTaskWaitTime()); + ClusterHealthResponse response = clusterHealth(request, clusterState, clusterService.getMasterService().numberOfPendingTasks(), + gatewayAllocator.getNumberOfInFlightFetch(), clusterService.getMasterService().getMaxTaskWaitTime()); boolean valid = prepareResponse(request, response, clusterState, waitFor); assert valid || timedOut; // we check for a timeout here since this method might be called from the wait_for_events diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/TransportNodesHotThreadsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/TransportNodesHotThreadsAction.java index da45a3e4027bd..7b43d1c259b0c 100644 --- 
a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/TransportNodesHotThreadsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/hotthreads/TransportNodesHotThreadsAction.java @@ -24,7 +24,6 @@ import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.nodes.BaseNodeRequest; import org.elasticsearch.action.support.nodes.TransportNodesAction; -import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; @@ -82,11 +81,6 @@ protected NodeHotThreads nodeOperation(NodeRequest request) { } } - @Override - protected boolean accumulateExceptions() { - return false; - } - public static class NodeRequest extends BaseNodeRequest { NodesHotThreadsRequest request; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java index c26554b25e06e..afe535601fcfc 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/info/TransportNodesInfoAction.java @@ -29,7 +29,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -76,11 +76,6 @@ protected NodeInfo nodeOperation(NodeInfoRequest nodeRequest) { request.transport(), request.http(), request.plugins(), request.ingest(), request.indices()); } - @Override - protected boolean accumulateExceptions() { - return false; - } - public static class NodeInfoRequest extends BaseNodeRequest { NodesInfoRequest request; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java index b4cef38d28ddd..56c98ed7db02e 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/stats/TransportNodesStatsAction.java @@ -29,7 +29,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -76,11 +76,6 @@ protected NodeStats nodeOperation(NodeStatsRequest nodeStatsRequest) { request.ingest()); } - @Override - protected boolean accumulateExceptions() { - return false; - } - public static class NodeStatsRequest extends BaseNodeRequest { NodesStatsRequest request; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java index ce5d92753a83a..aca1be7adff4c 100644 --- 
a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/cancel/TransportCancelTasksAction.java @@ -19,21 +19,21 @@ package org.elasticsearch.action.admin.cluster.node.tasks.cancel; +import com.carrotsearch.hppc.cursors.ObjectObjectCursor; import org.elasticsearch.ResourceNotFoundException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.FailedNodeException; import org.elasticsearch.action.TaskOperationFailure; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.tasks.TransportTasksAction; -import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.tasks.CancellableTask; import org.elasticsearch.tasks.TaskId; import org.elasticsearch.tasks.TaskInfo; @@ -49,9 +49,7 @@ import java.io.IOException; import java.util.ArrayList; import java.util.List; -import java.util.Set; import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicReference; import java.util.function.Consumer; /** @@ -116,18 +114,16 @@ protected void processTasks(CancelTasksRequest request, Consumer listener) { - final BanLock banLock = new BanLock(nodes -> removeBanOnNodes(cancellableTask, nodes)); - Set childNodes = taskManager.cancel(cancellableTask, request.getReason(), banLock::onTaskFinished); - if (childNodes != null) { - if (childNodes.isEmpty()) { - // The task has no child tasks, so we can return immediately - logger.trace("cancelling task {} with no children", cancellableTask.getId()); - listener.onResponse(cancellableTask.taskInfo(clusterService.localNode().getId(), false)); - } else { - // The task has some child tasks, we need to wait for until ban is set on all nodes - logger.trace("cancelling task {} with children on nodes [{}]", cancellableTask.getId(), childNodes); - String nodeId = clusterService.localNode().getId(); - AtomicInteger responses = new AtomicInteger(childNodes.size()); + String nodeId = clusterService.localNode().getId(); + final boolean canceled; + if (cancellableTask.shouldCancelChildrenOnCancellation()) { + DiscoveryNodes childNodes = clusterService.state().nodes(); + final BanLock banLock = new BanLock(childNodes.getSize(), () -> removeBanOnNodes(cancellableTask, childNodes)); + canceled = taskManager.cancel(cancellableTask, request.getReason(), banLock::onTaskFinished); + if (canceled) { + // /In case the task has some child tasks, we need to wait for until ban is set on all nodes + logger.trace("cancelling task {} on child nodes", cancellableTask.getId()); + AtomicInteger responses = new AtomicInteger(childNodes.getSize()); List failures = new ArrayList<>(); setBanOnNodes(request.getReason(), cancellableTask, childNodes, new ActionListener() { @Override @@ -157,83 +153,68 @@ private void processResponse() { } } }); - } - } else { + } else { + canceled = taskManager.cancel(cancellableTask, request.getReason(), + () -> 
listener.onResponse(cancellableTask.taskInfo(nodeId, false))); + if (canceled) { + logger.trace("task {} doesn't have any children that should be cancelled", cancellableTask.getId()); + } + } + if (canceled == false) { logger.trace("task {} is already cancelled", cancellableTask.getId()); throw new IllegalStateException("task with id " + cancellableTask.getId() + " is already cancelled"); } } - @Override - protected boolean accumulateExceptions() { - return true; - } - - private void setBanOnNodes(String reason, CancellableTask task, Set<String> nodes, ActionListener<Void> listener) { + private void setBanOnNodes(String reason, CancellableTask task, DiscoveryNodes nodes, ActionListener<Void> listener) { sendSetBanRequest(nodes, BanParentTaskRequest.createSetBanParentTaskRequest(new TaskId(clusterService.localNode().getId(), task.getId()), reason), listener); } - private void removeBanOnNodes(CancellableTask task, Set<String> nodes) { + private void removeBanOnNodes(CancellableTask task, DiscoveryNodes nodes) { sendRemoveBanRequest(nodes, BanParentTaskRequest.createRemoveBanParentTaskRequest(new TaskId(clusterService.localNode().getId(), task.getId()))); } - private void sendSetBanRequest(Set<String> nodes, BanParentTaskRequest request, ActionListener<Void> listener) { - ClusterState clusterState = clusterService.state(); - for (String node : nodes) { - DiscoveryNode discoveryNode = clusterState.getNodes().get(node); - if (discoveryNode != null) { - // Check if node still in the cluster - logger.trace("Sending ban for tasks with the parent [{}] to the node [{}], ban [{}]", request.parentTaskId, node, - request.ban); - transportService.sendRequest(discoveryNode, BAN_PARENT_ACTION_NAME, request, - new EmptyTransportResponseHandler(ThreadPool.Names.SAME) { - @Override - public void handleResponse(TransportResponse.Empty response) { - listener.onResponse(null); - } + private void sendSetBanRequest(DiscoveryNodes nodes, BanParentTaskRequest request, ActionListener<Void> listener) { + for (ObjectObjectCursor<String, DiscoveryNode> node : nodes.getNodes()) { + logger.trace("Sending ban for tasks with the parent [{}] to the node [{}], ban [{}]", request.parentTaskId, node.key, + request.ban); + transportService.sendRequest(node.value, BAN_PARENT_ACTION_NAME, request, + new EmptyTransportResponseHandler(ThreadPool.Names.SAME) { + @Override + public void handleResponse(TransportResponse.Empty response) { + listener.onResponse(null); + } - @Override - public void handleException(TransportException exp) { - logger.warn("Cannot send ban for tasks with the parent [{}] to the node [{}]", request.parentTaskId, node); - listener.onFailure(exp); - } - }); - } else { - listener.onResponse(null); - logger.debug("Cannot send ban for tasks with the parent [{}] to the node [{}] - the node no longer in the cluster", - request.parentTaskId, node); - } + @Override + public void handleException(TransportException exp) { + logger.warn("Cannot send ban for tasks with the parent [{}] to the node [{}]", request.parentTaskId, node.key); + listener.onFailure(exp); + } + }); } } - private void sendRemoveBanRequest(Set<String> nodes, BanParentTaskRequest request) { - ClusterState clusterState = clusterService.state(); - for (String node : nodes) { - DiscoveryNode discoveryNode = clusterState.getNodes().get(node); - if (discoveryNode != null) { - // Check if node still in the cluster - logger.debug("Sending remove ban for tasks with the parent [{}] to the node [{}]", request.parentTaskId, node); - transportService.sendRequest(discoveryNode, BAN_PARENT_ACTION_NAME, request, EmptyTransportResponseHandler
- .INSTANCE_SAME); - } else { - logger.debug("Cannot send remove ban request for tasks with the parent [{}] to the node [{}] - the node no longer in " + - "the cluster", request.parentTaskId, node); - } + private void sendRemoveBanRequest(DiscoveryNodes nodes, BanParentTaskRequest request) { + for (ObjectObjectCursor<String, DiscoveryNode> node : nodes.getNodes()) { + logger.debug("Sending remove ban for tasks with the parent [{}] to the node [{}]", request.parentTaskId, node.key); + transportService.sendRequest(node.value, BAN_PARENT_ACTION_NAME, request, EmptyTransportResponseHandler + .INSTANCE_SAME); } } private static class BanLock { - private final Consumer<Set<String>> finish; + private final Runnable finish; private final AtomicInteger counter; - private final AtomicReference<Set<String>> nodes = new AtomicReference<>(); + private final int nodesSize; - public BanLock(Consumer<Set<String>> finish) { + BanLock(int nodesSize, Runnable finish) { counter = new AtomicInteger(0); this.finish = finish; + this.nodesSize = nodesSize; } public void onBanSet() { @@ -242,15 +223,14 @@ public void onBanSet() { } } - public void onTaskFinished(Set<String> nodes) { - this.nodes.set(nodes); - if (counter.addAndGet(nodes.size()) == 0) { + public void onTaskFinished() { + if (counter.addAndGet(nodesSize) == 0) { finish(); } } public void finish() { - finish.accept(nodes.get()); + finish.run(); } } @@ -282,7 +262,7 @@ private BanParentTaskRequest(TaskId parentTaskId) { this.ban = false; } - public BanParentTaskRequest() { + BanParentTaskRequest() { } @Override @@ -322,5 +302,4 @@ public void messageReceived(final BanParentTaskRequest request, final TransportC } } - } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskResponse.java index ffd4b35831421..72f26d2d57692 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/GetTaskResponse.java @@ -23,7 +23,7 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.tasks.TaskResult; @@ -34,7 +34,7 @@ /** * Returns the list of tasks currently running on the nodes */ -public class GetTaskResponse extends ActionResponse implements ToXContent { +public class GetTaskResponse extends ActionResponse implements ToXContentObject { private TaskResult task; public GetTaskResponse() { @@ -65,7 +65,10 @@ public TaskResult getTask() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - return task.innerToXContent(builder, params); + builder.startObject(); + task.innerToXContent(builder, params); + builder.endObject(); + return builder; } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/TransportGetTaskAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/TransportGetTaskAction.java index a6c8941358d2c..30d71a992fd95 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/TransportGetTaskAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/get/TransportGetTaskAction.java @@ -31,7 +31,6
@@ import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.AbstractRunnable; @@ -39,10 +38,10 @@ import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.IndexNotFoundException; -import org.elasticsearch.tasks.TaskResult; import org.elasticsearch.tasks.Task; import org.elasticsearch.tasks.TaskId; import org.elasticsearch.tasks.TaskInfo; +import org.elasticsearch.tasks.TaskResult; import org.elasticsearch.tasks.TaskResultsService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportException; @@ -121,7 +120,6 @@ private void runOnNodeWithTaskIfPossible(Task thisTask, GetTaskRequest request, return; } GetTaskRequest nodeRequest = request.nodeRequest(clusterService.localNode().getId(), thisTask.getId()); - taskManager.registerChildTask(thisTask, node.getId()); transportService.sendRequest(node, GetTaskAction.NAME, nodeRequest, builder.build(), new TransportResponseHandler<GetTaskResponse>() { @Override @@ -251,7 +249,7 @@ void onGetFinishedTaskFromIndex(GetResponse response, ActionListener<GetTaskResponse> listener) { - TaskResult result = TaskResult.PARSER.apply(parser, () -> ParseFieldMatcher.STRICT); + TaskResult result = TaskResult.PARSER.apply(parser, null); listener.onResponse(new GetTaskResponse(result)); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java index b33226b973ba7..a203dd35b47ff 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/ListTasksResponse.java @@ -27,7 +27,7 @@ import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.tasks.TaskId; import org.elasticsearch.tasks.TaskInfo; @@ -43,7 +43,7 @@ /** * Returns the list of tasks currently running on the nodes */ -public class ListTasksResponse extends BaseTasksResponse implements ToXContent { +public class ListTasksResponse extends BaseTasksResponse implements ToXContentObject { private List<TaskInfo> tasks; @@ -161,8 +161,9 @@ public XContentBuilder toXContentGroupedByNode(XContentBuilder builder, Params p } builder.startObject("tasks"); for(TaskInfo task : entry.getValue()) { - builder.field(task.getTaskId().toString()); + builder.startObject(task.getTaskId().toString()); task.toXContent(builder, params); + builder.endObject(); } builder.endObject(); builder.endObject(); @@ -187,7 +188,10 @@ public XContentBuilder toXContentGroupedByParents(XContentBuilder builder, Param @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - return toXContentGroupedByParents(builder, params); + builder.startObject(); + toXContentGroupedByParents(builder, params); + builder.endObject(); + return builder; } private void toXContentCommon(XContentBuilder builder, Params params)
throws IOException { @@ -214,6 +218,6 @@ private void toXContentCommon(XContentBuilder builder, Params params) throws IOE @Override public String toString() { - return Strings.toString(this, true); + return Strings.toString(this); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java index b254137163d75..87bf70acede44 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TaskGroup.java @@ -81,7 +81,7 @@ public List getChildTasks() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.startObject(); - task.innerToXContent(builder, params); + task.toXContent(builder, params); if (childTasks.isEmpty() == false) { builder.startArray("children"); for (TaskGroup taskGroup : childTasks) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java index 889628e373a8d..eb8a6ad4ca50c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/tasks/list/TransportListTasksAction.java @@ -90,8 +90,4 @@ protected void processTasks(ListTasksRequest request, Consumer operation) super.processTasks(request, operation); } - @Override - protected boolean accumulateExceptions() { - return true; - } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodeUsage.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodeUsage.java new file mode 100644 index 0000000000000..954e64e8caf33 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodeUsage.java @@ -0,0 +1,115 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.admin.cluster.node.usage; + +import org.elasticsearch.action.support.nodes.BaseNodeResponse; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Map; + +public class NodeUsage extends BaseNodeResponse implements ToXContent { + + private long timestamp; + private long sinceTime; + private Map restUsage; + + NodeUsage() { + } + + public static NodeUsage readNodeStats(StreamInput in) throws IOException { + NodeUsage nodeInfo = new NodeUsage(); + nodeInfo.readFrom(in); + return nodeInfo; + } + + /** + * @param node + * the node these statistics were collected from + * @param timestamp + * the timestamp for when these statistics were collected + * @param sinceTime + * the timestamp for when the collection of these statistics + * started + * @param restUsage + * a map containing the counts of the number of times each REST + * endpoint has been called + */ + public NodeUsage(DiscoveryNode node, long timestamp, long sinceTime, Map restUsage) { + super(node); + this.timestamp = timestamp; + this.sinceTime = sinceTime; + this.restUsage = restUsage; + } + + /** + * @return the timestamp for when these statistics were collected + */ + public long getTimestamp() { + return timestamp; + } + + /** + * @return the timestamp for when the collection of these statistics started + */ + public long getSinceTime() { + return sinceTime; + } + + /** + * @return a map containing the counts of the number of times each REST + * endpoint has been called + */ + public Map getRestUsage() { + return restUsage; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.field("since", sinceTime); + if (restUsage != null) { + builder.field("rest_actions"); + builder.map(restUsage); + } + return builder; + } + + @SuppressWarnings("unchecked") + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + timestamp = in.readLong(); + sinceTime = in.readLong(); + restUsage = (Map) in.readGenericValue(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeLong(timestamp); + out.writeLong(sinceTime); + out.writeGenericValue(restUsage); + } + +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageAction.java new file mode 100644 index 0000000000000..358659e5f61f7 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageAction.java @@ -0,0 +1,44 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.admin.cluster.node.usage; + +import org.elasticsearch.action.Action; +import org.elasticsearch.client.ElasticsearchClient; + +public class NodesUsageAction extends Action { + + public static final NodesUsageAction INSTANCE = new NodesUsageAction(); + public static final String NAME = "cluster:monitor/nodes/usage"; + + protected NodesUsageAction() { + super(NAME); + } + + @Override + public NodesUsageRequestBuilder newRequestBuilder(ElasticsearchClient client) { + return new NodesUsageRequestBuilder(client, this); + } + + @Override + public NodesUsageResponse newResponse() { + return new NodesUsageResponse(); + } + +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageRequest.java new file mode 100644 index 0000000000000..c4e80494aed5c --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageRequest.java @@ -0,0 +1,86 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.admin.cluster.node.usage; + +import org.elasticsearch.action.support.nodes.BaseNodesRequest; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; + +import java.io.IOException; + +public class NodesUsageRequest extends BaseNodesRequest { + + private boolean restActions; + + public NodesUsageRequest() { + super(); + } + + /** + * Get usage from nodes based on the nodes ids specified. If none are + * passed, usage for all nodes will be returned. + */ + public NodesUsageRequest(String... nodesIds) { + super(nodesIds); + } + + /** + * Sets all the request flags. + */ + public NodesUsageRequest all() { + this.restActions = true; + return this; + } + + /** + * Clears all the request flags. + */ + public NodesUsageRequest clear() { + this.restActions = false; + return this; + } + + /** + * Should the node rest actions usage statistics be returned. + */ + public boolean restActions() { + return this.restActions; + } + + /** + * Should the node rest actions usage statistics be returned. 
+ */ + public NodesUsageRequest restActions(boolean restActions) { + this.restActions = restActions; + return this; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + this.restActions = in.readBoolean(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeBoolean(restActions); + } +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageRequestBuilder.java new file mode 100644 index 0000000000000..76d14556b9c4a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageRequestBuilder.java @@ -0,0 +1,34 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.admin.cluster.node.usage; + +import org.elasticsearch.action.Action; +import org.elasticsearch.action.support.nodes.NodesOperationRequestBuilder; +import org.elasticsearch.client.ElasticsearchClient; + +public class NodesUsageRequestBuilder + extends NodesOperationRequestBuilder { + + public NodesUsageRequestBuilder(ElasticsearchClient client, + Action action) { + super(client, action, new NodesUsageRequest()); + } + +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageResponse.java new file mode 100644 index 0000000000000..ff88145021c73 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/NodesUsageResponse.java @@ -0,0 +1,85 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.admin.cluster.node.usage; + +import org.elasticsearch.action.FailedNodeException; +import org.elasticsearch.action.support.nodes.BaseNodesResponse; +import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentFactory; + +import java.io.IOException; +import java.util.List; + +/** + * The response for the nodes usage api which contains the individual usage + * statistics for all nodes queried. + */ +public class NodesUsageResponse extends BaseNodesResponse<NodeUsage> implements ToXContent { + + NodesUsageResponse() { + } + + public NodesUsageResponse(ClusterName clusterName, List<NodeUsage> nodes, List<FailedNodeException> failures) { + super(clusterName, nodes, failures); + } + + @Override + protected List<NodeUsage> readNodesFrom(StreamInput in) throws IOException { + return in.readList(NodeUsage::readNodeStats); + } + + @Override + protected void writeNodesTo(StreamOutput out, List<NodeUsage> nodes) throws IOException { + out.writeStreamableList(nodes); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject("nodes"); + for (NodeUsage nodeUsage : getNodes()) { + builder.startObject(nodeUsage.getNode().getId()); + builder.field("timestamp", nodeUsage.getTimestamp()); + nodeUsage.toXContent(builder, params); + + builder.endObject(); + } + builder.endObject(); + + return builder; + } + + @Override + public String toString() { + try { + XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); + builder.startObject(); + toXContent(builder, EMPTY_PARAMS); + builder.endObject(); + return builder.string(); + } catch (IOException e) { + return "{ \"error\" : \"" + e.getMessage() + "\"}"; + } + } + +} \ No newline at end of file diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/TransportNodesUsageAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/TransportNodesUsageAction.java new file mode 100644 index 0000000000000..c87e0b9942d0a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/node/usage/TransportNodesUsageAction.java @@ -0,0 +1,99 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License.
+ */ + +package org.elasticsearch.action.admin.cluster.node.usage; + +import org.elasticsearch.action.FailedNodeException; +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.nodes.BaseNodeRequest; +import org.elasticsearch.action.support.nodes.TransportNodesAction; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; +import org.elasticsearch.usage.UsageService; + +import java.io.IOException; +import java.util.List; + +public class TransportNodesUsageAction + extends TransportNodesAction<NodesUsageRequest, NodesUsageResponse, TransportNodesUsageAction.NodeUsageRequest, NodeUsage> { + + private UsageService usageService; + + @Inject + public TransportNodesUsageAction(Settings settings, ThreadPool threadPool, ClusterService clusterService, + TransportService transportService, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver, UsageService usageService) { + super(settings, NodesUsageAction.NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, + NodesUsageRequest::new, NodeUsageRequest::new, ThreadPool.Names.MANAGEMENT, NodeUsage.class); + this.usageService = usageService; + } + + @Override + protected NodesUsageResponse newResponse(NodesUsageRequest request, List<NodeUsage> responses, List<FailedNodeException> failures) { + return new NodesUsageResponse(clusterService.getClusterName(), responses, failures); + } + + @Override + protected NodeUsageRequest newNodeRequest(String nodeId, NodesUsageRequest request) { + return new NodeUsageRequest(nodeId, request); + } + + @Override + protected NodeUsage newNodeResponse() { + return new NodeUsage(); + } + + @Override + protected NodeUsage nodeOperation(NodeUsageRequest nodeUsageRequest) { + NodesUsageRequest request = nodeUsageRequest.request; + return usageService.getUsageStats(clusterService.localNode(), request.restActions()); + } + + public static class NodeUsageRequest extends BaseNodeRequest { + + NodesUsageRequest request; + + public NodeUsageRequest() { + } + + NodeUsageRequest(String nodeId, NodesUsageRequest request) { + super(nodeId); + this.request = request; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + request = new NodesUsageRequest(); + request.readFrom(in); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + request.writeTo(out); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoAction.java new file mode 100644 index 0000000000000..aa546c7dffd26 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoAction.java @@ -0,0 +1,43 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License.
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.admin.cluster.remote; + +import org.elasticsearch.action.Action; +import org.elasticsearch.client.ElasticsearchClient; + +public final class RemoteInfoAction extends Action { + + public static final String NAME = "cluster:monitor/remote/info"; + public static final RemoteInfoAction INSTANCE = new RemoteInfoAction(); + + public RemoteInfoAction() { + super(NAME); + } + + @Override + public RemoteInfoRequestBuilder newRequestBuilder(ElasticsearchClient client) { + return new RemoteInfoRequestBuilder(client, INSTANCE); + } + + @Override + public RemoteInfoResponse newResponse() { + return new RemoteInfoResponse(); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java new file mode 100644 index 0000000000000..6e41f145b65e7 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequest.java @@ -0,0 +1,32 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.admin.cluster.remote; + +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionRequestValidationException; + +public final class RemoteInfoRequest extends ActionRequest { + + @Override + public ActionRequestValidationException validate() { + return null; + } + +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequestBuilder.java new file mode 100644 index 0000000000000..f46f5ecd2d3ca --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoRequestBuilder.java @@ -0,0 +1,30 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.admin.cluster.remote; + +import org.elasticsearch.action.ActionRequestBuilder; +import org.elasticsearch.client.ElasticsearchClient; + +public final class RemoteInfoRequestBuilder extends ActionRequestBuilder { + + public RemoteInfoRequestBuilder(ElasticsearchClient client, RemoteInfoAction action) { + super(client, action, new RemoteInfoRequest()); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoResponse.java new file mode 100644 index 0000000000000..8e9360bdb1238 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/RemoteInfoResponse.java @@ -0,0 +1,67 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.admin.cluster.remote; + +import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.transport.RemoteConnectionInfo; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.List; + +public final class RemoteInfoResponse extends ActionResponse implements ToXContentObject { + + private List infos; + + RemoteInfoResponse() { + } + + RemoteInfoResponse(Collection infos) { + this.infos = Collections.unmodifiableList(new ArrayList<>(infos)); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeList(infos); + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + infos = in.readList(RemoteConnectionInfo::new); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + for (RemoteConnectionInfo info : infos) { + info.toXContent(builder, params); + } + builder.endObject(); + return builder; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java new file mode 100644 index 0000000000000..33254a9aed9ab --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/remote/TransportRemoteInfoAction.java @@ -0,0 +1,51 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.admin.cluster.remote; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.transport.RemoteClusterService; +import org.elasticsearch.action.search.SearchTransportService; +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.HandledTransportAction; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; + +public final class TransportRemoteInfoAction extends HandledTransportAction<RemoteInfoRequest, RemoteInfoResponse> { + + private final RemoteClusterService remoteClusterService; + + @Inject + public TransportRemoteInfoAction(Settings settings, ThreadPool threadPool, TransportService transportService, + ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, + SearchTransportService searchTransportService) { + super(settings, RemoteInfoAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, + RemoteInfoRequest::new); + this.remoteClusterService = searchTransportService.getRemoteClusterService(); + } + + @Override + protected void doExecute(RemoteInfoRequest remoteInfoRequest, ActionListener<RemoteInfoResponse> listener) { + remoteClusterService.getRemoteConnectionInfos(ActionListener.wrap(remoteConnectionInfos + -> listener.onResponse(new RemoteInfoResponse(remoteConnectionInfos)), listener::onFailure)); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/GetRepositoriesResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/GetRepositoriesResponse.java index c933156fcb077..6d4cb83934548 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/GetRepositoriesResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/get/GetRepositoriesResponse.java @@ -60,7 +60,7 @@ public List<RepositoryMetaData> repositories() { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); int size = in.readVInt(); - List<RepositoryMetaData> repositoryListBuilder = new ArrayList<>(); + List<RepositoryMetaData> repositoryListBuilder = new ArrayList<>(size); for (int j = 0; j < size; j++) { repositoryListBuilder.add(new RepositoryMetaData( in.readString(), diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java index a06175a598bf2..e60de1e292916 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequest.java @@ -142,11 +142,12 @@ public PutRepositoryRequest settings(Settings.Builder settings) { /** * Sets the repository settings.
* - * @param source repository settings in json, yaml or properties format + * @param source repository settings in json or yaml format + * @param xContentType the content type of the source * @return this request */ - public PutRepositoryRequest settings(String source) { - this.settings = Settings.builder().loadFromSource(source).build(); + public PutRepositoryRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); return this; } @@ -160,7 +161,7 @@ public PutRepositoryRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java index 39cfa6af7f750..21b8e8713a19b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/put/PutRepositoryRequestBuilder.java @@ -22,6 +22,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -89,13 +90,14 @@ public PutRepositoryRequestBuilder setSettings(Settings.Builder settings) { } /** - * Sets the repository settings in Json, Yaml or properties format + * Sets the repository settings in Json or Yaml format * * @param source repository settings + * @param xContentType the content type of the source * @return this builder */ - public PutRepositoryRequestBuilder setSettings(String source) { - request.settings(source); + public PutRepositoryRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryResponse.java index 5de45fafa6df4..27612a3dab24b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/repositories/verify/VerifyRepositoryResponse.java @@ -22,18 +22,18 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentHelper; import java.io.IOException; /** * Unregister repository response */ -public class VerifyRepositoryResponse extends ActionResponse implements ToXContent { +public class VerifyRepositoryResponse extends ActionResponse implements ToXContentObject 
{ private DiscoveryNode[] nodes; @@ -83,6 +83,7 @@ static final class Fields { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startObject(Fields.NODES); for (DiscoveryNode node : nodes) { builder.startObject(node.getId()); @@ -90,11 +91,12 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.endObject(); } builder.endObject(); + builder.endObject(); return builder; } @Override public String toString() { - return XContentHelper.toString(this); + return Strings.toString(this); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java index 16310b58cbc5b..efd27d1a38f37 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequest.java @@ -51,7 +51,7 @@ public ClusterUpdateSettingsRequest() { @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (transientSettings.getAsMap().isEmpty() && persistentSettings.getAsMap().isEmpty()) { + if (transientSettings.isEmpty() && persistentSettings.isEmpty()) { validationException = addValidationError("no settings to update", validationException); } return validationException; @@ -84,8 +84,8 @@ public ClusterUpdateSettingsRequest transientSettings(Settings.Builder settings) /** * Sets the source containing the transient settings to be updated. They will not survive a full cluster restart */ - public ClusterUpdateSettingsRequest transientSettings(String source) { - this.transientSettings = Settings.builder().loadFromSource(source).build(); + public ClusterUpdateSettingsRequest transientSettings(String source, XContentType xContentType) { + this.transientSettings = Settings.builder().loadFromSource(source, xContentType).build(); return this; } @@ -97,7 +97,7 @@ public ClusterUpdateSettingsRequest transientSettings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - transientSettings(builder.string()); + transientSettings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -123,8 +123,8 @@ public ClusterUpdateSettingsRequest persistentSettings(Settings.Builder settings /** * Sets the source containing the persistent settings to be updated. 
They will get applied cross restarts */ - public ClusterUpdateSettingsRequest persistentSettings(String source) { - this.persistentSettings = Settings.builder().loadFromSource(source).build(); + public ClusterUpdateSettingsRequest persistentSettings(String source, XContentType xContentType) { + this.persistentSettings = Settings.builder().loadFromSource(source, xContentType).build(); return this; } @@ -136,7 +136,7 @@ public ClusterUpdateSettingsRequest persistentSettings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - persistentSettings(builder.string()); + persistentSettings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java index f0492edfeb19f..6d58c989a8f32 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/ClusterUpdateSettingsRequestBuilder.java @@ -22,6 +22,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -53,8 +54,8 @@ public ClusterUpdateSettingsRequestBuilder setTransientSettings(Settings.Builder /** * Sets the source containing the transient settings to be updated. They will not survive a full cluster restart */ - public ClusterUpdateSettingsRequestBuilder setTransientSettings(String settings) { - request.transientSettings(settings); + public ClusterUpdateSettingsRequestBuilder setTransientSettings(String settings, XContentType xContentType) { + request.transientSettings(settings, xContentType); return this; } @@ -85,8 +86,8 @@ public ClusterUpdateSettingsRequestBuilder setPersistentSettings(Settings.Builde /** * Sets the source containing the persistent settings to be updated. 
They will get applied cross restarts */ - public ClusterUpdateSettingsRequestBuilder setPersistentSettings(String settings) { - request.persistentSettings(settings); + public ClusterUpdateSettingsRequestBuilder setPersistentSettings(String settings, XContentType xContentType) { + request.persistentSettings(settings, xContentType); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java index 575fbcd3b9827..e9fec716a90c7 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/SettingsUpdater.java @@ -67,12 +67,20 @@ synchronized ClusterState updateSettings(final ClusterState currentState, Settin .transientSettings(transientSettings.build()); ClusterBlocks.Builder blocks = ClusterBlocks.builder().blocks(currentState.blocks()); - boolean updatedReadOnly = MetaData.SETTING_READ_ONLY_SETTING.get(metaData.persistentSettings()) || MetaData.SETTING_READ_ONLY_SETTING.get(metaData.transientSettings()); + boolean updatedReadOnly = MetaData.SETTING_READ_ONLY_SETTING.get(metaData.persistentSettings()) + || MetaData.SETTING_READ_ONLY_SETTING.get(metaData.transientSettings()); if (updatedReadOnly) { blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK); } else { blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_BLOCK); } + boolean updatedReadOnlyAllowDelete = MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.persistentSettings()) + || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.get(metaData.transientSettings()); + if (updatedReadOnlyAllowDelete) { + blocks.addGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK); + } else { + blocks.removeGlobalBlock(MetaData.CLUSTER_READ_ONLY_ALLOW_DELETE_BLOCK); + } ClusterState build = builder(currentState).metaData(metaData).blocks(blocks).build(); Settings settings = build.metaData().settings(); // now we try to apply things and if they are invalid we fail diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java index ab6cdb94e1f98..dae55b2fc048a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/settings/TransportClusterUpdateSettingsAction.java @@ -67,12 +67,15 @@ protected String executor() { @Override protected ClusterBlockException checkBlock(ClusterUpdateSettingsRequest request, ClusterState state) { // allow for dedicated changes to the metadata blocks, so we don't block those to allow to "re-enable" it - if ((request.transientSettings().getAsMap().isEmpty() && - request.persistentSettings().getAsMap().size() == 1 && - MetaData.SETTING_READ_ONLY_SETTING.exists(request.persistentSettings())) || - (request.persistentSettings().getAsMap().isEmpty() && request.transientSettings().getAsMap().size() == 1 && - MetaData.SETTING_READ_ONLY_SETTING.exists(request.transientSettings()))) { - return null; + if (request.transientSettings().size() + request.persistentSettings().size() == 1) { + // only one setting + if (MetaData.SETTING_READ_ONLY_SETTING.exists(request.persistentSettings()) + || MetaData.SETTING_READ_ONLY_SETTING.exists(request.transientSettings()) + || 
MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.exists(request.transientSettings()) + || MetaData.SETTING_READ_ONLY_ALLOW_DELETE_SETTING.exists(request.persistentSettings())) { + // one of the settings above as the only setting in the request means - resetting the block! + return null; + } } return state.blocks().globalBlockedException(ClusterBlockLevel.METADATA_WRITE); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsGroup.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsGroup.java index 473d31754eb96..79a014ebda71d 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsGroup.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsGroup.java @@ -38,7 +38,7 @@ private ClusterSearchShardsGroup() { } - ClusterSearchShardsGroup(ShardId shardId, ShardRouting[] shards) { + public ClusterSearchShardsGroup(ShardId shardId, ShardRouting[] shards) { this.shardId = shardId; this.shards = shards; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java index 36d63bbcebea3..df38690b790a4 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsRequest.java @@ -134,7 +134,7 @@ public void readFrom(StreamInput in) throws IOException { routing = in.readOptionalString(); preference = in.readOptionalString(); - if (in.getVersion().onOrBefore(Version.V_5_1_1_UNRELEASED)) { + if (in.getVersion().onOrBefore(Version.V_5_1_1)) { //types in.readStringArray(); } @@ -153,7 +153,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(routing); out.writeOptionalString(preference); - if (out.getVersion().onOrBefore(Version.V_5_1_1_UNRELEASED)) { + if (out.getVersion().onOrBefore(Version.V_5_1_1)) { //types out.writeStringArray(Strings.EMPTY_ARRAY); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsResponse.java index 6f9a2ae55b1c4..b5b28e2b8f79a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/ClusterSearchShardsResponse.java @@ -24,15 +24,16 @@ import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.search.internal.AliasFilter; import java.io.IOException; +import java.util.Arrays; import java.util.HashMap; import java.util.Map; -public class ClusterSearchShardsResponse extends ActionResponse implements ToXContent { +public class ClusterSearchShardsResponse extends ActionResponse implements ToXContentObject { private ClusterSearchShardsGroup[] groups; private DiscoveryNode[] nodes; @@ -42,7 +43,8 @@ public ClusterSearchShardsResponse() { } - ClusterSearchShardsResponse(ClusterSearchShardsGroup[] groups, DiscoveryNode[] nodes, Map indicesAndFilters) 
{ + public ClusterSearchShardsResponse(ClusterSearchShardsGroup[] groups, DiscoveryNode[] nodes, + Map indicesAndFilters) { this.groups = groups; this.nodes = nodes; this.indicesAndFilters = indicesAndFilters; @@ -71,7 +73,7 @@ public void readFrom(StreamInput in) throws IOException { for (int i = 0; i < nodes.length; i++) { nodes[i] = new DiscoveryNode(in); } - if (in.getVersion().onOrAfter(Version.V_5_1_1_UNRELEASED)) { + if (in.getVersion().onOrAfter(Version.V_5_1_1)) { int size = in.readVInt(); indicesAndFilters = new HashMap<>(); for (int i = 0; i < size; i++) { @@ -93,7 +95,7 @@ public void writeTo(StreamOutput out) throws IOException { for (DiscoveryNode node : nodes) { node.writeTo(out); } - if (out.getVersion().onOrAfter(Version.V_5_1_1_UNRELEASED)) { + if (out.getVersion().onOrAfter(Version.V_5_1_1)) { out.writeVInt(indicesAndFilters.size()); for (Map.Entry entry : indicesAndFilters.entrySet()) { out.writeString(entry.getKey()); @@ -104,6 +106,7 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startObject("nodes"); for (DiscoveryNode node : nodes) { node.toXContent(builder, params); @@ -115,10 +118,14 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws String index = entry.getKey(); builder.startObject(index); AliasFilter aliasFilter = entry.getValue(); - if (aliasFilter.getAliases().length > 0) { - builder.array("aliases", aliasFilter.getAliases()); - builder.field("filter"); - aliasFilter.getQueryBuilder().toXContent(builder, params); + String[] aliases = aliasFilter.getAliases(); + if (aliases.length > 0) { + Arrays.sort(aliases); // we want consistent ordering here and these values might be generated from a set / map + builder.array("aliases", aliases); + if (aliasFilter.getQueryBuilder() != null) { // might be null if we include non-filtering aliases + builder.field("filter"); + aliasFilter.getQueryBuilder().toXContent(builder, params); + } } builder.endObject(); } @@ -129,7 +136,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws group.toXContent(builder, params); } builder.endArray(); + builder.endObject(); return builder; } - } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java index 01aafc0b0a940..20ed69ae5a92f 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/shards/TransportClusterSearchShardsAction.java @@ -83,12 +83,14 @@ protected void masterOperation(final ClusterSearchShardsRequest request, final C Map> routingMap = indexNameExpressionResolver.resolveSearchRouting(state, request.routing(), request.indices()); Map indicesAndFilters = new HashMap<>(); for (String index : concreteIndices) { - AliasFilter aliasFilter = indicesService.buildAliasFilter(clusterState, index, request.indices()); - indicesAndFilters.put(index, aliasFilter); + final AliasFilter aliasFilter = indicesService.buildAliasFilter(clusterState, index, request.indices()); + final String[] aliases = indexNameExpressionResolver.indexAliases(clusterState, index, aliasMetadata -> true, true, + request.indices()); + indicesAndFilters.put(index, new AliasFilter(aliasFilter.getQueryBuilder(), 
aliases)); } Set nodeIds = new HashSet<>(); - GroupShardsIterator groupShardsIterator = clusterService.operationRouting().searchShards(clusterState, concreteIndices, + GroupShardsIterator groupShardsIterator = clusterService.operationRouting().searchShards(clusterState, concreteIndices, routingMap, request.preference()); ShardRouting shard; ClusterSearchShardsGroup[] groupResponses = new ClusterSearchShardsGroup[groupShardsIterator.size()]; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java index 9cbc1b6563242..ae7647af577e3 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequest.java @@ -42,7 +42,7 @@ import static org.elasticsearch.common.settings.Settings.readSettingsFromStream; import static org.elasticsearch.common.settings.Settings.writeSettingsToStream; import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; +import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeBooleanValue; /** * Create snapshot request @@ -288,15 +288,16 @@ public CreateSnapshotRequest settings(Settings.Builder settings) { } /** - * Sets repository-specific snapshot settings in JSON, YAML or properties format + * Sets repository-specific snapshot settings in JSON or YAML format *
<p>
* See repository documentation for more information. * * @param source repository-specific snapshot settings + * @param xContentType the content type of the source * @return this request */ - public CreateSnapshotRequest settings(String source) { - this.settings = Settings.builder().loadFromSource(source).build(); + public CreateSnapshotRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); return this; } @@ -312,7 +313,7 @@ public CreateSnapshotRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -366,14 +367,14 @@ public CreateSnapshotRequest source(Map source) { throw new IllegalArgumentException("malformed indices section, should be an array of strings"); } } else if (name.equals("partial")) { - partial(lenientNodeBooleanValue(entry.getValue())); + partial(nodeBooleanValue(entry.getValue(), "partial")); } else if (name.equals("settings")) { if (!(entry.getValue() instanceof Map)) { throw new IllegalArgumentException("malformed settings section, should indices an inner object"); } settings((Map) entry.getValue()); } else if (name.equals("include_global_state")) { - includeGlobalState = lenientNodeBooleanValue(entry.getValue()); + includeGlobalState = nodeBooleanValue(entry.getValue(), "include_global_state"); } } indicesOptions(IndicesOptions.fromMap((Map) source, IndicesOptions.lenientExpandOpen())); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestBuilder.java index ebdd206b5c3e8..4022d0497c018 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotRequestBuilder.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -141,15 +142,16 @@ public CreateSnapshotRequestBuilder setSettings(Settings.Builder settings) { } /** - * Sets repository-specific snapshot settings in YAML, JSON or properties format + * Sets repository-specific snapshot settings in YAML or JSON format *
<p>
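(Illustrative only, not taken from the patch: a minimal sketch of the two-argument setSettings variant documented in this javadoc, assuming the usual org.elasticsearch imports and a transport Client named "client"; the repository, snapshot, index and setting names are invented.)

    // the content type of the settings source is now passed explicitly instead of being guessed
    CreateSnapshotResponse response = client.admin().cluster()
            .prepareCreateSnapshot("my_backup", "snapshot_1")
            .setIndices("index_1")
            .setWaitForCompletion(true)
            .setSettings("{\"custom_setting\": true}", XContentType.JSON)
            .get();

The same two-argument pattern applies to RestoreSnapshotRequestBuilder.setSettings and setIndexSettings further down in this patch.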
* See repository documentation for more information. * * @param source repository-specific snapshot settings + * @param xContentType the content type of the source * @return this builder */ - public CreateSnapshotRequestBuilder setSettings(String source) { - request.settings(source); + public CreateSnapshotRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotResponse.java index efc2fbeb5b580..1f9f77f9ed3df 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/create/CreateSnapshotResponse.java @@ -23,7 +23,7 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.snapshots.SnapshotInfo; @@ -33,7 +33,7 @@ /** * Create snapshot response */ -public class CreateSnapshotResponse extends ActionResponse implements ToXContent { +public class CreateSnapshotResponse extends ActionResponse implements ToXContentObject { @Nullable private SnapshotInfo snapshotInfo; @@ -83,12 +83,14 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); if (snapshotInfo != null) { builder.field("snapshot"); snapshotInfo.toXContent(builder, params); } else { builder.field("accepted", true); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java index fd2c97ed5d43c..e90f9e578ce37 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequest.java @@ -28,6 +28,7 @@ import java.io.IOException; import static org.elasticsearch.action.ValidateActions.addValidationError; +import static org.elasticsearch.snapshots.SnapshotInfo.VERBOSE_INTRODUCED; /** * Get snapshot request @@ -43,6 +44,8 @@ public class GetSnapshotsRequest extends MasterNodeRequest private boolean ignoreUnavailable; + private boolean verbose = true; + public GetSnapshotsRequest() { } @@ -123,6 +126,7 @@ public GetSnapshotsRequest ignoreUnavailable(boolean ignoreUnavailable) { this.ignoreUnavailable = ignoreUnavailable; return this; } + /** * @return Whether snapshots should be ignored when unavailable (corrupt or temporarily not fetchable) */ @@ -130,12 +134,36 @@ public boolean ignoreUnavailable() { return ignoreUnavailable; } + /** + * Set to {@code false} to only show the snapshot names and the indices they contain. 
+ * This is useful when the snapshots belong to a cloud-based repository where each + * blob read is a concern (cost wise and performance wise), as the snapshot names and + * indices they contain can be retrieved from a single index blob in the repository, + * whereas the rest of the information requires reading a snapshot metadata file for + * each snapshot requested. Defaults to {@code true}, which returns all information + * about each requested snapshot. + */ + public GetSnapshotsRequest verbose(boolean verbose) { + this.verbose = verbose; + return this; + } + + /** + * Returns whether the request will return a verbose response. + */ + public boolean verbose() { + return verbose; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); repository = in.readString(); snapshots = in.readStringArray(); ignoreUnavailable = in.readBoolean(); + if (in.getVersion().onOrAfter(VERBOSE_INTRODUCED)) { + verbose = in.readBoolean(); + } } @Override @@ -144,5 +172,8 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(repository); out.writeStringArray(snapshots); out.writeBoolean(ignoreUnavailable); + if (out.getVersion().onOrAfter(VERBOSE_INTRODUCED)) { + out.writeBoolean(verbose); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java index 3b0ac47c69f9f..2115bd0bc3b81 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsRequestBuilder.java @@ -96,4 +96,18 @@ public GetSnapshotsRequestBuilder setIgnoreUnavailable(boolean ignoreUnavailable return this; } + /** + * Set to {@code false} to only show the snapshot names and the indices they contain. + * This is useful when the snapshots belong to a cloud-based repository where each + * blob read is a concern (cost wise and performance wise), as the snapshot names and + * indices they contain can be retrieved from a single index blob in the repository, + * whereas the rest of the information requires reading a snapshot metadata file for + * each snapshot requested. Defaults to {@code true}, which returns all information + * about each requested snapshot. 
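(Illustrative only, not taken from the patch: a sketch of the verbose flag described in the surrounding javadoc, assuming a transport Client named "client" and a hypothetical repository called "my_backup".)

    // verbose(false) returns only the snapshot names and the indices they contain,
    // avoiding one metadata blob read per requested snapshot
    GetSnapshotsResponse response = client.admin().cluster()
            .prepareGetSnapshots("my_backup")
            .setSnapshots("_all")
            .setVerbose(false)
            .get();
    for (SnapshotInfo info : response.getSnapshots()) {
        System.out.println(info.snapshotId().getName() + " -> " + info.indices());
    }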
+ */ + public GetSnapshotsRequestBuilder setVerbose(boolean verbose) { + request.verbose(verbose); + return this; + } + } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsResponse.java index 924f5a90d4256..0d1e5eda7f2d2 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/GetSnapshotsResponse.java @@ -23,6 +23,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.snapshots.SnapshotInfo; @@ -34,7 +35,7 @@ /** * Get snapshots response */ -public class GetSnapshotsResponse extends ActionResponse implements ToXContent { +public class GetSnapshotsResponse extends ActionResponse implements ToXContentObject { private List snapshots = Collections.emptyList(); @@ -58,7 +59,7 @@ public List getSnapshots() { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); int size = in.readVInt(); - List builder = new ArrayList<>(); + List builder = new ArrayList<>(size); for (int i = 0; i < size; i++) { builder.add(new SnapshotInfo(in)); } @@ -76,11 +77,13 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException { + builder.startObject(); builder.startArray("snapshots"); for (SnapshotInfo snapshotInfo : snapshots) { snapshotInfo.toXContent(builder, params); } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java index 573bb0ea26355..eec218a4119ba 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/get/TransportGetSnapshotsAction.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.cluster.snapshots.get; +import org.apache.lucene.util.CollectionUtil; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.master.TransportMasterNodeAction; @@ -30,6 +31,8 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.regex.Regex; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.repositories.IndexId; +import org.elasticsearch.repositories.RepositoryData; import org.elasticsearch.snapshots.SnapshotId; import org.elasticsearch.snapshots.SnapshotInfo; import org.elasticsearch.snapshots.SnapshotMissingException; @@ -38,11 +41,13 @@ import org.elasticsearch.transport.TransportService; import java.util.ArrayList; +import java.util.Collections; import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; import java.util.Set; +import java.util.stream.Collectors; /** * Transport Action for get snapshots operation @@ -75,30 +80,35 @@ protected ClusterBlockException checkBlock(GetSnapshotsRequest request, 
ClusterS } @Override - protected void masterOperation(final GetSnapshotsRequest request, ClusterState state, + protected void masterOperation(final GetSnapshotsRequest request, final ClusterState state, final ActionListener listener) { try { final String repository = request.repository(); - List snapshotInfoBuilder = new ArrayList<>(); final Map allSnapshotIds = new HashMap<>(); - final List currentSnapshotIds = new ArrayList<>(); + final List currentSnapshots = new ArrayList<>(); for (SnapshotInfo snapshotInfo : snapshotsService.currentSnapshots(repository)) { SnapshotId snapshotId = snapshotInfo.snapshotId(); allSnapshotIds.put(snapshotId.getName(), snapshotId); - currentSnapshotIds.add(snapshotId); + currentSnapshots.add(snapshotInfo); } + + final RepositoryData repositoryData; if (isCurrentSnapshotsOnly(request.snapshots()) == false) { - for (SnapshotId snapshotId : snapshotsService.snapshotIds(repository)) { + repositoryData = snapshotsService.getRepositoryData(repository); + for (SnapshotId snapshotId : repositoryData.getAllSnapshotIds()) { allSnapshotIds.put(snapshotId.getName(), snapshotId); } + } else { + repositoryData = null; } + final Set toResolve = new HashSet<>(); if (isAllSnapshots(request.snapshots())) { toResolve.addAll(allSnapshotIds.values()); } else { for (String snapshotOrPattern : request.snapshots()) { if (GetSnapshotsRequest.CURRENT_SNAPSHOT.equalsIgnoreCase(snapshotOrPattern)) { - toResolve.addAll(currentSnapshotIds); + toResolve.addAll(currentSnapshots.stream().map(SnapshotInfo::snapshotId).collect(Collectors.toList())); } else if (Regex.isSimpleMatchPattern(snapshotOrPattern) == false) { if (allSnapshotIds.containsKey(snapshotOrPattern)) { toResolve.add(allSnapshotIds.get(snapshotOrPattern)); @@ -119,8 +129,23 @@ protected void masterOperation(final GetSnapshotsRequest request, ClusterState s } } - snapshotInfoBuilder.addAll(snapshotsService.snapshots(repository, new ArrayList<>(toResolve), request.ignoreUnavailable())); - listener.onResponse(new GetSnapshotsResponse(snapshotInfoBuilder)); + final List snapshotInfos; + if (request.verbose()) { + final Set incompatibleSnapshots = repositoryData != null ? 
+ new HashSet<>(repositoryData.getIncompatibleSnapshotIds()) : Collections.emptySet(); + snapshotInfos = snapshotsService.snapshots(repository, new ArrayList<>(toResolve), + incompatibleSnapshots, request.ignoreUnavailable()); + } else { + if (repositoryData != null) { + // want non-current snapshots as well, which are found in the repository data + snapshotInfos = buildSimpleSnapshotInfos(toResolve, repositoryData, currentSnapshots); + } else { + // only want current snapshots + snapshotInfos = currentSnapshots.stream().map(SnapshotInfo::basic).collect(Collectors.toList()); + CollectionUtil.timSort(snapshotInfos); + } + } + listener.onResponse(new GetSnapshotsResponse(snapshotInfos)); } catch (Exception e) { listener.onFailure(e); } @@ -133,4 +158,32 @@ private boolean isAllSnapshots(String[] snapshots) { private boolean isCurrentSnapshotsOnly(String[] snapshots) { return (snapshots.length == 1 && GetSnapshotsRequest.CURRENT_SNAPSHOT.equalsIgnoreCase(snapshots[0])); } + + private List buildSimpleSnapshotInfos(final Set toResolve, + final RepositoryData repositoryData, + final List currentSnapshots) { + List snapshotInfos = new ArrayList<>(); + for (SnapshotInfo snapshotInfo : currentSnapshots) { + if (toResolve.remove(snapshotInfo.snapshotId())) { + snapshotInfos.add(snapshotInfo.basic()); + } + } + Map> snapshotsToIndices = new HashMap<>(); + for (IndexId indexId : repositoryData.getIndices().values()) { + for (SnapshotId snapshotId : repositoryData.getSnapshots(indexId)) { + if (toResolve.contains(snapshotId)) { + snapshotsToIndices.computeIfAbsent(snapshotId, (k) -> new ArrayList<>()) + .add(indexId.getName()); + } + } + } + for (Map.Entry> entry : snapshotsToIndices.entrySet()) { + final List indices = entry.getValue(); + CollectionUtil.timSort(indices); + final SnapshotId snapshotId = entry.getKey(); + snapshotInfos.add(new SnapshotInfo(snapshotId, indices, repositoryData.getSnapshotState(snapshotId))); + } + CollectionUtil.timSort(snapshotInfos); + return Collections.unmodifiableList(snapshotInfos); + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java index 641525f00e8bd..7e34cb5a5967e 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequest.java @@ -40,7 +40,7 @@ import static org.elasticsearch.common.settings.Settings.readSettingsFromStream; import static org.elasticsearch.common.settings.Settings.writeSettingsToStream; import static org.elasticsearch.common.settings.Settings.Builder.EMPTY_SETTINGS; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; +import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeBooleanValue; /** * Restore snapshot request @@ -313,15 +313,16 @@ public RestoreSnapshotRequest settings(Settings.Builder settings) { } /** - * Sets repository-specific restore settings in JSON, YAML or properties format + * Sets repository-specific restore settings in JSON or YAML format *
<p>
* See repository documentation for more information. * * @param source repository-specific snapshot settings + * @param xContentType the content type of the source * @return this request */ - public RestoreSnapshotRequest settings(String source) { - this.settings = Settings.builder().loadFromSource(source).build(); + public RestoreSnapshotRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); return this; } @@ -337,7 +338,7 @@ public RestoreSnapshotRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -437,8 +438,8 @@ public RestoreSnapshotRequest indexSettings(Settings.Builder settings) { /** * Sets settings that should be added/changed in all restored indices */ - public RestoreSnapshotRequest indexSettings(String source) { - this.indexSettings = Settings.builder().loadFromSource(source).build(); + public RestoreSnapshotRequest indexSettings(String source, XContentType xContentType) { + this.indexSettings = Settings.builder().loadFromSource(source, xContentType).build(); return this; } @@ -449,7 +450,7 @@ public RestoreSnapshotRequest indexSettings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - indexSettings(builder.string()); + indexSettings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -481,16 +482,16 @@ public RestoreSnapshotRequest source(Map source) { throw new IllegalArgumentException("malformed indices section, should be an array of strings"); } } else if (name.equals("partial")) { - partial(lenientNodeBooleanValue(entry.getValue())); + partial(nodeBooleanValue(entry.getValue(), "partial")); } else if (name.equals("settings")) { if (!(entry.getValue() instanceof Map)) { throw new IllegalArgumentException("malformed settings section"); } settings((Map) entry.getValue()); } else if (name.equals("include_global_state")) { - includeGlobalState = lenientNodeBooleanValue(entry.getValue()); + includeGlobalState = nodeBooleanValue(entry.getValue(), "include_global_state"); } else if (name.equals("include_aliases")) { - includeAliases = lenientNodeBooleanValue(entry.getValue()); + includeAliases = nodeBooleanValue(entry.getValue(), "include_aliases"); } else if (name.equals("rename_pattern")) { if (entry.getValue() instanceof String) { renamePattern((String) entry.getValue()); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestBuilder.java index 661a1a1d018af..8e42ef4dbee29 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotRequestBuilder.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.master.MasterNodeOperationRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import 
java.util.List; import java.util.Map; @@ -153,15 +154,16 @@ public RestoreSnapshotRequestBuilder setSettings(Settings.Builder settings) { } /** - * Sets repository-specific restore settings in JSON, YAML or properties format + * Sets repository-specific restore settings in JSON or YAML format *
<p>
* See repository documentation for more information. * * @param source repository-specific snapshot settings + * @param xContentType the content type of the source * @return this builder */ - public RestoreSnapshotRequestBuilder setSettings(String source) { - request.settings(source); + public RestoreSnapshotRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); return this; } @@ -250,10 +252,11 @@ public RestoreSnapshotRequestBuilder setIndexSettings(Settings.Builder settings) * Sets index settings that should be added or replaced during restore * * @param source index settings + * @param xContentType the content type of the source * @return this builder */ - public RestoreSnapshotRequestBuilder setIndexSettings(String source) { - request.indexSettings(source); + public RestoreSnapshotRequestBuilder setIndexSettings(String source, XContentType xContentType) { + request.indexSettings(source, xContentType); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotResponse.java index 70f4f2aa4f24c..5a02e4bcb1387 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/restore/RestoreSnapshotResponse.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.snapshots.RestoreInfo; @@ -33,7 +34,7 @@ /** * Contains information about restores snapshot */ -public class RestoreSnapshotResponse extends ActionResponse implements ToXContent { +public class RestoreSnapshotResponse extends ActionResponse implements ToXContentObject { @Nullable private RestoreInfo restoreInfo; @@ -75,12 +76,14 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException { + builder.startObject(); if (restoreInfo != null) { builder.field("snapshot"); restoreInfo.toXContent(builder, params); } else { builder.field("accepted", true); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotIndexShardStage.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotIndexShardStage.java index d96daa86f76a7..c523fbbac3b5f 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotIndexShardStage.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotIndexShardStage.java @@ -47,7 +47,7 @@ public enum SnapshotIndexShardStage { private boolean completed; - private SnapshotIndexShardStage(byte value, boolean completed) { + SnapshotIndexShardStage(byte value, boolean completed) { this.value = value; this.completed = completed; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusRequestBuilder.java index 9e4b4652cc589..37d8ad04d0e7e 
100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusRequestBuilder.java @@ -29,7 +29,7 @@ public class SnapshotsStatusRequestBuilder extends MasterNodeOperationRequestBuilder { /** - * Constructs the new snapshotstatus request + * Constructs the new snapshot status request */ public SnapshotsStatusRequestBuilder(ElasticsearchClient client, SnapshotsStatusAction action) { super(client, action, new SnapshotsStatusRequest()); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusResponse.java index b9800a2d9edb8..d44a490680c9b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/SnapshotsStatusResponse.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -33,7 +33,7 @@ /** * Snapshot status response */ -public class SnapshotsStatusResponse extends ActionResponse implements ToXContent { +public class SnapshotsStatusResponse extends ActionResponse implements ToXContentObject { private List snapshots = Collections.emptyList(); @@ -75,11 +75,13 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray("snapshots"); for (SnapshotStatus snapshot : snapshots) { snapshot.toXContent(builder, params); } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java index 71a709f0b5b40..872793f6ef21a 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportNodesSnapshotsStatus.java @@ -122,11 +122,6 @@ protected NodeSnapshotStatus nodeOperation(NodeRequest request) { } } - @Override - protected boolean accumulateExceptions() { - return true; - } - public static class Request extends BaseNodesRequest { private Snapshot[] snapshots; diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java index c73ae48d070f4..7406b0fea4af0 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/snapshots/status/TransportSnapshotsStatusAction.java @@ -36,7 +36,9 @@ import org.elasticsearch.common.util.set.Sets; import org.elasticsearch.index.shard.ShardId; import 
org.elasticsearch.index.snapshots.IndexShardSnapshotStatus; +import org.elasticsearch.repositories.RepositoryData; import org.elasticsearch.snapshots.Snapshot; +import org.elasticsearch.snapshots.SnapshotException; import org.elasticsearch.snapshots.SnapshotId; import org.elasticsearch.snapshots.SnapshotInfo; import org.elasticsearch.snapshots.SnapshotMissingException; @@ -201,7 +203,8 @@ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, Li final String repositoryName = request.repository(); if (Strings.hasText(repositoryName) && request.snapshots() != null && request.snapshots().length > 0) { final Set requestedSnapshotNames = Sets.newHashSet(request.snapshots()); - final Map matchedSnapshotIds = snapshotsService.snapshotIds(repositoryName).stream() + final RepositoryData repositoryData = snapshotsService.getRepositoryData(repositoryName); + final Map matchedSnapshotIds = repositoryData.getAllSnapshotIds().stream() .filter(s -> requestedSnapshotNames.contains(s.getName())) .collect(Collectors.toMap(SnapshotId::getName, Function.identity())); for (final String snapshotName : request.snapshots()) { @@ -220,6 +223,8 @@ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, Li } else { throw new SnapshotMissingException(repositoryName, snapshotName); } + } else if (repositoryData.getIncompatibleSnapshotIds().contains(snapshotId)) { + throw new SnapshotException(repositoryName, snapshotName, "cannot get the status for an incompatible snapshot"); } SnapshotInfo snapshotInfo = snapshotsService.snapshot(repositoryName, snapshotId); List shardStatusBuilder = new ArrayList<>(); @@ -243,7 +248,7 @@ private SnapshotsStatusResponse buildResponse(SnapshotsStatusRequest request, Li default: throw new IllegalArgumentException("Unknown snapshot state " + snapshotInfo.state()); } - builder.add(new SnapshotStatus(new Snapshot(repositoryName, snapshotInfo.snapshotId()), state, Collections.unmodifiableList(shardStatusBuilder))); + builder.add(new SnapshotStatus(new Snapshot(repositoryName, snapshotId), state, Collections.unmodifiableList(shardStatusBuilder))); } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestBuilder.java index 347a51afa138d..979c81c3c34ca 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateRequestBuilder.java @@ -69,7 +69,7 @@ public ClusterStateRequestBuilder setNodes(boolean filter) { } /** - * Should the cluster state result include teh {@link org.elasticsearch.cluster.routing.RoutingTable}. Defaults + * Should the cluster state result include the {@link org.elasticsearch.cluster.routing.RoutingTable}. Defaults * to true. 
*/ public ClusterStateRequestBuilder setRoutingTable(boolean filter) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateResponse.java index 6d6f0da34b559..cdc869e529d3b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/ClusterStateResponse.java @@ -19,40 +19,75 @@ package org.elasticsearch.action.admin.cluster.state; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.cluster.ClusterName; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.unit.ByteSizeValue; import java.io.IOException; +/** + * The response for getting the cluster state. + */ public class ClusterStateResponse extends ActionResponse { private ClusterName clusterName; private ClusterState clusterState; + // the total compressed size of the full cluster state, not just + // the parts included in this response + private ByteSizeValue totalCompressedSize; public ClusterStateResponse() { } - public ClusterStateResponse(ClusterName clusterName, ClusterState clusterState) { + public ClusterStateResponse(ClusterName clusterName, ClusterState clusterState, long sizeInBytes) { this.clusterName = clusterName; this.clusterState = clusterState; + this.totalCompressedSize = new ByteSizeValue(sizeInBytes); } + /** + * The requested cluster state. Only the parts of the cluster state that were + * requested are included in the returned {@link ClusterState} instance. + */ public ClusterState getState() { return this.clusterState; } + /** + * The name of the cluster. + */ public ClusterName getClusterName() { return this.clusterName; } + /** + * The total compressed size of the full cluster state, not just the parts + * returned by {@link #getState()}. The total compressed size is the size + * of the cluster state as it would be transmitted over the network during + * intra-node communication. 
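(Illustrative only, not taken from the patch: a sketch of reading the new field, assuming a transport Client named "client". The reported size covers the full cluster state even when the request filters parts of it out.)

    ClusterStateResponse response = client.admin().cluster().prepareState()
            .setRoutingTable(false)    // the returned state omits the routing table
            .get();
    // compressed size of the complete cluster state as it would be serialized for intra-node transport
    ByteSizeValue fullStateSize = response.getTotalCompressedSize();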
+ */ + public ByteSizeValue getTotalCompressedSize() { + return totalCompressedSize; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); clusterName = new ClusterName(in); clusterState = ClusterState.readFrom(in, null); + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { + totalCompressedSize = new ByteSizeValue(in); + } else { + // in a mixed cluster, if a pre 6.0 node processes the get cluster state + // request, then a compressed size won't be returned, so just return 0; + // its a temporary situation until all nodes in the cluster have been upgraded, + // at which point the correct cluster state size will always be reported + totalCompressedSize = new ByteSizeValue(0L); + } } @Override @@ -60,5 +95,8 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); clusterName.writeTo(out); clusterState.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { + totalCompressedSize.writeTo(out); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/TransportClusterStateAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/TransportClusterStateAction.java index 6c965cb3bbd5e..601c69b3189e3 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/state/TransportClusterStateAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/state/TransportClusterStateAction.java @@ -20,6 +20,7 @@ package org.elasticsearch.action.admin.cluster.state; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.master.TransportMasterNodeReadAction; @@ -36,6 +37,10 @@ import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; +import java.io.IOException; + +import static org.elasticsearch.discovery.zen.PublishClusterStateAction.serializeFullClusterState; + public class TransportClusterStateAction extends TransportMasterNodeReadAction { @@ -66,7 +71,8 @@ protected ClusterStateResponse newResponse() { } @Override - protected void masterOperation(final ClusterStateRequest request, final ClusterState state, ActionListener listener) { + protected void masterOperation(final ClusterStateRequest request, final ClusterState state, + final ActionListener listener) throws IOException { ClusterState currentState = clusterService.state(); logger.trace("Serving cluster state request using version {}", currentState.version()); ClusterState.Builder builder = ClusterState.builder(currentState.getClusterName()); @@ -122,7 +128,8 @@ protected void masterOperation(final ClusterStateRequest request, final ClusterS if (request.customs()) { builder.customs(currentState.customs()); } - listener.onResponse(new ClusterStateResponse(currentState.getClusterName(), builder.build())); + listener.onResponse(new ClusterStateResponse(currentState.getClusterName(), builder.build(), + serializeFullClusterState(currentState, Version.CURRENT).length())); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodes.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodes.java index 0d545ddfa70ed..41cacf2a8515c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodes.java +++ 
b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/ClusterStatsNodes.java @@ -25,6 +25,7 @@ import org.elasticsearch.action.admin.cluster.node.info.NodeInfo; import org.elasticsearch.action.admin.cluster.node.stats.NodeStats; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.network.NetworkModule; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.transport.TransportAddress; @@ -64,8 +65,8 @@ public class ClusterStatsNodes implements ToXContent { this.plugins = new HashSet<>(); Set seenAddresses = new HashSet<>(nodeResponses.size()); - List nodeInfos = new ArrayList<>(); - List nodeStats = new ArrayList<>(); + List nodeInfos = new ArrayList<>(nodeResponses.size()); + List nodeStats = new ArrayList<>(nodeResponses.size()); for (ClusterStatsNodeResponse nodeResponse : nodeResponses) { nodeInfos.add(nodeResponse.nodeInfo()); nodeStats.add(nodeResponse.nodeStats()); @@ -73,7 +74,8 @@ public class ClusterStatsNodes implements ToXContent { this.plugins.addAll(nodeResponse.nodeInfo().getPlugins().getPluginInfos()); // now do the stats that should be deduped by hardware (implemented by ip deduping) - TransportAddress publishAddress = nodeResponse.nodeInfo().getTransport().address().publishAddress(); + TransportAddress publishAddress = + nodeResponse.nodeInfo().getTransport().address().publishAddress(); final InetAddress inetAddress = publishAddress.address().getAddress(); if (!seenAddresses.add(inetAddress)) { continue; @@ -209,7 +211,8 @@ static final class Fields { } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + public XContentBuilder toXContent(XContentBuilder builder, Params params) + throws IOException { builder.field(Fields.TOTAL, total); for (Map.Entry entry : roles.entrySet()) { builder.field(entry.getKey(), entry.getValue()); @@ -280,7 +283,8 @@ static final class Fields { } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + public XContentBuilder toXContent(XContentBuilder builder, Params params) + throws IOException { builder.field(Fields.AVAILABLE_PROCESSORS, availableProcessors); builder.field(Fields.ALLOCATED_PROCESSORS, allocatedProcessors); builder.startArray(Fields.NAMES); @@ -326,7 +330,8 @@ private ProcessStats(List nodeStatsList) { // fd can be -1 if not supported on platform totalOpenFileDescriptors += fd; } - // we still do min max calc on -1, so we'll have an indication of it not being supported on one of the nodes. + // we still do min max calc on -1, so we'll have an indication + // of it not being supported on one of the nodes. 
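(Illustrative only, not taken from the patch: a tiny example of the sentinel behaviour noted in the comment above; the descriptor counts are invented.)

    long min = Long.MAX_VALUE;
    long max = 0;
    for (long fd : new long[] {1024L, -1L, 2048L}) {    // -1 means the platform could not report the count
        min = Math.min(min, fd);
        max = Math.max(max, fd);
    }
    // min is now -1, signalling that at least one node does not support the statistic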
minOpenFileDescriptors = Math.min(minOpenFileDescriptors, fd); maxOpenFileDescriptors = Math.max(maxOpenFileDescriptors, fd); } @@ -375,7 +380,8 @@ static final class Fields { } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + public XContentBuilder toXContent(XContentBuilder builder, Params params) + throws IOException { builder.startObject(Fields.CPU).field(Fields.PERCENT, cpuPercent).endObject(); if (count > 0) { builder.startObject(Fields.OPEN_FILE_DESCRIPTORS); @@ -479,7 +485,8 @@ static final class Fields { } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + public XContentBuilder toXContent(XContentBuilder builder, Params params) + throws IOException { builder.timeValueField(Fields.MAX_UPTIME_IN_MILLIS, Fields.MAX_UPTIME, maxUptime); builder.startArray(Fields.VERSIONS); for (ObjectIntCursor v : versions) { @@ -540,17 +547,25 @@ static class NetworkTypes implements ToXContent { private final Map transportTypes; private final Map httpTypes; - private NetworkTypes(final List nodeInfos) { + NetworkTypes(final List nodeInfos) { final Map transportTypes = new HashMap<>(); final Map httpTypes = new HashMap<>(); for (final NodeInfo nodeInfo : nodeInfos) { final Settings settings = nodeInfo.getSettings(); final String transportType = - settings.get(NetworkModule.TRANSPORT_TYPE_KEY, NetworkModule.TRANSPORT_DEFAULT_TYPE_SETTING.get(settings)); + settings.get(NetworkModule.TRANSPORT_TYPE_KEY, + NetworkModule.TRANSPORT_DEFAULT_TYPE_SETTING.get(settings)); final String httpType = - settings.get(NetworkModule.HTTP_TYPE_KEY, NetworkModule.HTTP_DEFAULT_TYPE_SETTING.get(settings)); - transportTypes.computeIfAbsent(transportType, k -> new AtomicInteger()).incrementAndGet(); - httpTypes.computeIfAbsent(httpType, k -> new AtomicInteger()).incrementAndGet(); + settings.get(NetworkModule.HTTP_TYPE_KEY, + NetworkModule.HTTP_DEFAULT_TYPE_SETTING.get(settings)); + if (Strings.hasText(transportType)) { + transportTypes.computeIfAbsent(transportType, + k -> new AtomicInteger()).incrementAndGet(); + } + if (Strings.hasText(httpType)) { + httpTypes.computeIfAbsent(httpType, + k -> new AtomicInteger()).incrementAndGet(); + } } this.transportTypes = Collections.unmodifiableMap(transportTypes); this.httpTypes = Collections.unmodifiableMap(httpTypes); diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java index 45eb83dd9e10d..57eeb2d5eb4f2 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/stats/TransportClusterStatsAction.java @@ -39,7 +39,7 @@ import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -118,11 +118,6 @@ protected ClusterStatsNodeResponse nodeOperation(ClusterStatsNodeRequest nodeReq } - @Override - protected boolean accumulateExceptions() { - return false; - } - public static class ClusterStatsNodeRequest extends BaseNodeRequest { ClusterStatsRequest request; diff --git 
a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java index d01c128cf36b3..c30eae12a82ea 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequest.java @@ -31,66 +31,79 @@ public class DeleteStoredScriptRequest extends AcknowledgedRequest { private String id; - private String scriptLang; + private String lang; DeleteStoredScriptRequest() { + super(); } - public DeleteStoredScriptRequest(String scriptLang, String id) { - this.scriptLang = scriptLang; + public DeleteStoredScriptRequest(String id, String lang) { + super(); + this.id = id; + this.lang = lang; } @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (id == null) { - validationException = addValidationError("id is missing", validationException); + + if (id == null || id.isEmpty()) { + validationException = addValidationError("must specify id for stored script", validationException); } else if (id.contains("#")) { - validationException = addValidationError("id can't contain: '#'", validationException); + validationException = addValidationError("id cannot contain '#' for stored script", validationException); } - if (scriptLang == null) { - validationException = addValidationError("lang is missing", validationException); - } else if (scriptLang.contains("#")) { - validationException = addValidationError("lang can't contain: '#'", validationException); + + if (lang != null && lang.contains("#")) { + validationException = addValidationError("lang cannot contain '#' for stored script", validationException); } + return validationException; } - public String scriptLang() { - return scriptLang; + public String id() { + return id; } - public DeleteStoredScriptRequest scriptLang(String type) { - this.scriptLang = type; + public DeleteStoredScriptRequest id(String id) { + this.id = id; + return this; } - public String id() { - return id; + public String lang() { + return lang; } - public DeleteStoredScriptRequest id(String id) { - this.id = id; + public DeleteStoredScriptRequest lang(String lang) { + this.lang = lang; + return this; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - scriptLang = in.readString(); + + lang = in.readString(); + + if (lang.isEmpty()) { + lang = null; + } + id = in.readString(); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeString(scriptLang); + + out.writeString(lang == null ? "" : lang); out.writeString(id); } @Override public String toString() { - return "delete script {[" + scriptLang + "][" + id + "]}"; + return "delete stored script {id [" + id + "]" + (lang != null ? 
", lang [" + lang + "]" : "") + "}"; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java index caf55a03f18e3..8a65506dabd34 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/DeleteStoredScriptRequestBuilder.java @@ -29,13 +29,15 @@ public DeleteStoredScriptRequestBuilder(ElasticsearchClient client, DeleteStored super(client, action, new DeleteStoredScriptRequest()); } - public DeleteStoredScriptRequestBuilder setScriptLang(String scriptLang) { - request.scriptLang(scriptLang); + public DeleteStoredScriptRequestBuilder setLang(String lang) { + request.lang(lang); + return this; } public DeleteStoredScriptRequestBuilder setId(String id) { request.id(id); + return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java index bb7a9effd32eb..2bfd547362c80 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptRequest.java @@ -28,61 +28,79 @@ import java.io.IOException; +import static org.elasticsearch.action.ValidateActions.addValidationError; + public class GetStoredScriptRequest extends MasterNodeReadRequest { protected String id; protected String lang; GetStoredScriptRequest() { + super(); } - public GetStoredScriptRequest(String lang, String id) { - this.lang = lang; + public GetStoredScriptRequest(String id, String lang) { + super(); + this.id = id; + this.lang = lang; } @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (lang == null) { - validationException = ValidateActions.addValidationError("lang is missing", validationException); + + if (id == null || id.isEmpty()) { + validationException = addValidationError("must specify id for stored script", validationException); + } else if (id.contains("#")) { + validationException = addValidationError("id cannot contain '#' for stored script", validationException); } - if (id == null) { - validationException = ValidateActions.addValidationError("id is missing", validationException); + + if (lang != null && lang.contains("#")) { + validationException = addValidationError("lang cannot contain '#' for stored script", validationException); } + return validationException; } - public GetStoredScriptRequest lang(@Nullable String type) { - this.lang = type; - return this; + public String id() { + return id; } public GetStoredScriptRequest id(String id) { this.id = id; + return this; } - public String lang() { return lang; } - public String id() { - return id; + public GetStoredScriptRequest lang(String lang) { + this.lang = lang; + + return this; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); + lang = in.readString(); + + if (lang.isEmpty()) { + lang = null; + } + id = in.readString(); } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeString(lang); + + out.writeString(lang == null ? 
"" : lang); out.writeString(id); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptResponse.java index 36dd9beb38a7a..d543ac67e1d91 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/GetStoredScriptResponse.java @@ -19,49 +19,70 @@ package org.elasticsearch.action.admin.cluster.storedscripts; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionResponse; -import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.script.Script; +import org.elasticsearch.script.StoredScriptSource; import java.io.IOException; public class GetStoredScriptResponse extends ActionResponse implements ToXContent { - private String storedScript; + private StoredScriptSource source; GetStoredScriptResponse() { } - GetStoredScriptResponse(String storedScript) { - this.storedScript = storedScript; + GetStoredScriptResponse(StoredScriptSource source) { + this.source = source; } /** * @return if a stored script and if not found null */ - public String getStoredScript() { - return storedScript; + public StoredScriptSource getSource() { + return source; } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.value(storedScript); + source.toXContent(builder, params); + return builder; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - storedScript = in.readOptionalString(); + + if (in.readBoolean()) { + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + source = new StoredScriptSource(in); + } else { + source = new StoredScriptSource(in.readString()); + } + } else { + source = null; + } } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeOptionalString(storedScript); + + if (source == null) { + out.writeBoolean(false); + } else { + out.writeBoolean(true); + + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + source.writeTo(out); + } else { + out.writeString(source.getSource()); + } + } } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java index cfe153d7d9641..d35a26d9c3242 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequest.java @@ -19,108 +19,156 @@ package org.elasticsearch.action.admin.cluster.storedscripts; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; +import 
org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; +import java.util.Objects; import static org.elasticsearch.action.ValidateActions.addValidationError; public class PutStoredScriptRequest extends AcknowledgedRequest { private String id; - private String scriptLang; - private BytesReference script; + private String lang; + private String context; + private BytesReference content; + private XContentType xContentType; public PutStoredScriptRequest() { super(); } - public PutStoredScriptRequest(String scriptLang) { + public PutStoredScriptRequest(String id, String lang, String context, BytesReference content, XContentType xContentType) { super(); - this.scriptLang = scriptLang; - } - - public PutStoredScriptRequest(String scriptLang, String id) { - super(); - this.scriptLang = scriptLang; this.id = id; + this.lang = lang; + this.context = context; + this.content = content; + this.xContentType = Objects.requireNonNull(xContentType); } @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (id == null) { - validationException = addValidationError("id is missing", validationException); + + if (id == null || id.isEmpty()) { + validationException = addValidationError("must specify id for stored script", validationException); } else if (id.contains("#")) { - validationException = addValidationError("id can't contain: '#'", validationException); + validationException = addValidationError("id cannot contain '#' for stored script", validationException); } - if (scriptLang == null) { - validationException = addValidationError("lang is missing", validationException); - } else if (scriptLang.contains("#")) { - validationException = addValidationError("lang can't contain: '#'", validationException); + + if (lang != null && lang.contains("#")) { + validationException = addValidationError("lang cannot contain '#' for stored script", validationException); } - if (script == null) { - validationException = addValidationError("script is missing", validationException); + + if (content == null) { + validationException = addValidationError("must specify code for stored script", validationException); } + return validationException; } - public String scriptLang() { - return scriptLang; + public String id() { + return id; } - public PutStoredScriptRequest scriptLang(String scriptLang) { - this.scriptLang = scriptLang; + public PutStoredScriptRequest id(String id) { + this.id = id; + return this; } - public String id() { - return id; + public String lang() { + return lang; } - public PutStoredScriptRequest id(String id) { - this.id = id; + public PutStoredScriptRequest lang(String lang) { + this.lang = lang; + return this; } - public BytesReference script() { - return script; + public String context() { + return context; } - public PutStoredScriptRequest script(BytesReference source) { - this.script = source; + public PutStoredScriptRequest context(String context) { + this.context = context; + return this; + } + + public BytesReference content() { + return content; + } + + public XContentType xContentType() { + return xContentType; + } + + /** + * Set the script source and the content type of the bytes. 
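(Illustrative only, not taken from the patch: a sketch of storing a script with the new content/XContentType pair, assuming a transport Client named "client". The script id and the JSON body are placeholders; the exact body layout should be taken from the stored scripts documentation for the target version.)

    BytesReference body = new BytesArray("{\"script\": {\"lang\": \"painless\", \"code\": \"doc['num'].value * 2\"}}");
    PutStoredScriptResponse response = client.admin().cluster()
            .preparePutStoredScript()
            .setId("double_num")
            .setContent(body, XContentType.JSON)    // the content type now travels with the bytes
            .get();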
+ */ + public PutStoredScriptRequest content(BytesReference content, XContentType xContentType) { + this.content = content; + this.xContentType = Objects.requireNonNull(xContentType); return this; } @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - scriptLang = in.readString(); + + lang = in.readString(); + + if (lang.isEmpty()) { + lang = null; + } + id = in.readOptionalString(); - script = in.readBytesReference(); + content = in.readBytesReference(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType = XContentType.readFrom(in); + } else { + xContentType = XContentFactory.xContentType(content); + } + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha2)) { + context = in.readOptionalString(); + } } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeString(scriptLang); + + out.writeString(lang == null ? "" : lang); out.writeOptionalString(id); - out.writeBytesReference(script); + out.writeBytesReference(content); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType.writeTo(out); + } + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha2)) { + out.writeOptionalString(context); + } } @Override public String toString() { - String sSource = "_na_"; + String source = "_na_"; + try { - sSource = XContentHelper.convertToJson(script, false); + source = XContentHelper.convertToJson(content, false, xContentType); } catch (Exception e) { // ignore } - return "put script {[" + id + "][" + scriptLang + "], script[" + sSource + "]}"; + + return "put stored script {id [" + id + "]" + (lang != null ? ", lang [" + lang + "]" : "") + ", content [" + source + "]}"; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java index 15c51c2ccd7e5..9985a3394e311 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/PutStoredScriptRequestBuilder.java @@ -22,6 +22,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.XContentType; public class PutStoredScriptRequestBuilder extends AcknowledgedRequestBuilder { @@ -30,19 +31,21 @@ public PutStoredScriptRequestBuilder(ElasticsearchClient client, PutStoredScript super(client, action, new PutStoredScriptRequest()); } - public PutStoredScriptRequestBuilder setScriptLang(String scriptLang) { - request.scriptLang(scriptLang); - return this; - } - public PutStoredScriptRequestBuilder setId(String id) { request.id(id); return this; } - public PutStoredScriptRequestBuilder setSource(BytesReference source) { - request.script(source); + /** + * Set the source of the script along with the content type of the source + */ + public PutStoredScriptRequestBuilder setContent(BytesReference source, XContentType xContentType) { + request.content(source, xContentType); return this; } + public PutStoredScriptRequestBuilder setLang(String lang) { + request.lang(lang); + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java 
b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java index a91dec8d9b30a..8b4079aee7379 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/storedscripts/TransportPutStoredScriptAction.java @@ -59,7 +59,7 @@ protected PutStoredScriptResponse newResponse() { @Override protected void masterOperation(PutStoredScriptRequest request, ClusterState state, ActionListener listener) throws Exception { - scriptService.storeScript(clusterService, request, listener); + scriptService.putStoredScript(clusterService, request, listener); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksResponse.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksResponse.java index bb1afe5e19e34..ec42e34ec9614 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/PendingClusterTasksResponse.java @@ -23,7 +23,7 @@ import org.elasticsearch.cluster.service.PendingClusterTask; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -31,7 +31,7 @@ import java.util.Iterator; import java.util.List; -public class PendingClusterTasksResponse extends ActionResponse implements Iterable, ToXContent { +public class PendingClusterTasksResponse extends ActionResponse implements Iterable, ToXContentObject { private List pendingTasks; @@ -63,13 +63,15 @@ public String toString() { StringBuilder sb = new StringBuilder(); sb.append("tasks: (").append(pendingTasks.size()).append("):\n"); for (PendingClusterTask pendingClusterTask : this) { - sb.append(pendingClusterTask.getInsertOrder()).append("/").append(pendingClusterTask.getPriority()).append("/").append(pendingClusterTask.getSource()).append("/").append(pendingClusterTask.getTimeInQueue()).append("\n"); + sb.append(pendingClusterTask.getInsertOrder()).append("/").append(pendingClusterTask.getPriority()).append("/") + .append(pendingClusterTask.getSource()).append("/").append(pendingClusterTask.getTimeInQueue()).append("\n"); } return sb.toString(); } @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.TASKS); for (PendingClusterTask pendingClusterTask : this) { builder.startObject(); @@ -82,6 +84,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.endObject(); } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java index c15758de3cb43..cd58bb8d6d43e 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/cluster/tasks/TransportPendingClusterTasksAction.java @@ -65,7 +65,7 @@ protected PendingClusterTasksResponse newResponse() { @Override 
protected void masterOperation(PendingClusterTasksRequest request, ClusterState state, ActionListener listener) { logger.trace("fetching pending tasks from cluster service"); - final List pendingTasks = clusterService.pendingTasks(); + final List pendingTasks = clusterService.getMasterService().pendingTasks(); logger.trace("done fetching pending tasks from cluster service"); listener.onResponse(new PendingClusterTasksResponse(pendingTasks)); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java index 524a21ec632ac..07665e9ccf176 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/IndicesAliasesRequest.java @@ -19,20 +19,15 @@ package org.elasticsearch.action.admin.indices.alias; -import com.carrotsearch.hppc.cursors.ObjectCursor; import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.AliasesRequest; import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.cluster.metadata.AliasAction; -import org.elasticsearch.cluster.metadata.AliasMetaData; -import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.ParsingException; import org.elasticsearch.common.Strings; -import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -42,6 +37,7 @@ import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.query.QueryBuilder; @@ -63,9 +59,10 @@ public class IndicesAliasesRequest extends AcknowledgedRequest { private List allAliasActions = new ArrayList<>(); - //indices options that require every specified index to exist, expand wildcards only to open indices and - //don't allow that no indices are resolved from wildcard expressions - private static final IndicesOptions INDICES_OPTIONS = IndicesOptions.fromOptions(false, false, true, false); + // indices options that require every specified index to exist, expand wildcards only to open + // indices, don't allow that no indices are resolved from wildcard expressions and resolve the + // expressions only against indices + private static final IndicesOptions INDICES_OPTIONS = IndicesOptions.fromOptions(false, false, true, false, true, false, true); public IndicesAliasesRequest() { @@ -92,10 +89,10 @@ public byte value() { public static Type fromValue(byte value) { switch (value) { - case 0: return ADD; - case 1: return REMOVE; - case 2: return REMOVE_INDEX; - default: throw new IllegalArgumentException("No type for action [" + value + "]"); + case 0: return ADD; + case 1: return REMOVE; + case 2: return REMOVE_INDEX; + default: throw new IllegalArgumentException("No type for action [" + value + "]"); } } } @@ -106,20 +103,23 @@ public static Type fromValue(byte value) { 
public static AliasActions add() { return new AliasActions(AliasActions.Type.ADD); } + /** * Build a new {@link AliasAction} to remove aliases. */ public static AliasActions remove() { return new AliasActions(AliasActions.Type.REMOVE); } + /** - * Build a new {@link AliasAction} to remove aliases. + * Build a new {@link AliasAction} to remove an index. */ public static AliasActions removeIndex() { return new AliasActions(AliasActions.Type.REMOVE_INDEX); } - private static ObjectParser parser(String name, Supplier supplier) { - ObjectParser parser = new ObjectParser<>(name, supplier); + + private static ObjectParser parser(String name, Supplier supplier) { + ObjectParser parser = new ObjectParser<>(name, supplier); parser.declareString((action, index) -> { if (action.indices() != null) { throw new IllegalArgumentException("Only one of [index] and [indices] is supported"); @@ -147,7 +147,7 @@ private static ObjectParser parser(Stri return parser; } - private static final ObjectParser ADD_PARSER = parser("add", AliasActions::add); + private static final ObjectParser ADD_PARSER = parser("add", AliasActions::add); static { ADD_PARSER.declareObject(AliasActions::filter, (parser, m) -> { try { @@ -157,18 +157,17 @@ private static ObjectParser parser(Stri } }, new ParseField("filter")); // Since we need to support numbers AND strings here we have to use ValueType.INT. - ADD_PARSER.declareField(AliasActions::routing, p -> p.text(), new ParseField("routing"), ValueType.INT); - ADD_PARSER.declareField(AliasActions::indexRouting, p -> p.text(), new ParseField("index_routing"), ValueType.INT); - ADD_PARSER.declareField(AliasActions::searchRouting, p -> p.text(), new ParseField("search_routing"), ValueType.INT); + ADD_PARSER.declareField(AliasActions::routing, XContentParser::text, new ParseField("routing"), ValueType.INT); + ADD_PARSER.declareField(AliasActions::indexRouting, XContentParser::text, new ParseField("index_routing"), ValueType.INT); + ADD_PARSER.declareField(AliasActions::searchRouting, XContentParser::text, new ParseField("search_routing"), ValueType.INT); } - private static final ObjectParser REMOVE_PARSER = parser("remove", AliasActions::remove); - private static final ObjectParser REMOVE_INDEX_PARSER = parser("remove_index", - AliasActions::removeIndex); + private static final ObjectParser REMOVE_PARSER = parser("remove", AliasActions::remove); + private static final ObjectParser REMOVE_INDEX_PARSER = parser("remove_index", AliasActions::removeIndex); /** * Parser for any one {@link AliasAction}. 
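As a caller-side aside, a small sketch of how the `add()`/`remove()`/`removeIndex()` factories above are typically combined into a request. The fluent `index()`/`alias()` setters and `addAliasAction()` are assumed from the surrounding classes rather than shown in this hunk, and the index and alias names are invented.

```java
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest;
import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions;

public class AliasActionsExample {
    public static IndicesAliasesRequest swapAlias() {
        IndicesAliasesRequest request = new IndicesAliasesRequest();
        // point the "logs" alias at the new index and drop it from the old one
        request.addAliasAction(AliasActions.add().index("logs-2017").alias("logs"));
        request.addAliasAction(AliasActions.remove().index("logs-2016").alias("logs"));
        return request;
    }
}
```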
*/ - public static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>( + public static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>( "alias_action", a -> { // Take the first action and complain if there are more than one actions AliasActions action = null; @@ -403,24 +402,6 @@ public IndicesOptions indicesOptions() { return INDICES_OPTIONS; } - public String[] concreteAliases(MetaData metaData, String concreteIndex) { - if (expandAliasesWildcards()) { - //for DELETE we expand the aliases - String[] indexAsArray = {concreteIndex}; - ImmutableOpenMap> aliasMetaData = metaData.findAliases(aliases, indexAsArray); - List finalAliases = new ArrayList<>(); - for (ObjectCursor> curAliases : aliasMetaData.values()) { - for (AliasMetaData aliasMeta: curAliases.value) { - finalAliases.add(aliasMeta.alias()); - } - } - return finalAliases.toArray(new String[finalAliases.size()]); - } else { - //for add we just return the current aliases - return aliases; - } - } - @Override public String toString() { return "AliasActions[" diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java index 44de63c028db1..9dcd361ae6421 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/alias/TransportIndicesAliasesAction.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.alias; +import com.carrotsearch.hppc.cursors.ObjectCursor; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.admin.indices.alias.IndicesAliasesRequest.AliasActions; import org.elasticsearch.action.support.ActionFilters; @@ -28,9 +29,12 @@ import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.AliasAction; +import org.elasticsearch.cluster.metadata.AliasMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.metadata.MetaDataIndexAliasesService; import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.collect.ImmutableOpenMap; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.rest.action.admin.indices.AliasesNotFoundException; @@ -75,9 +79,7 @@ protected IndicesAliasesResponse newResponse() { protected ClusterBlockException checkBlock(IndicesAliasesRequest request, ClusterState state) { Set indices = new HashSet<>(); for (AliasActions aliasAction : request.aliasActions()) { - for (String index : aliasAction.indices()) { - indices.add(index); - } + Collections.addAll(indices, aliasAction.indices()); } return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indices.toArray(new String[indices.size()])); } @@ -97,12 +99,12 @@ protected void masterOperation(final IndicesAliasesRequest request, final Cluste for (String index : concreteIndices) { switch (action.actionType()) { case ADD: - for (String alias : action.concreteAliases(state.metaData(), index)) { + for (String alias : concreteAliases(action, state.metaData(), index)) { finalActions.add(new AliasAction.Add(index, alias, action.filter(), action.indexRouting(), action.searchRouting())); } break; 
case REMOVE: - for (String alias : action.concreteAliases(state.metaData(), index)) { + for (String alias : concreteAliases(action, state.metaData(), index)) { finalActions.add(new AliasAction.Remove(index, alias)); } break; @@ -134,4 +136,22 @@ public void onFailure(Exception t) { } }); } + + private static String[] concreteAliases(AliasActions action, MetaData metaData, String concreteIndex) { + if (action.expandAliasesWildcards()) { + //for DELETE we expand the aliases + String[] indexAsArray = {concreteIndex}; + ImmutableOpenMap> aliasMetaData = metaData.findAliases(action.aliases(), indexAsArray); + List finalAliases = new ArrayList<>(); + for (ObjectCursor> curAliases : aliasMetaData.values()) { + for (AliasMetaData aliasMeta: curAliases.value) { + finalAliases.add(aliasMeta.alias()); + } + } + return finalAliases.toArray(new String[finalAliases.size()]); + } else { + //for ADD and REMOVE_INDEX we just return the current aliases + return action.aliases(); + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequest.java index 6d0824eeb31c5..08f220e0199d8 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequest.java @@ -75,7 +75,7 @@ public static class NameOrDefinition implements Writeable { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(definition); - this.definition = Settings.builder().loadFromSource(builder.string()).build(); + this.definition = Settings.builder().loadFromSource(builder.string(), builder.contentType()).build(); } catch (IOException e) { throw new IllegalArgumentException("Failed to parse [" + definition + "]", e); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequestBuilder.java index 344681b997ecb..5070862ed69b5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeRequestBuilder.java @@ -113,7 +113,7 @@ public AnalyzeRequestBuilder setExplain(boolean explain) { /** * Sets attributes that will include results */ - public AnalyzeRequestBuilder setAttributes(String attributes){ + public AnalyzeRequestBuilder setAttributes(String... 
attributes){ request.attributes(attributes); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeResponse.java index 302597e0e09bd..1e54def2385f8 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/AnalyzeResponse.java @@ -23,7 +23,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -32,25 +32,27 @@ import java.util.List; import java.util.Map; -public class AnalyzeResponse extends ActionResponse implements Iterable, ToXContent { +public class AnalyzeResponse extends ActionResponse implements Iterable, ToXContentObject { - public static class AnalyzeToken implements Streamable, ToXContent { + public static class AnalyzeToken implements Streamable, ToXContentObject { private String term; private int startOffset; private int endOffset; private int position; + private int positionLength = 1; private Map attributes; private String type; AnalyzeToken() { } - public AnalyzeToken(String term, int position, int startOffset, int endOffset, String type, - Map attributes) { + public AnalyzeToken(String term, int position, int startOffset, int endOffset, int positionLength, + String type, Map attributes) { this.term = term; this.position = position; this.startOffset = startOffset; this.endOffset = endOffset; + this.positionLength = positionLength; this.type = type; this.attributes = attributes; } @@ -71,6 +73,10 @@ public int getPosition() { return this.position; } + public int getPositionLength() { + return this.positionLength; + } + public String getType() { return this.type; } @@ -87,6 +93,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Fields.END_OFFSET, endOffset); builder.field(Fields.TYPE, type); builder.field(Fields.POSITION, position); + if (positionLength > 1) { + builder.field(Fields.POSITION_LENGTH, positionLength); + } if (attributes != null && !attributes.isEmpty()) { for (Map.Entry entity : attributes.entrySet()) { builder.field(entity.getKey(), entity.getValue()); @@ -108,10 +117,16 @@ public void readFrom(StreamInput in) throws IOException { startOffset = in.readInt(); endOffset = in.readInt(); position = in.readVInt(); - type = in.readOptionalString(); - if (in.getVersion().onOrAfter(Version.V_2_2_0)) { - attributes = (Map) in.readGenericValue(); + if (in.getVersion().onOrAfter(Version.V_5_2_0)) { + Integer len = in.readOptionalVInt(); + if (len != null) { + positionLength = len; + } else { + positionLength = 1; + } } + type = in.readOptionalString(); + attributes = (Map) in.readGenericValue(); } @Override @@ -120,10 +135,11 @@ public void writeTo(StreamOutput out) throws IOException { out.writeInt(startOffset); out.writeInt(endOffset); out.writeVInt(position); - out.writeOptionalString(type); - if (out.getVersion().onOrAfter(Version.V_2_2_0)) { - out.writeGenericValue(attributes); + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { + out.writeOptionalVInt(positionLength > 1 ? 
positionLength : null); } + out.writeOptionalString(type); + out.writeGenericValue(attributes); } } @@ -154,6 +170,7 @@ public Iterator iterator() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); if (tokens != null) { builder.startArray(Fields.TOKENS); for (AnalyzeToken token : tokens) { @@ -167,6 +184,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws detail.toXContent(builder, params); builder.endObject(); } + builder.endObject(); return builder; } @@ -178,9 +196,7 @@ public void readFrom(StreamInput in) throws IOException { for (int i = 0; i < size; i++) { tokens.add(AnalyzeToken.readAnalyzeToken(in)); } - if (in.getVersion().onOrAfter(Version.V_2_2_0)) { - detail = in.readOptionalStreamable(DetailAnalyzeResponse::new); - } + detail = in.readOptionalStreamable(DetailAnalyzeResponse::new); } @Override @@ -194,9 +210,7 @@ public void writeTo(StreamOutput out) throws IOException { } else { out.writeVInt(0); } - if (out.getVersion().onOrAfter(Version.V_2_2_0)) { - out.writeOptionalStreamable(detail); - } + out.writeOptionalStreamable(detail); } static final class Fields { @@ -206,6 +220,7 @@ static final class Fields { static final String END_OFFSET = "end_offset"; static final String TYPE = "type"; static final String POSITION = "position"; + static final String POSITION_LENGTH = "positionLength"; static final String DETAIL = "detail"; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java index 7d7e9d2dd2ec1..1156637808578 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/analyze/TransportAnalyzeAction.java @@ -24,6 +24,7 @@ import org.apache.lucene.analysis.tokenattributes.CharTermAttribute; import org.apache.lucene.analysis.tokenattributes.OffsetAttribute; import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute; +import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute; import org.apache.lucene.analysis.tokenattributes.TypeAttribute; import org.apache.lucene.util.BytesRef; import org.apache.lucene.util.IOUtils; @@ -38,6 +39,7 @@ import org.elasticsearch.cluster.routing.ShardsIterator; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.UUIDs; +import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.io.FastStringReader; import org.elasticsearch.common.settings.Settings; @@ -52,6 +54,7 @@ import org.elasticsearch.index.analysis.TokenFilterFactory; import org.elasticsearch.index.analysis.TokenizerFactory; import org.elasticsearch.index.mapper.AllFieldMapper; +import org.elasticsearch.index.mapper.KeywordFieldMapper; import org.elasticsearch.index.mapper.MappedFieldType; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; @@ -130,10 +133,17 @@ protected AnalyzeResponse shardOperation(AnalyzeRequest request, ShardId shardId } MappedFieldType fieldType = indexService.mapperService().fullName(request.field()); if (fieldType != null) { - if (fieldType.tokenized() == false) { + if (fieldType.tokenized()) { + analyzer = fieldType.indexAnalyzer(); + } else if (fieldType instanceof KeywordFieldMapper.KeywordFieldType) { + analyzer 
= ((KeywordFieldMapper.KeywordFieldType) fieldType).normalizer(); + if (analyzer == null) { + // this will be KeywordAnalyzer + analyzer = fieldType.indexAnalyzer(); + } + } else { throw new IllegalArgumentException("Can't process field [" + request.field() + "], Analysis requests are only supported on tokenized fields"); } - analyzer = fieldType.indexAnalyzer(); field = fieldType.name(); } } @@ -170,7 +180,8 @@ public static AnalyzeResponse analyze(AnalyzeRequest request, String field, Anal } else if (request.tokenizer() != null) { final IndexSettings indexSettings = indexAnalyzers == null ? null : indexAnalyzers.getIndexSettings(); - TokenizerFactory tokenizerFactory = parseTokenizerFactory(request, indexAnalyzers, analysisRegistry, environment); + Tuple tokenizerFactory = parseTokenizerFactory(request, indexAnalyzers, + analysisRegistry, environment); TokenFilterFactory[] tokenFilterFactories = new TokenFilterFactory[0]; tokenFilterFactories = getTokenFilterFactories(request, indexSettings, analysisRegistry, environment, tokenFilterFactories); @@ -178,7 +189,7 @@ public static AnalyzeResponse analyze(AnalyzeRequest request, String field, Anal CharFilterFactory[] charFilterFactories = new CharFilterFactory[0]; charFilterFactories = getCharFilterFactories(request, indexSettings, analysisRegistry, environment, charFilterFactories); - analyzer = new CustomAnalyzer(tokenizerFactory, charFilterFactories, tokenFilterFactories); + analyzer = new CustomAnalyzer(tokenizerFactory.v1(), tokenizerFactory.v2(), charFilterFactories, tokenFilterFactories); closeAnalyzer = true; } else if (analyzer == null) { if (indexAnalyzers == null) { @@ -218,13 +229,15 @@ private static List simpleAnalyze(AnalyzeRequest r PositionIncrementAttribute posIncr = stream.addAttribute(PositionIncrementAttribute.class); OffsetAttribute offset = stream.addAttribute(OffsetAttribute.class); TypeAttribute type = stream.addAttribute(TypeAttribute.class); + PositionLengthAttribute posLen = stream.addAttribute(PositionLengthAttribute.class); while (stream.incrementToken()) { int increment = posIncr.getPositionIncrement(); if (increment > 0) { lastPosition = lastPosition + increment; } - tokens.add(new AnalyzeResponse.AnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(), lastOffset + offset.endOffset(), type.type(), null)); + tokens.add(new AnalyzeResponse.AnalyzeToken(term.toString(), lastPosition, lastOffset + offset.startOffset(), + lastOffset + offset.endOffset(), posLen.getPositionLength(), type.type(), null)); } stream.end(); @@ -314,7 +327,8 @@ private static DetailAnalyzeResponse detailAnalyze(AnalyzeRequest request, Analy tokenFilterFactories[tokenFilterIndex].name(), tokenFiltersTokenListCreator[tokenFilterIndex].getArrayTokens()); } } - detailResponse = new DetailAnalyzeResponse(charFilteredLists, new DetailAnalyzeResponse.AnalyzeTokenList(tokenizerFactory.name(), tokenizerTokenListCreator.getArrayTokens()), tokenFilterLists); + detailResponse = new DetailAnalyzeResponse(charFilteredLists, new DetailAnalyzeResponse.AnalyzeTokenList( + customAnalyzer.getTokenizerName(), tokenizerTokenListCreator.getArrayTokens()), tokenFilterLists); } else { String name; if (analyzer instanceof NamedAnalyzer) { @@ -381,6 +395,7 @@ private void analyze(TokenStream stream, Analyzer analyzer, String field, Set parseTokenizerFactory(AnalyzeRequest request, IndexAnalyzers indexAnalzyers, AnalysisRegistry analysisRegistry, Environment environment) throws IOException { + String name; TokenizerFactory tokenizerFactory; 
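Stepping out of the internals for a moment, a caller-side sketch of what the analyze changes in this file mean in practice: keyword fields (optionally backed by a normalizer) are no longer rejected, and each returned token now carries a position length. The `Client` instance, index name and field name are assumptions for illustration.

```java
import org.elasticsearch.action.admin.indices.analyze.AnalyzeResponse;
import org.elasticsearch.client.Client;

public class AnalyzeExample {
    public static void analyze(Client client) {
        AnalyzeResponse response = client.admin().indices()
                .prepareAnalyze("logs", "Quick Brown FOX")
                .setField("message.keyword")   // keyword fields are now analyzed with their normalizer
                .get();
        for (AnalyzeResponse.AnalyzeToken token : response) {
            System.out.println(token.getTerm() + " positionLength=" + token.getPositionLength());
        }
    }
}
```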
final AnalyzeRequest.NameOrDefinition tokenizer = request.tokenizer(); // parse anonymous settings @@ -556,6 +572,7 @@ private static TokenizerFactory parseTokenizerFactory(AnalyzeRequest request, In throw new IllegalArgumentException("failed to find global tokenizer under [" + tokenizerTypeName + "]"); } // Need to set anonymous "name" of tokenizer + name = "_anonymous_tokenizer"; tokenizerFactory = tokenizerFactoryFactory.get(getNaIndexSettings(settings), environment, "_anonymous_tokenizer", settings); } else { AnalysisModule.AnalysisProvider tokenizerFactoryFactory; @@ -564,18 +581,20 @@ private static TokenizerFactory parseTokenizerFactory(AnalyzeRequest request, In if (tokenizerFactoryFactory == null) { throw new IllegalArgumentException("failed to find global tokenizer under [" + tokenizer.name + "]"); } + name = tokenizer.name; tokenizerFactory = tokenizerFactoryFactory.get(environment, tokenizer.name); } else { tokenizerFactoryFactory = analysisRegistry.getTokenizerProvider(tokenizer.name, indexAnalzyers.getIndexSettings()); if (tokenizerFactoryFactory == null) { throw new IllegalArgumentException("failed to find tokenizer under [" + tokenizer.name + "]"); } + name = tokenizer.name; tokenizerFactory = tokenizerFactoryFactory.get(indexAnalzyers.getIndexSettings(), environment, tokenizer.name, AnalysisRegistry.getSettingsFromIndexSettings(indexAnalzyers.getIndexSettings(), AnalysisRegistry.INDEX_ANALYSIS_TOKENIZER + "." + tokenizer.name)); } } - return tokenizerFactory; + return new Tuple<>(name, tokenizerFactory); } private static IndexSettings getNaIndexSettings(Settings settings) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/close/CloseIndexRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/close/CloseIndexRequest.java index 092a65f9293f0..df0dcd9ff54a5 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/close/CloseIndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/close/CloseIndexRequest.java @@ -37,7 +37,7 @@ public class CloseIndexRequest extends AcknowledgedRequest implements IndicesRequest.Replaceable { private String[] indices; - private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, false, true, false); + private IndicesOptions indicesOptions = IndicesOptions.strictExpandOpen(); public CloseIndexRequest() { } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java index d33f37defec1e..244b8a24b9b67 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/close/TransportCloseIndexAction.java @@ -22,6 +22,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.admin.indices.open.OpenIndexResponse; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.DestructiveOperations; import org.elasticsearch.action.support.master.TransportMasterNodeAction; @@ -97,6 +98,10 @@ protected ClusterBlockException checkBlock(CloseIndexRequest request, ClusterSta @Override protected void masterOperation(final CloseIndexRequest request, final ClusterState state, final ActionListener listener) { final Index[] concreteIndices = 
indexNameExpressionResolver.concreteIndices(state, request); + if (concreteIndices == null || concreteIndices.length == 0) { + listener.onResponse(new CloseIndexResponse(true)); + return; + } CloseIndexClusterStateUpdateRequest updateRequest = new CloseIndexClusterStateUpdateRequest() .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout()) .indices(concreteIndices); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java index 203483d89b3f4..0139726903b7c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequest.java @@ -21,6 +21,7 @@ import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.ElasticsearchParseException; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.IndicesRequest; import org.elasticsearch.action.admin.indices.alias.Alias; @@ -43,10 +44,11 @@ import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; -import java.nio.charset.StandardCharsets; +import java.io.UncheckedIOException; import java.util.HashMap; import java.util.HashSet; import java.util.Map; +import java.util.Objects; import java.util.Set; import static org.elasticsearch.action.ValidateActions.addValidationError; @@ -169,10 +171,10 @@ public CreateIndexRequest settings(Settings.Builder settings) { } /** - * The settings to create the index with (either json/yaml/properties format) + * The settings to create the index with (either json or yaml format) */ - public CreateIndexRequest settings(String source) { - this.settings = Settings.builder().loadFromSource(source).build(); + public CreateIndexRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); return this; } @@ -181,7 +183,7 @@ public CreateIndexRequest settings(String source) { */ public CreateIndexRequest settings(XContentBuilder builder) { try { - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate json settings from builder", e); } @@ -196,7 +198,7 @@ public CreateIndexRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), XContentType.JSON); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -208,13 +210,30 @@ public CreateIndexRequest settings(Map source) { * * @param type The mapping type * @param source The mapping source + * @param xContentType The content type of the source + */ + public CreateIndexRequest mapping(String type, String source, XContentType xContentType) { + return mapping(type, new BytesArray(source), xContentType); + } + + /** + * Adds mapping that will be added when the index gets created. 
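A short sketch of the public `mapping(String, String, XContentType)` overload added just above; the index name, type and mapping JSON are illustrative. The point is that the caller now states the content type instead of relying on sniffing.

```java
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

public class CreateIndexMappingExample {
    public static CreateIndexRequest build() {
        // the mapping source is converted to JSON internally, whatever type is supplied
        return new CreateIndexRequest("tweets")
                .mapping("tweet",
                        "{\"properties\": {\"message\": {\"type\": \"text\"}}}",
                        XContentType.JSON);
    }
}
```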
+ * + * @param type The mapping type + * @param source The mapping source + * @param xContentType the content type of the mapping source */ - public CreateIndexRequest mapping(String type, String source) { + private CreateIndexRequest mapping(String type, BytesReference source, XContentType xContentType) { if (mappings.containsKey(type)) { throw new IllegalStateException("mappings for type \"" + type + "\" were already defined"); } - mappings.put(type, source); - return this; + Objects.requireNonNull(xContentType); + try { + mappings.put(type, XContentHelper.convertToJson(source, false, false, xContentType)); + return this; + } catch (IOException e) { + throw new UncheckedIOException("failed to convert to json", e); + } } /** @@ -232,15 +251,7 @@ public CreateIndexRequest cause(String cause) { * @param source The mapping source */ public CreateIndexRequest mapping(String type, XContentBuilder source) { - if (mappings.containsKey(type)) { - throw new IllegalStateException("mappings for type \"" + type + "\" were already defined"); - } - try { - mappings.put(type, source.string()); - } catch (IOException e) { - throw new IllegalArgumentException("Failed to build json for mapping request", e); - } - return this; + return mapping(type, source.bytes(), source.contentType()); } /** @@ -261,7 +272,7 @@ public CreateIndexRequest mapping(String type, Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - return mapping(type, builder.string()); + return mapping(type, builder); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -332,41 +343,37 @@ public CreateIndexRequest alias(Alias alias) { /** * Sets the settings and mappings as a single source. */ - public CreateIndexRequest source(String source) { - return source(source.getBytes(StandardCharsets.UTF_8)); + public CreateIndexRequest source(String source, XContentType xContentType) { + return source(new BytesArray(source), xContentType); } /** * Sets the settings and mappings as a single source. */ public CreateIndexRequest source(XContentBuilder source) { - return source(source.bytes()); + return source(source.bytes(), source.contentType()); } /** * Sets the settings and mappings as a single source. */ - public CreateIndexRequest source(byte[] source) { - return source(source, 0, source.length); + public CreateIndexRequest source(byte[] source, XContentType xContentType) { + return source(source, 0, source.length, xContentType); } /** * Sets the settings and mappings as a single source. */ - public CreateIndexRequest source(byte[] source, int offset, int length) { - return source(new BytesArray(source, offset, length)); + public CreateIndexRequest source(byte[] source, int offset, int length, XContentType xContentType) { + return source(new BytesArray(source, offset, length), xContentType); } /** * Sets the settings and mappings as a single source. 
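For the single-source variants, a minimal sketch of `source(String, XContentType)` with a combined settings-plus-mappings body; the body itself is made up for illustration.

```java
import org.elasticsearch.action.admin.indices.create.CreateIndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

public class CreateIndexSourceExample {
    public static CreateIndexRequest build() {
        String body = "{"
                + "\"settings\": {\"index\": {\"number_of_shards\": 1}},"
                + "\"mappings\": {\"event\": {\"properties\": {\"msg\": {\"type\": \"text\"}}}}"
                + "}";
        // no more content-type sniffing: the caller states that the body is JSON
        return new CreateIndexRequest("logs").source(body, XContentType.JSON);
    }
}
```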
*/ - public CreateIndexRequest source(BytesReference source) { - XContentType xContentType = XContentFactory.xContentType(source); - if (xContentType != null) { - source(XContentHelper.convertToMap(source, false).v2()); - } else { - settings(source.utf8ToString()); - } + public CreateIndexRequest source(BytesReference source, XContentType xContentType) { + Objects.requireNonNull(xContentType); + source(XContentHelper.convertToMap(source, false, xContentType).v2()); return this; } @@ -483,7 +490,13 @@ public void readFrom(StreamInput in) throws IOException { readTimeout(in); int size = in.readVInt(); for (int i = 0; i < size; i++) { - mappings.put(in.readString(), in.readString()); + final String type = in.readString(); + String source = in.readString(); + if (in.getVersion().before(Version.V_6_0_0_alpha1)) { // TODO change to 5.3.0 after backport + // we do not know the content type that comes from earlier versions so we autodetect and convert + source = XContentHelper.convertToJson(new BytesArray(source), false, false, XContentFactory.xContentType(source)); + } + mappings.put(type, source); } int customSize = in.readVInt(); for (int i = 0; i < customSize; i++) { diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java index eaae4d53b73fd..f7cc45511e0bb 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexRequestBuilder.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -76,13 +77,23 @@ public CreateIndexRequestBuilder setSettings(XContentBuilder builder) { } /** - * The settings to create the index with (either json/yaml/properties format) + * The settings to create the index with (either json or yaml format) + * @deprecated use {@link #setSettings(String, XContentType)} to avoid content type detection */ + @Deprecated public CreateIndexRequestBuilder setSettings(String source) { request.settings(source); return this; } + /** + * The settings to create the index with (either json or yaml format) + */ + public CreateIndexRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); + return this; + } + /** * A simplified version of settings that takes key value pairs settings. */ @@ -104,9 +115,10 @@ public CreateIndexRequestBuilder setSettings(Map source) { * * @param type The mapping type * @param source The mapping source + * @param xContentType The content type of the source */ - public CreateIndexRequestBuilder addMapping(String type, String source) { - request.mapping(type, source); + public CreateIndexRequestBuilder addMapping(String type, String source, XContentType xContentType) { + request.mapping(type, source, xContentType); return this; } @@ -192,32 +204,32 @@ public CreateIndexRequestBuilder addAlias(Alias alias) { /** * Sets the settings and mappings as a single source. 
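On the builder side, a sketch (assuming a `Client` named `client`) that combines the piecewise setters changed in this hunk with the new `index()` accessor on `CreateIndexResponse` further below; the settings and mapping bodies are illustrative.

```java
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentType;

public class CreateIndexBuilderExample {
    public static String create(Client client) {
        CreateIndexResponse response = client.admin().indices()
                .prepareCreate("logs")
                .setSettings("{\"index\": {\"number_of_shards\": 1}}", XContentType.JSON)
                .addMapping("event", "{\"properties\": {\"msg\": {\"type\": \"text\"}}}", XContentType.JSON)
                .get();
        return response.index();   // name of the index that was created
    }
}
```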
*/ - public CreateIndexRequestBuilder setSource(String source) { - request.source(source); + public CreateIndexRequestBuilder setSource(String source, XContentType xContentType) { + request.source(source, xContentType); return this; } /** * Sets the settings and mappings as a single source. */ - public CreateIndexRequestBuilder setSource(BytesReference source) { - request.source(source); + public CreateIndexRequestBuilder setSource(BytesReference source, XContentType xContentType) { + request.source(source, xContentType); return this; } /** * Sets the settings and mappings as a single source. */ - public CreateIndexRequestBuilder setSource(byte[] source) { - request.source(source); + public CreateIndexRequestBuilder setSource(byte[] source, XContentType xContentType) { + request.source(source, xContentType); return this; } /** * Sets the settings and mappings as a single source. */ - public CreateIndexRequestBuilder setSource(byte[] source, int offset, int length) { - request.source(source, offset, length); + public CreateIndexRequestBuilder setSource(byte[] source, int offset, int length, XContentType xContentType) { + request.source(source, offset, length, xContentType); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java index 35dd53276cd6d..7d948e7137ebf 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/CreateIndexResponse.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.create; +import org.elasticsearch.Version; import org.elasticsearch.action.support.master.AcknowledgedResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -32,14 +33,16 @@ public class CreateIndexResponse extends AcknowledgedResponse { private boolean shardsAcked; + private String index; protected CreateIndexResponse() { } - protected CreateIndexResponse(boolean acknowledged, boolean shardsAcked) { + protected CreateIndexResponse(boolean acknowledged, boolean shardsAcked, String index) { super(acknowledged); assert acknowledged || shardsAcked == false; // if its not acknowledged, then shards acked should be false too this.shardsAcked = shardsAcked; + this.index = index; } @Override @@ -47,6 +50,9 @@ public void readFrom(StreamInput in) throws IOException { super.readFrom(in); readAcknowledged(in); shardsAcked = in.readBoolean(); + if (in.getVersion().onOrAfter(Version.V_5_6_0)) { + index = in.readString(); + } } @Override @@ -54,6 +60,9 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); writeAcknowledged(out); out.writeBoolean(shardsAcked); + if (out.getVersion().onOrAfter(Version.V_5_6_0)) { + out.writeString(index); + } } /** @@ -65,7 +74,12 @@ public boolean isShardsAcked() { return shardsAcked; } + public String index() { + return index; + } + public void addCustomFields(XContentBuilder builder) throws IOException { builder.field("shards_acknowledged", isShardsAcked()); + builder.field("index", index()); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java index 354dcf2387345..0ac8d02f97760 100644 --- 
a/core/src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/create/TransportCreateIndexAction.java @@ -79,7 +79,7 @@ protected void masterOperation(final CreateIndexRequest request, final ClusterSt .waitForActiveShards(request.waitForActiveShards()); createIndexService.createIndex(updateRequest, ActionListener.wrap(response -> - listener.onResponse(new CreateIndexResponse(response.isAcknowledged(), response.isShardsAcked())), + listener.onResponse(new CreateIndexResponse(response.isAcknowledged(), response.isShardsAcked(), indexName)), listener::onFailure)); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java index 251eed8bdb88b..f5c63bd470d40 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/delete/TransportDeleteIndexAction.java @@ -78,7 +78,7 @@ protected void doExecute(Task task, DeleteIndexRequest request, ActionListener { - public static enum Feature { + public enum Feature { ALIASES((byte) 0, "_aliases", "_alias"), MAPPINGS((byte) 1, "_mappings", "_mapping"), SETTINGS((byte) 2, "_settings"); @@ -52,7 +52,7 @@ public static enum Feature { private final String preferredName; private final byte id; - private Feature(byte id, String... validNames) { + Feature(byte id, String... validNames) { assert validNames != null && validNames.length > 0; this.id = id; this.validNames = Arrays.asList(validNames); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexResponse.java index 6c2e4627523e7..36bfa81a33416 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/get/GetIndexResponse.java @@ -114,7 +114,7 @@ public void readFrom(StreamInput in) throws IOException { for (int i = 0; i < aliasesSize; i++) { String key = in.readString(); int valueSize = in.readVInt(); - List aliasEntryBuilder = new ArrayList<>(); + List aliasEntryBuilder = new ArrayList<>(valueSize); for (int j = 0; j < valueSize; j++) { aliasEntryBuilder.add(new AliasMetaData(in)); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java index e0cedcf841e47..3f4ddaf08db2c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/GetFieldMappingsResponse.java @@ -27,6 +27,7 @@ import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentHelper; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.mapper.Mapper; import java.io.IOException; @@ -108,20 +109,25 @@ public String fullName() { /** Returns the mappings as a map. Note that the returned map has a single key which is always the field's {@link Mapper#name}. 
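To show where `sourceAsMap()` typically gets called, a sketch of fetching a single field mapping through the client; the `Client` instance, index, type and field names are assumptions, and `fieldMappings(...)` comes from the surrounding response class rather than this hunk.

```java
import java.util.Map;

import org.elasticsearch.action.admin.indices.mapping.get.GetFieldMappingsResponse;
import org.elasticsearch.client.Client;

public class FieldMappingExample {
    public static Map<String, Object> messageMapping(Client client) {
        GetFieldMappingsResponse response = client.admin().indices()
                .prepareGetFieldMappings("tweets")
                .setTypes("tweet")
                .setFields("message")
                .get();
        // the returned map has a single key: the field's name
        return response.fieldMappings("tweets", "tweet", "message").sourceAsMap();
    }
}
```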
*/ public Map sourceAsMap() { - return XContentHelper.convertToMap(source, true).v2(); + return XContentHelper.convertToMap(source, true, XContentType.JSON).v2(); } public boolean isNull() { return NULL.fullName().equals(fullName) && NULL.source.length() == source.length(); } + //pkg-private for testing + BytesReference getSource() { + return source; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.field("full_name", fullName); if (params.paramAsBoolean("pretty", false)) { builder.field("mapping", sourceAsMap()); } else { - builder.rawField("mapping", source); + builder.rawField("mapping", source, XContentType.JSON); } return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java index 864c6703c48e0..92c23bb856865 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/get/TransportGetFieldMappingsIndexAction.java @@ -50,12 +50,10 @@ import java.io.IOException; import java.util.ArrayList; import java.util.Collection; -import java.util.Iterator; import java.util.Map; import java.util.stream.Collectors; import static java.util.Collections.singletonMap; -import static org.elasticsearch.common.util.CollectionUtils.newLinkedList; /** * Transport action used to retrieve the mappings related to fields that belong to a specific index @@ -174,24 +172,12 @@ private Map findFieldMappingsByType(DocumentMapper addFieldMapper(fieldMapper.fieldType().name(), fieldMapper, fieldMappings, request.includeDefaults()); } } else if (Regex.isSimpleMatchPattern(field)) { - // go through the field mappers 3 times, to make sure we give preference to the resolve order: full name, index name, name. - // also make sure we only store each mapper once. - Collection remainingFieldMappers = newLinkedList(allFieldMappers); - for (Iterator it = remainingFieldMappers.iterator(); it.hasNext(); ) { - final FieldMapper fieldMapper = it.next(); - if (Regex.simpleMatch(field, fieldMapper.fieldType().name())) { - addFieldMapper(fieldMapper.fieldType().name(), fieldMapper, fieldMappings, request.includeDefaults()); - it.remove(); - } - } - for (Iterator it = remainingFieldMappers.iterator(); it.hasNext(); ) { - final FieldMapper fieldMapper = it.next(); + for (FieldMapper fieldMapper : allFieldMappers) { if (Regex.simpleMatch(field, fieldMapper.fieldType().name())) { - addFieldMapper(fieldMapper.fieldType().name(), fieldMapper, fieldMappings, request.includeDefaults()); - it.remove(); + addFieldMapper(fieldMapper.fieldType().name(), fieldMapper, fieldMappings, + request.includeDefaults()); } } - } else { // not a pattern FieldMapper fieldMapper = allFieldMappers.smartNameFieldMapper(field); @@ -220,4 +206,4 @@ private void addFieldMapper(String field, FieldMapper fieldMapper, MapBuilder im private static ObjectHashSet RESERVED_FIELDS = ObjectHashSet.from( "_uid", "_id", "_type", "_source", "_all", "_analyzer", "_parent", "_routing", "_index", - "_size", "_timestamp", "_ttl" + "_size", "_timestamp", "_ttl", "_field_names" ); private String[] indices; @@ -245,7 +250,7 @@ public static XContentBuilder buildFromSimplifiedDef(String type, Object... 
sour */ public PutMappingRequest source(XContentBuilder mappingBuilder) { try { - return source(mappingBuilder.string()); + return source(mappingBuilder.string(), mappingBuilder.contentType()); } catch (IOException e) { throw new IllegalArgumentException("Failed to build json for mapping request", e); } @@ -259,7 +264,7 @@ public PutMappingRequest source(Map mappingSource) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(mappingSource); - return source(builder.string()); + return source(builder.string(), XContentType.JSON); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + mappingSource + "]", e); } @@ -268,9 +273,21 @@ public PutMappingRequest source(Map mappingSource) { /** * The mapping source definition. */ - public PutMappingRequest source(String mappingSource) { - this.source = mappingSource; - return this; + public PutMappingRequest source(String mappingSource, XContentType xContentType) { + return source(new BytesArray(mappingSource), xContentType); + } + + /** + * The mapping source definition. + */ + public PutMappingRequest source(BytesReference mappingSource, XContentType xContentType) { + Objects.requireNonNull(xContentType); + try { + this.source = XContentHelper.convertToJson(mappingSource, false, false, xContentType); + return this; + } catch (IOException e) { + throw new UncheckedIOException("failed to convert source to json", e); + } } /** True if all fields that span multiple types should be updated, false otherwise */ @@ -291,6 +308,10 @@ public void readFrom(StreamInput in) throws IOException { indicesOptions = IndicesOptions.readIndicesOptions(in); type = in.readOptionalString(); source = in.readString(); + if (in.getVersion().before(Version.V_5_3_0)) { + // we do not know the format from earlier versions so convert if necessary + source = XContentHelper.convertToJson(new BytesArray(source), false, false, XContentFactory.xContentType(source)); + } updateAllTypes = in.readBoolean(); readTimeout(in); concreteIndex = in.readOptionalWriteable(Index::new); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java index c21c40cf041ea..43bfe78c4871b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/mapping/put/PutMappingRequestBuilder.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.Index; import java.util.Map; @@ -83,8 +84,8 @@ public PutMappingRequestBuilder setSource(Map mappingSource) { /** * The mapping source definition. 
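A caller-side sketch of the put-mapping builder with the explicit content type now required below; the index, type and mapping body are illustrative, and the `Client` instance is assumed.

```java
import org.elasticsearch.client.Client;
import org.elasticsearch.common.xcontent.XContentType;

public class PutMappingExample {
    public static void addUserField(Client client) {
        client.admin().indices()
                .preparePutMapping("tweets")
                .setType("tweet")
                .setSource("{\"properties\": {\"user\": {\"type\": \"keyword\"}}}", XContentType.JSON)
                .get();
    }
}
```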
*/ - public PutMappingRequestBuilder setSource(String mappingSource) { - request.source(mappingSource); + public PutMappingRequestBuilder setSource(String mappingSource, XContentType xContentType) { + request.source(mappingSource, xContentType); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java index 3f3b5f5c31c8e..06affe8ee69a3 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/open/OpenIndexRequest.java @@ -37,7 +37,7 @@ public class OpenIndexRequest extends AcknowledgedRequest implements IndicesRequest.Replaceable { private String[] indices; - private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, false, false, true); + private IndicesOptions indicesOptions = IndicesOptions.fromOptions(false, true, false, true); public OpenIndexRequest() { } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java index 1128ebf9875fd..451b9a280be68 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/open/TransportOpenIndexAction.java @@ -22,6 +22,7 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.DestructiveOperations; import org.elasticsearch.action.support.master.TransportMasterNodeAction; @@ -82,6 +83,10 @@ protected ClusterBlockException checkBlock(OpenIndexRequest request, ClusterStat @Override protected void masterOperation(final OpenIndexRequest request, final ClusterState state, final ActionListener listener) { final Index[] concreteIndices = indexNameExpressionResolver.concreteIndices(state, request); + if (concreteIndices == null || concreteIndices.length == 0) { + listener.onResponse(new OpenIndexResponse(true)); + return; + } OpenIndexClusterStateUpdateRequest updateRequest = new OpenIndexClusterStateUpdateRequest() .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout()) .indices(concreteIndices); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java index d8e9d8c0b9e72..19cc1b134d7fb 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/refresh/TransportShardRefreshAction.java @@ -24,7 +24,7 @@ import org.elasticsearch.action.support.replication.ReplicationResponse; import org.elasticsearch.action.support.replication.TransportReplicationAction; import org.elasticsearch.cluster.action.shard.ShardStateAction; -import org.elasticsearch.cluster.block.ClusterBlockLevel; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.service.ClusterService; import 
org.elasticsearch.common.inject.Inject; @@ -66,9 +66,4 @@ protected ReplicaResult shardOperationOnReplica(BasicReplicationRequest request, logger.trace("{} refresh request executed on replica", replica.shardId()); return new ReplicaResult(); } - - @Override - protected boolean shouldExecuteReplication(Settings settings) { - return true; - } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java index 8d9b48f20008e..d6bfaf0a48cec 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/Condition.java @@ -20,7 +20,6 @@ package org.elasticsearch.action.admin.indices.rollover; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.NamedWriteable; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.ObjectParser; @@ -32,8 +31,7 @@ */ public abstract class Condition implements NamedWriteable { - public static ObjectParser, ParseFieldMatcherSupplier> PARSER = - new ObjectParser<>("conditions", null); + public static ObjectParser, Void> PARSER = new ObjectParser<>("conditions", null); static { PARSER.declareString((conditions, s) -> conditions.add(new MaxAgeCondition(TimeValue.parseTimeValue(s, MaxAgeCondition.NAME))), @@ -49,7 +47,7 @@ protected Condition(String name) { this.name = name; } - public abstract Result evaluate(final Stats stats); + public abstract Result evaluate(Stats stats); @Override public final String toString() { diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java index ddd58705bea1f..4804bc577fc58 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverRequest.java @@ -25,7 +25,6 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; @@ -44,23 +43,19 @@ */ public class RolloverRequest extends AcknowledgedRequest implements IndicesRequest { - public static final ObjectParser PARSER = - new ObjectParser<>("conditions", null); + public static final ObjectParser PARSER = new ObjectParser<>("conditions", null); static { - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - Condition.PARSER.parse(parser, request.conditions, parseFieldMatcherSupplier), + PARSER.declareField((parser, request, context) -> Condition.PARSER.parse(parser, request.conditions, null), new ParseField("conditions"), ObjectParser.ValueType.OBJECT); - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - request.createIndexRequest.settings(parser.map()), + PARSER.declareField((parser, request, context) -> request.createIndexRequest.settings(parser.map()), new ParseField("settings"), ObjectParser.ValueType.OBJECT); - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> { + PARSER.declareField((parser, request, context) 
-> { for (Map.Entry mappingsEntry : parser.map().entrySet()) { request.createIndexRequest.mapping(mappingsEntry.getKey(), (Map) mappingsEntry.getValue()); } }, new ParseField("mappings"), ObjectParser.ValueType.OBJECT); - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - request.createIndexRequest.aliases(parser.map()), + PARSER.declareField((parser, request, context) -> request.createIndexRequest.aliases(parser.map()), new ParseField("aliases"), ObjectParser.ValueType.OBJECT); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverResponse.java index b495e3c6a0f32..8c1be3501a820 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/rollover/RolloverResponse.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -32,7 +32,7 @@ import java.util.Set; import java.util.stream.Collectors; -public final class RolloverResponse extends ActionResponse implements ToXContent { +public final class RolloverResponse extends ActionResponse implements ToXContentObject { private static final String NEW_INDEX = "new_index"; private static final String OLD_INDEX = "old_index"; @@ -157,6 +157,7 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.field(OLD_INDEX, oldIndex); builder.field(NEW_INDEX, newIndex); builder.field(ROLLED_OVER, rolledOver); @@ -168,6 +169,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(entry.getKey(), entry.getValue()); } builder.endObject(); + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentResponse.java index ed9463d1544e1..43b1033044c8c 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/segments/IndicesSegmentResponse.java @@ -19,6 +19,10 @@ package org.elasticsearch.action.admin.indices.segments; +import org.apache.lucene.search.Sort; +import org.apache.lucene.search.SortField; +import org.apache.lucene.search.SortedNumericSortField; +import org.apache.lucene.search.SortedSetSortField; import org.apache.lucene.util.Accountable; import org.elasticsearch.action.ShardOperationFailedException; import org.elasticsearch.action.support.broadcast.BroadcastResponse; @@ -37,6 +41,7 @@ import java.util.List; import java.util.Map; import java.util.Set; +import java.util.Locale; public class IndicesSegmentResponse extends BroadcastResponse implements ToXContent { @@ -140,6 +145,9 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (segment.getMergeId() != null) { builder.field(Fields.MERGE_ID, segment.getMergeId()); } + if (segment.getSegmentSort() != null) { + toXContent(builder, 
segment.getSegmentSort()); + } if (segment.ramTree != null) { builder.startArray(Fields.RAM_TREE); for (Accountable child : segment.ramTree.getChildResources()) { @@ -164,6 +172,25 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws return builder; } + static void toXContent(XContentBuilder builder, Sort sort) throws IOException { + builder.startArray("sort"); + for (SortField field : sort.getSort()) { + builder.startObject(); + builder.field("field", field.getField()); + if (field instanceof SortedNumericSortField) { + builder.field("mode", ((SortedNumericSortField) field).getSelector() + .toString().toLowerCase(Locale.ROOT)); + } else if (field instanceof SortedSetSortField) { + builder.field("mode", ((SortedSetSortField) field).getSelector() + .toString().toLowerCase(Locale.ROOT)); + } + builder.field("missing", field.getMissingValue()); + builder.field("reverse", field.getReverse()); + builder.endObject(); + } + builder.endArray(); + } + static void toXContent(XContentBuilder builder, Accountable tree) throws IOException { builder.startObject(); builder.field(Fields.DESCRIPTION, tree.toString()); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java index 67099b4d1004e..d20957c4bd29b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/TransportUpdateSettingsAction.java @@ -62,7 +62,10 @@ protected ClusterBlockException checkBlock(UpdateSettingsRequest request, Cluste if (globalBlock != null) { return globalBlock; } - if (request.settings().getAsMap().size() == 1 && IndexMetaData.INDEX_BLOCKS_METADATA_SETTING.exists(request.settings()) || IndexMetaData.INDEX_READ_ONLY_SETTING.exists(request.settings())) { + if (request.settings().size() == 1 && // we have to allow resetting these settings otherwise users can't unblock an index + IndexMetaData.INDEX_BLOCKS_METADATA_SETTING.exists(request.settings()) + || IndexMetaData.INDEX_READ_ONLY_SETTING.exists(request.settings()) + || IndexMetaData.INDEX_BLOCKS_READ_ONLY_ALLOW_DELETE_SETTING.exists(request.settings())) { return null; } return state.blocks().indicesBlockedException(ClusterBlockLevel.METADATA_WRITE, indexNameExpressionResolver.concreteIndexNames(state, request)); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java index 494b9df7bd37b..f07e913e9c82e 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequest.java @@ -70,7 +70,7 @@ public UpdateSettingsRequest(Settings settings, String... 
indices) { @Override public ActionRequestValidationException validate() { ActionRequestValidationException validationException = null; - if (settings.getAsMap().isEmpty()) { + if (settings.isEmpty()) { validationException = addValidationError("no settings to update", validationException); } return validationException; @@ -121,10 +121,10 @@ public UpdateSettingsRequest settings(Settings.Builder settings) { } /** - * Sets the settings to be updated (either json/yaml/properties format) + * Sets the settings to be updated (either json or yaml format) */ - public UpdateSettingsRequest settings(String source) { - this.settings = Settings.builder().loadFromSource(source).build(); + public UpdateSettingsRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); return this; } @@ -146,14 +146,14 @@ public UpdateSettingsRequest setPreserveExisting(boolean preserveExisting) { } /** - * Sets the settings to be updated (either json/yaml/properties format) + * Sets the settings to be updated (either json or yaml format) */ @SuppressWarnings("unchecked") public UpdateSettingsRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), builder.contentType()); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java index 36dfbf3b2d49b..8cf86fadc1673 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/settings/put/UpdateSettingsRequestBuilder.java @@ -23,6 +23,7 @@ import org.elasticsearch.action.support.master.AcknowledgedRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Map; @@ -70,15 +71,15 @@ public UpdateSettingsRequestBuilder setSettings(Settings.Builder settings) { } /** - * Sets the settings to be updated (either json/yaml/properties format) + * Sets the settings to be updated (either json or yaml format) */ - public UpdateSettingsRequestBuilder setSettings(String source) { - request.settings(source); + public UpdateSettingsRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); return this; } /** - * Sets the settings to be updated (either json/yaml/properties format) + * Sets the settings to be updated */ public UpdateSettingsRequestBuilder setSettings(Map source) { request.settings(source); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java index f1df6d53e185d..8cded12b03073 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/IndicesShardStoresResponse.java @@ -21,7 +21,9 @@ import com.carrotsearch.hppc.cursors.IntObjectCursor; import com.carrotsearch.hppc.cursors.ObjectObjectCursor; + import 
org.elasticsearch.ElasticsearchException; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.action.ShardOperationFailedException; import org.elasticsearch.action.support.DefaultShardOperationFailedException; @@ -33,7 +35,6 @@ import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.index.shard.ShardStateMetaData; import java.io.IOException; import java.util.ArrayList; @@ -55,7 +56,6 @@ public class IndicesShardStoresResponse extends ActionResponse implements ToXCon */ public static class StoreStatus implements Streamable, ToXContent, Comparable { private DiscoveryNode node; - private long legacyVersion; private String allocationId; private Exception storeException; private AllocationStatus allocationStatus; @@ -116,9 +116,8 @@ private void writeTo(StreamOutput out) throws IOException { private StoreStatus() { } - public StoreStatus(DiscoveryNode node, long legacyVersion, String allocationId, AllocationStatus allocationStatus, Exception storeException) { + public StoreStatus(DiscoveryNode node, String allocationId, AllocationStatus allocationStatus, Exception storeException) { this.node = node; - this.legacyVersion = legacyVersion; this.allocationId = allocationId; this.allocationStatus = allocationStatus; this.storeException = storeException; @@ -131,13 +130,6 @@ public DiscoveryNode getNode() { return node; } - /** - * Version of the store for pre-3.0 shards that have not yet been active - */ - public long getLegacyVersion() { - return legacyVersion; - } - /** * AllocationStatus id of the store, used to select the store that will be * used as a primary. @@ -173,7 +165,10 @@ public static StoreStatus readStoreStatus(StreamInput in) throws IOException { @Override public void readFrom(StreamInput in) throws IOException { node = new DiscoveryNode(in); - legacyVersion = in.readLong(); + if (in.getVersion().before(Version.V_6_0_0_alpha1)) { + // legacy version + in.readLong(); + } allocationId = in.readOptionalString(); allocationStatus = AllocationStatus.readFrom(in); if (in.readBoolean()) { @@ -184,7 +179,10 @@ public void readFrom(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { node.writeTo(out); - out.writeLong(legacyVersion); + if (out.getVersion().before(Version.V_6_0_0_alpha1)) { + // legacy version + out.writeLong(-1L); + } out.writeOptionalString(allocationId); allocationStatus.writeTo(out); if (storeException != null) { @@ -198,16 +196,13 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { node.toXContent(builder, params); - if (legacyVersion != ShardStateMetaData.NO_VERSION) { - builder.field(Fields.LEGACY_VERSION, legacyVersion); - } if (allocationId != null) { builder.field(Fields.ALLOCATION_ID, allocationId); } builder.field(Fields.ALLOCATED, allocationStatus.value()); if (storeException != null) { builder.startObject(Fields.STORE_EXCEPTION); - ElasticsearchException.toXContent(builder, params, storeException); + ElasticsearchException.generateThrowableXContent(builder, params, storeException); builder.endObject(); } return builder; @@ -225,11 +220,7 @@ public int compareTo(StoreStatus other) { } else if (allocationId == null && other.allocationId != null) { return 1; } else if (allocationId == null && other.allocationId 
== null) { - int compare = Long.compare(other.legacyVersion, legacyVersion); - if (compare == 0) { - return Integer.compare(allocationStatus.id, other.allocationStatus.id); - } - return compare; + return Integer.compare(allocationStatus.id, other.allocationStatus.id); } else { int compare = Integer.compare(allocationStatus.id, other.allocationStatus.id); if (compare == 0) { @@ -405,7 +396,6 @@ static final class Fields { static final String FAILURES = "failures"; static final String STORES = "stores"; // StoreStatus fields - static final String LEGACY_VERSION = "legacy_version"; static final String ALLOCATION_ID = "allocation_id"; static final String STORE_EXCEPTION = "store_exception"; static final String ALLOCATED = "allocation"; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java index e13578d66de63..c11a2ded83d4b 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shards/TransportIndicesShardStoresAction.java @@ -29,7 +29,6 @@ import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.health.ClusterHealthStatus; import org.elasticsearch.cluster.health.ClusterShardHealth; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.node.DiscoveryNodes; @@ -155,7 +154,7 @@ private class InternalAsyncFetch extends AsyncShardFetch responses, List failures) { + protected synchronized void processAsyncFetch(List responses, List failures, long fetchingRound) { fetchResponses.add(new Response(shardId, responses, failures)); if (expectedOps.countDown()) { finish(); @@ -180,7 +179,7 @@ void finish() { for (NodeGatewayStartedShards response : fetchResponse.responses) { if (shardExistsInNode(response)) { IndicesShardStoresResponse.StoreStatus.AllocationStatus allocationStatus = getAllocationStatus(fetchResponse.shardId.getIndexName(), fetchResponse.shardId.id(), response.getNode()); - storeStatuses.add(new IndicesShardStoresResponse.StoreStatus(response.getNode(), response.legacyVersion(), response.allocationId(), allocationStatus, response.storeException())); + storeStatuses.add(new IndicesShardStoresResponse.StoreStatus(response.getNode(), response.allocationId(), allocationStatus, response.storeException())); } } CollectionUtil.timSort(storeStatuses); @@ -213,7 +212,7 @@ private IndicesShardStoresResponse.StoreStatus.AllocationStatus getAllocationSta * A shard exists/existed in a node only if shard state file exists in the node */ private boolean shardExistsInNode(final NodeGatewayStartedShards response) { - return response.storeException() != null || response.legacyVersion() != -1 || response.allocationId() != null; + return response.storeException() != null || response.allocationId() != null; } @Override @@ -226,7 +225,7 @@ public class Response { private final List responses; private final List failures; - public Response(ShardId shardId, List responses, List failures) { + Response(ShardId shardId, List responses, List failures) { this.shardId = shardId; this.responses = responses; this.failures = failures; diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java 
b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java index 40a11402501ca..6ea58200a4500 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkRequest.java @@ -25,7 +25,6 @@ import org.elasticsearch.action.support.IndicesOptions; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.common.ParseField; -import org.elasticsearch.common.ParseFieldMatcherSupplier; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ObjectParser; @@ -40,14 +39,11 @@ */ public class ShrinkRequest extends AcknowledgedRequest implements IndicesRequest { - public static final ObjectParser PARSER = - new ObjectParser<>("shrink_request", null); + public static final ObjectParser PARSER = new ObjectParser<>("shrink_request", null); static { - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - request.getShrinkIndexRequest().settings(parser.map()), + PARSER.declareField((parser, request, context) -> request.getShrinkIndexRequest().settings(parser.map()), new ParseField("settings"), ObjectParser.ValueType.OBJECT); - PARSER.declareField((parser, request, parseFieldMatcherSupplier) -> - request.getShrinkIndexRequest().aliases(parser.map()), + PARSER.declareField((parser, request, context) -> request.getShrinkIndexRequest().aliases(parser.map()), new ParseField("aliases"), ObjectParser.ValueType.OBJECT); } @@ -70,6 +66,9 @@ public ActionRequestValidationException validate() { if (shrinkIndexRequest == null) { validationException = addValidationError("shrink index request is missing", validationException); } + if (shrinkIndexRequest.settings().getByPrefix("index.sort.").isEmpty() == false) { + validationException = addValidationError("can't override index sort when shrinking index", validationException); + } return validationException; } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java index e7ad0afe3aa17..0c5149f6bf353 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/ShrinkResponse.java @@ -25,7 +25,7 @@ public final class ShrinkResponse extends CreateIndexResponse { ShrinkResponse() { } - ShrinkResponse(boolean acknowledged, boolean shardsAcked) { - super(acknowledged, shardsAcked); + ShrinkResponse(boolean acknowledged, boolean shardsAcked, String index) { + super(acknowledged, shardsAcked, index); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java index 6d27b03db6398..2555299709cda 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/shrink/TransportShrinkAction.java @@ -91,8 +91,13 @@ public void onResponse(IndicesStatsResponse indicesStatsResponse) { IndexShardStats shard = indicesStatsResponse.getIndex(sourceIndex).getIndexShards().get(i); return shard == null ? 
null : shard.getPrimary().getDocs(); }, indexNameExpressionResolver); - createIndexService.createIndex(updateRequest, ActionListener.wrap(response -> - listener.onResponse(new ShrinkResponse(response.isAcknowledged(), response.isShardsAcked())), listener::onFailure)); + createIndexService.createIndex( + updateRequest, + ActionListener.wrap(response -> + listener.onResponse(new ShrinkResponse(response.isAcknowledged(), response.isShardsAcked(), updateRequest.index())), + listener::onFailure + ) + ); } @Override @@ -131,6 +136,9 @@ static CreateIndexClusterStateUpdateRequest prepareCreateIndexRequest(final Shri } } + if (IndexMetaData.INDEX_ROUTING_PARTITION_SIZE_SETTING.exists(targetIndexSettings)) { + throw new IllegalArgumentException("cannot provide a routing partition size value when shrinking an index"); + } targetIndex.cause("shrink_index"); Settings.Builder settingsBuilder = Settings.builder().put(targetIndexSettings); settingsBuilder.put("index.number_of_shards", numShards); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java index 150b7c6a52bc5..3d1e567fa1cea 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/stats/ShardStats.java @@ -104,7 +104,7 @@ public void readFrom(StreamInput in) throws IOException { statePath = in.readString(); dataPath = in.readString(); isCustomDataPath = in.readBoolean(); - if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { seqNoStats = in.readOptionalWriteable(SeqNoStats::new); } } @@ -117,7 +117,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(statePath); out.writeString(dataPath); out.writeBoolean(isCustomDataPath); - if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { out.writeOptionalWriteable(seqNoStats); } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesResponse.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesResponse.java index 02b08b28f98f4..3c5fb36d6c6aa 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesResponse.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/get/GetIndexTemplatesResponse.java @@ -23,6 +23,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -31,7 +32,7 @@ import static java.util.Collections.singletonMap; -public class GetIndexTemplatesResponse extends ActionResponse implements ToXContent { +public class GetIndexTemplatesResponse extends ActionResponse implements ToXContentObject { private List indexTemplates; @@ -68,10 +69,11 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { params = new ToXContent.DelegatingMapParams(singletonMap("reduce_mappings", "true"), params); - + builder.startObject(); for (IndexTemplateMetaData indexTemplateMetaData : getIndexTemplates()) { 
IndexTemplateMetaData.Builder.toXContent(indexTemplateMetaData, builder, params); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java index a7d6241d31e2b..99ad163f48dd2 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequest.java @@ -45,11 +45,13 @@ import org.elasticsearch.common.xcontent.support.XContentMapValues; import java.io.IOException; +import java.io.UncheckedIOException; import java.util.Collections; import java.util.HashMap; import java.util.HashSet; import java.util.List; import java.util.Map; +import java.util.Objects; import java.util.Set; import java.util.stream.Collectors; @@ -179,21 +181,21 @@ public PutIndexTemplateRequest settings(Settings.Builder settings) { } /** - * The settings to create the index template with (either json/yaml/properties format). + * The settings to create the index template with (either json/yaml format). */ - public PutIndexTemplateRequest settings(String source) { - this.settings = Settings.builder().loadFromSource(source).build(); + public PutIndexTemplateRequest settings(String source, XContentType xContentType) { + this.settings = Settings.builder().loadFromSource(source, xContentType).build(); return this; } /** - * The settings to crete the index template with (either json/yaml/properties format). + * The settings to create the index template with (either json or yaml format). */ public PutIndexTemplateRequest settings(Map source) { try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - settings(builder.string()); + settings(builder.string(), XContentType.JSON); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -209,10 +211,10 @@ public Settings settings() { * * @param type The mapping type * @param source The mapping source + * @param xContentType The type of content contained within the source */ - public PutIndexTemplateRequest mapping(String type, String source) { - mappings.put(type, source); - return this; + public PutIndexTemplateRequest mapping(String type, String source, XContentType xContentType) { + return mapping(type, new BytesArray(source), xContentType); } /** @@ -234,12 +236,24 @@ public String cause() { * @param source The mapping source */ public PutIndexTemplateRequest mapping(String type, XContentBuilder source) { + return mapping(type, source.bytes(), source.contentType()); + } + + /** + * Adds mapping that will be added when the index gets created. 
+ * + * @param type The mapping type + * @param source The mapping source + * @param xContentType the source content type + */ + public PutIndexTemplateRequest mapping(String type, BytesReference source, XContentType xContentType) { + Objects.requireNonNull(xContentType); try { - mappings.put(type, source.string()); + mappings.put(type, XContentHelper.convertToJson(source, false, false, xContentType)); + return this; } catch (IOException e) { - throw new IllegalArgumentException("Failed to build json for mapping request", e); + throw new UncheckedIOException("failed to convert source to json", e); } - return this; } /** @@ -256,7 +270,7 @@ public PutIndexTemplateRequest mapping(String type, Map source) try { XContentBuilder builder = XContentFactory.contentBuilder(XContentType.JSON); builder.map(source); - return mapping(type, builder.string()); + return mapping(type, builder); } catch (IOException e) { throw new ElasticsearchGenerationException("Failed to generate [" + source + "]", e); } @@ -280,7 +294,7 @@ public Map mappings() { */ public PutIndexTemplateRequest source(XContentBuilder templateBuilder) { try { - return source(templateBuilder.bytes()); + return source(templateBuilder.bytes(), templateBuilder.contentType()); } catch (Exception e) { throw new IllegalArgumentException("Failed to build json for template request", e); } @@ -351,29 +365,29 @@ public PutIndexTemplateRequest source(Map templateSource) { /** * The template source definition. */ - public PutIndexTemplateRequest source(String templateSource) { - return source(XContentHelper.convertToMap(XContentFactory.xContent(templateSource), templateSource, true)); + public PutIndexTemplateRequest source(String templateSource, XContentType xContentType) { + return source(XContentHelper.convertToMap(xContentType.xContent(), templateSource, true)); } /** * The template source definition. */ - public PutIndexTemplateRequest source(byte[] source) { - return source(source, 0, source.length); + public PutIndexTemplateRequest source(byte[] source, XContentType xContentType) { + return source(source, 0, source.length, xContentType); } /** * The template source definition. */ - public PutIndexTemplateRequest source(byte[] source, int offset, int length) { - return source(new BytesArray(source, offset, length)); + public PutIndexTemplateRequest source(byte[] source, int offset, int length, XContentType xContentType) { + return source(new BytesArray(source, offset, length), xContentType); } /** * The template source definition. 
*/ - public PutIndexTemplateRequest source(BytesReference source) { - return source(XContentHelper.convertToMap(source, true).v2()); + public PutIndexTemplateRequest source(BytesReference source, XContentType xContentType) { + return source(XContentHelper.convertToMap(source, true, xContentType).v2()); } public PutIndexTemplateRequest custom(IndexMetaData.Custom custom) { @@ -461,7 +475,7 @@ public void readFrom(StreamInput in) throws IOException { cause = in.readString(); name = in.readString(); - if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { indexPatterns = in.readList(StreamInput::readString); } else { indexPatterns = Collections.singletonList(in.readString()); @@ -471,7 +485,14 @@ public void readFrom(StreamInput in) throws IOException { settings = readSettingsFromStream(in); int size = in.readVInt(); for (int i = 0; i < size; i++) { - mappings.put(in.readString(), in.readString()); + final String type = in.readString(); + String mappingSource = in.readString(); + if (in.getVersion().before(Version.V_5_3_0)) { + // we do not know the incoming type so convert it if needed + mappingSource = + XContentHelper.convertToJson(new BytesArray(mappingSource), false, false, XContentFactory.xContentType(mappingSource)); + } + mappings.put(type, mappingSource); } int customSize = in.readVInt(); for (int i = 0; i < customSize; i++) { @@ -491,7 +512,7 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); out.writeString(cause); out.writeString(name); - if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { out.writeStringList(indexPatterns); } else { out.writeString(indexPatterns.size() > 0 ? 
indexPatterns.get(0) : ""); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java index c1db96ae7ce5c..7b365f94ab498 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/template/put/PutIndexTemplateRequestBuilder.java @@ -24,6 +24,7 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentType; import java.util.Collections; import java.util.List; @@ -100,15 +101,15 @@ public PutIndexTemplateRequestBuilder setSettings(Settings.Builder settings) { } /** - * The settings to crete the index template with (either json/yaml/properties format) + * The settings to create the index template with (either json or yaml format) */ - public PutIndexTemplateRequestBuilder setSettings(String source) { - request.settings(source); + public PutIndexTemplateRequestBuilder setSettings(String source, XContentType xContentType) { + request.settings(source, xContentType); return this; } /** - * The settings to crete the index template with (either json/yaml/properties format) + * The settings to create the index template with (either json or yaml format) */ public PutIndexTemplateRequestBuilder setSettings(Map source) { request.settings(source); @@ -120,9 +121,10 @@ public PutIndexTemplateRequestBuilder setSettings(Map source) { * * @param type The mapping type * @param source The mapping source + * @param xContentType The type/format of the source */ - public PutIndexTemplateRequestBuilder addMapping(String type, String source) { - request.mapping(type, source); + public PutIndexTemplateRequestBuilder addMapping(String type, String source, XContentType xContentType) { + request.mapping(type, source, xContentType); return this; } @@ -227,32 +229,24 @@ public PutIndexTemplateRequestBuilder setSource(Map templateSource) { /** * The template source definition. */ - public PutIndexTemplateRequestBuilder setSource(String templateSource) { - request.source(templateSource); + public PutIndexTemplateRequestBuilder setSource(BytesReference templateSource, XContentType xContentType) { + request.source(templateSource, xContentType); return this; } /** * The template source definition. */ - public PutIndexTemplateRequestBuilder setSource(BytesReference templateSource) { - request.source(templateSource); - return this; - } - - /** - * The template source definition. - */ - public PutIndexTemplateRequestBuilder setSource(byte[] templateSource) { - request.source(templateSource); + public PutIndexTemplateRequestBuilder setSource(byte[] templateSource, XContentType xContentType) { + request.source(templateSource, xContentType); return this; } /** * The template source definition. 
*/ - public PutIndexTemplateRequestBuilder setSource(byte[] templateSource, int offset, int length) { - request.source(templateSource, offset, length); + public PutIndexTemplateRequestBuilder setSource(byte[] templateSource, int offset, int length, XContentType xContentType) { + request.source(templateSource, offset, length, xContentType); return this; } } diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java index 6da503ef8281c..df9c12c95f4c9 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/QueryExplanation.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.validate.query; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; @@ -27,20 +28,26 @@ public class QueryExplanation implements Streamable { + public static final int RANDOM_SHARD = -1; + private String index; - + + private int shard = RANDOM_SHARD; + private boolean valid; - + private String explanation; - + private String error; QueryExplanation() { - + } - - public QueryExplanation(String index, boolean valid, String explanation, String error) { + + public QueryExplanation(String index, int shard, boolean valid, String explanation, + String error) { this.index = index; + this.shard = shard; this.valid = valid; this.explanation = explanation; this.error = error; @@ -50,6 +57,10 @@ public String getIndex() { return this.index; } + public int getShard() { + return this.shard; + } + public boolean isValid() { return this.valid; } @@ -65,6 +76,11 @@ public String getExplanation() { @Override public void readFrom(StreamInput in) throws IOException { index = in.readString(); + if (in.getVersion().onOrAfter(Version.V_5_4_0)) { + shard = in.readInt(); + } else { + shard = RANDOM_SHARD; + } valid = in.readBoolean(); explanation = in.readOptionalString(); error = in.readOptionalString(); @@ -73,6 +89,9 @@ public void readFrom(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { out.writeString(index); + if (out.getVersion().onOrAfter(Version.V_5_4_0)) { + out.writeInt(shard); + } out.writeBoolean(valid); out.writeOptionalString(explanation); out.writeOptionalString(error); diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java index b80b721149cd9..3a13915b3aaea 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/TransportValidateQueryAction.java @@ -89,8 +89,14 @@ protected ShardValidateQueryResponse newShardResponse() { @Override protected GroupShardsIterator shards(ClusterState clusterState, ValidateQueryRequest request, String[] concreteIndices) { - // Hard-code routing to limit request to a single shard, but still, randomize it... 
- Map> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, Integer.toString(Randomness.get().nextInt(1000)), request.indices()); + final String routing; + if (request.allShards()) { + routing = null; + } else { + // Random routing to limit request to a single shard + routing = Integer.toString(Randomness.get().nextInt(1000)); + } + Map> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, routing, request.indices()); return clusterService.operationRouting().searchShards(clusterState, concreteIndices, routingMap, "_local"); } @@ -124,12 +130,13 @@ protected ValidateQueryResponse newResponse(ValidateQueryRequest request, Atomic } else { ShardValidateQueryResponse validateQueryResponse = (ShardValidateQueryResponse) shardResponse; valid = valid && validateQueryResponse.isValid(); - if (request.explain() || request.rewrite()) { + if (request.explain() || request.rewrite() || request.allShards()) { if (queryExplanations == null) { queryExplanations = new ArrayList<>(); } queryExplanations.add(new QueryExplanation( validateQueryResponse.getIndex(), + request.allShards() ? validateQueryResponse.getShardId().getId() : QueryExplanation.RANDOM_SHARD, validateQueryResponse.isValid(), validateQueryResponse.getExplanation(), validateQueryResponse.getError() diff --git a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequest.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequest.java index 41ef37ad621f1..5953a5548c465 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequest.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequest.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.admin.indices.validate.query; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.ValidateActions; import org.elasticsearch.action.support.IndicesOptions; @@ -43,6 +44,7 @@ public class ValidateQueryRequest extends BroadcastRequest private boolean explain; private boolean rewrite; + private boolean allShards; private String[] types = Strings.EMPTY_ARRAY; @@ -125,6 +127,20 @@ public boolean rewrite() { return rewrite; } + /** + * Indicates whether the query should be validated on all shards instead of one random shard + */ + public void allShards(boolean allShards) { + this.allShards = allShards; + } + + /** + * Indicates whether the query should be validated on all shards instead of one random shard + */ + public boolean allShards() { + return allShards; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -138,6 +154,9 @@ public void readFrom(StreamInput in) throws IOException { } explain = in.readBoolean(); rewrite = in.readBoolean(); + if (in.getVersion().onOrAfter(Version.V_5_4_0)) { + allShards = in.readBoolean(); + } } @Override @@ -150,11 +169,14 @@ public void writeTo(StreamOutput out) throws IOException { } out.writeBoolean(explain); out.writeBoolean(rewrite); + if (out.getVersion().onOrAfter(Version.V_5_4_0)) { + out.writeBoolean(allShards); + } } @Override public String toString() { return "[" + Arrays.toString(indices) + "]" + Arrays.toString(types) + ", query[" + query + "], explain:" + explain + - ", rewrite:" + rewrite; + ", rewrite:" + rewrite + ", all_shards:" + allShards; } } diff --git 
a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequestBuilder.java index 8e377968980c6..bd8067e05cb9f 100644 --- a/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/admin/indices/validate/query/ValidateQueryRequestBuilder.java @@ -64,4 +64,12 @@ public ValidateQueryRequestBuilder setRewrite(boolean rewrite) { request.rewrite(rewrite); return this; } + + /** + * Indicates whether the query should be validated on all shards instead of one random shard + */ + public ValidateQueryRequestBuilder setAllShards(boolean allShards) { + request.allShards(allShards); + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BackoffPolicy.java b/core/src/main/java/org/elasticsearch/action/bulk/BackoffPolicy.java index bc8f8c347ab0e..81084e22377e5 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BackoffPolicy.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BackoffPolicy.java @@ -171,7 +171,7 @@ private static final class ConstantBackoff extends BackoffPolicy { private final int numberOfElements; - public ConstantBackoff(TimeValue delay, int numberOfElements) { + ConstantBackoff(TimeValue delay, int numberOfElements) { assert numberOfElements >= 0; this.delay = delay; this.numberOfElements = numberOfElements; @@ -188,7 +188,7 @@ private static final class ConstantBackoffIterator implements Iterator private final Iterator delegate; private final Runnable onBackoff; - public WrappedBackoffIterator(Iterator delegate, Runnable onBackoff) { + WrappedBackoffIterator(Iterator delegate, Runnable onBackoff) { this.delegate = delegate; this.onBackoff = onBackoff; } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java index 987aa36585b7a..39e03277c37bb 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemRequest.java @@ -19,7 +19,9 @@ package org.elasticsearch.action.bulk; +import org.elasticsearch.Version; import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; @@ -31,12 +33,12 @@ public class BulkItemRequest implements Streamable { private int id; private DocWriteRequest request; private volatile BulkItemResponse primaryResponse; - private volatile boolean ignoreOnReplica; BulkItemRequest() { } + // NOTE: public for testing only public BulkItemRequest(int id, DocWriteRequest request) { this.id = id; this.request = request; @@ -63,17 +65,6 @@ void setPrimaryResponse(BulkItemResponse primaryResponse) { this.primaryResponse = primaryResponse; } - /** - * Marks this request to be ignored and *not* execute on a replica. 
- */ - void setIgnoreOnReplica() { - this.ignoreOnReplica = true; - } - - boolean isIgnoreOnReplica() { - return ignoreOnReplica; - } - public static BulkItemRequest readBulkItem(StreamInput in) throws IOException { BulkItemRequest item = new BulkItemRequest(); item.readFrom(in); @@ -87,14 +78,37 @@ public void readFrom(StreamInput in) throws IOException { if (in.readBoolean()) { primaryResponse = BulkItemResponse.readBulkItem(in); } - ignoreOnReplica = in.readBoolean(); + if (in.getVersion().before(Version.V_6_0_0_alpha1)) { // TODO remove once backported + boolean ignoreOnReplica = in.readBoolean(); + if (ignoreOnReplica == false && primaryResponse != null) { + assert primaryResponse.isFailed() == false : "expected no failure on the primary response"; + } + } } @Override public void writeTo(StreamOutput out) throws IOException { out.writeVInt(id); - DocWriteRequest.writeDocumentRequest(out, request); + if (out.getVersion().before(Version.V_6_0_0_alpha1)) { // TODO remove once backported + // old nodes expect updated version and version type on the request + if (primaryResponse != null) { + request.version(primaryResponse.getVersion()); + request.versionType(request.versionType().versionTypeForReplicationAndRecovery()); + DocWriteRequest.writeDocumentRequest(out, request); + } else { + DocWriteRequest.writeDocumentRequest(out, request); + } + } else { + DocWriteRequest.writeDocumentRequest(out, request); + } out.writeOptionalStreamable(primaryResponse); - out.writeBoolean(ignoreOnReplica); + if (out.getVersion().before(Version.V_6_0_0_alpha1)) { // TODO remove once backported + if (primaryResponse != null) { + out.writeBoolean(primaryResponse.isFailed() + || primaryResponse.getResponse().getResult() == DocWriteResponse.Result.NOOP); + } else { + out.writeBoolean(false); + } + } } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java index e1a6e48e9e095..daf10521f9ed5 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResponse.java @@ -22,28 +22,40 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.Version; -import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.DocWriteRequest.OpType; +import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.delete.DeleteResponse; import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.action.update.UpdateResponse; +import org.elasticsearch.common.CheckedConsumer; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; import org.elasticsearch.common.io.stream.Writeable; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.ToXContent; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; +import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownField; + /** 
* Represents a single item response for an action executed as part of the bulk API. Holds the index/type/id * of the relevant action, and if it has failed or not (with the failure message incase it failed). */ -public class BulkItemResponse implements Streamable, StatusToXContent { +public class BulkItemResponse implements Streamable, StatusToXContentObject { + + private static final String _INDEX = "_index"; + private static final String _TYPE = "_type"; + private static final String _ID = "_id"; + private static final String STATUS = "status"; + private static final String ERROR = "error"; @Override public RestStatus status() { @@ -52,29 +64,97 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startObject(opType.getLowercase()); if (failure == null) { - response.toXContent(builder, params); - builder.field(Fields.STATUS, response.status().getStatus()); + response.innerToXContent(builder, params); + builder.field(STATUS, response.status().getStatus()); } else { - builder.field(Fields._INDEX, failure.getIndex()); - builder.field(Fields._TYPE, failure.getType()); - builder.field(Fields._ID, failure.getId()); - builder.field(Fields.STATUS, failure.getStatus().getStatus()); - builder.startObject(Fields.ERROR); - ElasticsearchException.toXContent(builder, params, failure.getCause()); + builder.field(_INDEX, failure.getIndex()); + builder.field(_TYPE, failure.getType()); + builder.field(_ID, failure.getId()); + builder.field(STATUS, failure.getStatus().getStatus()); + builder.startObject(ERROR); + ElasticsearchException.generateThrowableXContent(builder, params, failure.getCause()); builder.endObject(); } builder.endObject(); + builder.endObject(); return builder; } - static final class Fields { - static final String _INDEX = "_index"; - static final String _TYPE = "_type"; - static final String _ID = "_id"; - static final String STATUS = "status"; - static final String ERROR = "error"; + /** + * Reads a {@link BulkItemResponse} from a {@link XContentParser}. + * + * @param parser the {@link XContentParser} + * @param id the id to assign to the parsed {@link BulkItemResponse}. It is usually the index of + * the item in the {@link BulkResponse#getItems} array. 
+ */ + public static BulkItemResponse fromXContent(XContentParser parser, int id) throws IOException { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.currentToken(), parser::getTokenLocation); + + XContentParser.Token token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); + + String currentFieldName = parser.currentName(); + token = parser.nextToken(); + + final OpType opType = OpType.fromString(currentFieldName); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + + DocWriteResponse.Builder builder = null; + CheckedConsumer itemParser = null; + + if (opType == OpType.INDEX || opType == OpType.CREATE) { + final IndexResponse.Builder indexResponseBuilder = new IndexResponse.Builder(); + builder = indexResponseBuilder; + itemParser = (indexParser) -> IndexResponse.parseXContentFields(indexParser, indexResponseBuilder); + + } else if (opType == OpType.UPDATE) { + final UpdateResponse.Builder updateResponseBuilder = new UpdateResponse.Builder(); + builder = updateResponseBuilder; + itemParser = (updateParser) -> UpdateResponse.parseXContentFields(updateParser, updateResponseBuilder); + + } else if (opType == OpType.DELETE) { + final DeleteResponse.Builder deleteResponseBuilder = new DeleteResponse.Builder(); + builder = deleteResponseBuilder; + itemParser = (deleteParser) -> DeleteResponse.parseXContentFields(deleteParser, deleteResponseBuilder); + } else { + throwUnknownField(currentFieldName, parser.getTokenLocation()); + } + + RestStatus status = null; + ElasticsearchException exception = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } + + if (ERROR.equals(currentFieldName)) { + if (token == XContentParser.Token.START_OBJECT) { + exception = ElasticsearchException.fromXContent(parser); + } + } else if (STATUS.equals(currentFieldName)) { + if (token == XContentParser.Token.VALUE_NUMBER) { + status = RestStatus.fromCode(parser.intValue()); + } + } else { + itemParser.accept(parser); + } + } + + ensureExpectedToken(XContentParser.Token.END_OBJECT, token, parser::getTokenLocation); + token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.END_OBJECT, token, parser::getTokenLocation); + + BulkItemResponse bulkItemResponse; + if (exception != null) { + Failure failure = new Failure(builder.getShardId().getIndexName(), builder.getType(), builder.getId(), exception, status); + bulkItemResponse = new BulkItemResponse(id, opType, failure); + } else { + bulkItemResponse = new BulkItemResponse(id, opType, builder.build()); + } + return bulkItemResponse; } /** @@ -90,15 +170,36 @@ public static class Failure implements Writeable, ToXContent { private final String index; private final String type; private final String id; - private final Throwable cause; + private final Exception cause; private final RestStatus status; + private final long seqNo; + + /** + * For write failures before operation was assigned a sequence number. 
+ * + * use {@link #Failure(String, String, String, Exception, long)} + * to record the operation sequence number with the failure + */ + public Failure(String index, String type, String id, Exception cause) { + this(index, type, id, cause, ExceptionsHelper.status(cause), SequenceNumbersService.UNASSIGNED_SEQ_NO); + } + + public Failure(String index, String type, String id, Exception cause, RestStatus status) { + this(index, type, id, cause, status, SequenceNumbersService.UNASSIGNED_SEQ_NO); + } + + /** For write failures after operation was assigned a sequence number. */ + public Failure(String index, String type, String id, Exception cause, long seqNo) { + this(index, type, id, cause, ExceptionsHelper.status(cause), seqNo); + } - public Failure(String index, String type, String id, Throwable cause) { + public Failure(String index, String type, String id, Exception cause, RestStatus status, long seqNo) { this.index = index; this.type = type; this.id = id; this.cause = cause; - this.status = ExceptionsHelper.status(cause); + this.status = status; + this.seqNo = seqNo; } /** @@ -110,6 +211,11 @@ public Failure(StreamInput in) throws IOException { id = in.readOptionalString(); cause = in.readException(); status = ExceptionsHelper.status(cause); + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { + seqNo = in.readZLong(); + } else { + seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; + } } @Override @@ -118,6 +224,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeString(getType()); out.writeOptionalString(getId()); out.writeException(getCause()); + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { + out.writeZLong(getSeqNo()); + } } @@ -159,10 +268,19 @@ public RestStatus getStatus() { /** * The actual cause of the failure. 
*/ - public Throwable getCause() { + public Exception getCause() { return cause; } + /** + * The operation sequence number generated by primary + * NOTE: {@link SequenceNumbersService#UNASSIGNED_SEQ_NO} + * indicates sequence number was not generated by primary + */ + public long getSeqNo() { + return seqNo; + } + @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { builder.field(INDEX_FIELD, index); @@ -171,7 +289,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(ID_FIELD, id); } builder.startObject(CAUSE_FIELD); - ElasticsearchException.toXContent(builder, params, cause); + ElasticsearchException.generateThrowableXContent(builder, params, cause); builder.endObject(); builder.field(STATUS_FIELD, status.getStatus()); return builder; @@ -179,7 +297,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws @Override public String toString() { - return Strings.toString(this, true); + return Strings.toString(this); } } @@ -302,7 +420,7 @@ public static BulkItemResponse readBulkItem(StreamInput in) throws IOException { @Override public void readFrom(StreamInput in) throws IOException { id = in.readVInt(); - if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { opType = OpType.fromId(in.readByte()); } else { opType = OpType.fromString(in.readString()); @@ -315,6 +433,7 @@ public void readFrom(StreamInput in) throws IOException { } else if (type == 1) { response = new DeleteResponse(); response.readFrom(in); + } else if (type == 3) { // make 3 instead of 2, because 2 is already in use for 'no responses' response = new UpdateResponse(); response.readFrom(in); @@ -328,7 +447,7 @@ public void readFrom(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { out.writeVInt(id); - if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { out.writeByte(opType.getId()); } else { out.writeString(opType.getLowercase()); diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResultHolder.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResultHolder.java new file mode 100644 index 0000000000000..e844f8d6506a5 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkItemResultHolder.java @@ -0,0 +1,42 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.bulk; + +import org.elasticsearch.action.DocWriteResponse; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.index.engine.Engine; + +/** + * A struct-like holder for a bulk item's response, result, and the resulting + * replica operation to be executed. + */ +class BulkItemResultHolder { + public final @Nullable DocWriteResponse response; + public final @Nullable Engine.Result operationResult; + public final BulkItemRequest replicaRequest; + + BulkItemResultHolder(@Nullable DocWriteResponse response, + @Nullable Engine.Result operationResult, + BulkItemRequest replicaRequest) { + this.response = response; + this.operationResult = operationResult; + this.replicaRequest = replicaRequest; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java index 6dacb21b23903..3269fbc95008f 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkProcessor.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.bulk; +import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.index.IndexRequest; @@ -28,16 +29,14 @@ import org.elasticsearch.common.unit.ByteSizeUnit; import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.util.concurrent.EsExecutors; -import org.elasticsearch.common.util.concurrent.FutureUtils; +import org.elasticsearch.common.xcontent.XContentType; +import org.elasticsearch.threadpool.ThreadPool; import java.io.Closeable; import java.util.Objects; -import java.util.concurrent.Executors; -import java.util.concurrent.ScheduledFuture; -import java.util.concurrent.ScheduledThreadPoolExecutor; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicLong; +import java.util.function.BiConsumer; /** * A bulk processor is a thread safe bulk processing class, allowing to easily set when to "flush" a new bulk request @@ -65,7 +64,7 @@ public interface Listener { /** * Callback after a failed execution of bulk request. - * + *

* Note that in case an instance of InterruptedException is passed, which means that request processing has been * cancelled externally, the thread's interruption status has been restored prior to calling this method. */ @@ -77,10 +76,10 @@ public interface Listener { */ public static class Builder { - private final Client client; + private final BiConsumer> consumer; private final Listener listener; + private final ThreadPool threadPool; - private String name; private int concurrentRequests = 1; private int bulkActions = 1000; private ByteSizeValue bulkSize = new ByteSizeValue(5, ByteSizeUnit.MB); @@ -91,17 +90,10 @@ public static class Builder { * Creates a builder of bulk processor with the client to use and the listener that will be used * to be notified on the completion of bulk requests. */ - public Builder(Client client, Listener listener) { - this.client = client; + public Builder(BiConsumer> consumer, Listener listener, ThreadPool threadPool) { + this.consumer = consumer; this.listener = listener; - } - - /** - * Sets an optional name to identify this bulk processor. - */ - public Builder setName(String name) { - this.name = name; - return this; + this.threadPool = threadPool; } /** @@ -163,7 +155,7 @@ public Builder setBackoffPolicy(BackoffPolicy backoffPolicy) { * Builds a new bulk processor. */ public BulkProcessor build() { - return new BulkProcessor(client, backoffPolicy, listener, name, concurrentRequests, bulkActions, bulkSize, flushInterval); + return new BulkProcessor(consumer, backoffPolicy, listener, concurrentRequests, bulkActions, bulkSize, flushInterval, threadPool); } } @@ -171,15 +163,13 @@ public static Builder builder(Client client, Listener listener) { Objects.requireNonNull(client, "client"); Objects.requireNonNull(listener, "listener"); - return new Builder(client, listener); + return new Builder(client::bulk, listener, client.threadPool()); } private final int bulkActions; private final long bulkSize; - - private final ScheduledThreadPoolExecutor scheduler; - private final ScheduledFuture scheduledFuture; + private final ThreadPool.Cancellable cancellableFlushTask; private final AtomicLong executionIdGen = new AtomicLong(); @@ -188,22 +178,16 @@ public static Builder builder(Client client, Listener listener) { private volatile boolean closed = false; - BulkProcessor(Client client, BackoffPolicy backoffPolicy, Listener listener, @Nullable String name, int concurrentRequests, int bulkActions, ByteSizeValue bulkSize, @Nullable TimeValue flushInterval) { + BulkProcessor(BiConsumer> consumer, BackoffPolicy backoffPolicy, Listener listener, + int concurrentRequests, int bulkActions, ByteSizeValue bulkSize, @Nullable TimeValue flushInterval, + ThreadPool threadPool) { this.bulkActions = bulkActions; this.bulkSize = bulkSize.getBytes(); - this.bulkRequest = new BulkRequest(); - this.bulkRequestHandler = (concurrentRequests == 0) ? BulkRequestHandler.syncHandler(client, backoffPolicy, listener) : BulkRequestHandler.asyncHandler(client, backoffPolicy, listener, concurrentRequests); - - if (flushInterval != null) { - this.scheduler = (ScheduledThreadPoolExecutor) Executors.newScheduledThreadPool(1, EsExecutors.daemonThreadFactory(client.settings(), (name != null ? 
"[" + name + "]" : "") + "bulk_processor")); - this.scheduler.setExecuteExistingDelayedTasksAfterShutdownPolicy(false); - this.scheduler.setContinueExistingPeriodicTasksAfterShutdownPolicy(false); - this.scheduledFuture = this.scheduler.scheduleWithFixedDelay(new Flush(), flushInterval.millis(), flushInterval.millis(), TimeUnit.MILLISECONDS); - } else { - this.scheduler = null; - this.scheduledFuture = null; - } + this.bulkRequestHandler = new BulkRequestHandler(consumer, backoffPolicy, listener, threadPool, concurrentRequests); + + // Start period flushing task after everything is setup + this.cancellableFlushTask = startFlushTask(flushInterval, threadPool); } /** @@ -213,20 +197,20 @@ public static Builder builder(Client client, Listener listener) { public void close() { try { awaitClose(0, TimeUnit.NANOSECONDS); - } catch(InterruptedException exc) { + } catch (InterruptedException exc) { Thread.currentThread().interrupt(); } } /** * Closes the processor. If flushing by time is enabled, then it's shutdown. Any remaining bulk actions are flushed. - * + *

* If concurrent requests are not enabled, returns {@code true} immediately. * If concurrent requests are enabled, waits for up to the specified timeout for all bulk requests to complete then returns {@code true}, * If the specified waiting time elapses before all bulk requests complete, {@code false} is returned. * * @param timeout The maximum time to wait for the bulk requests to complete - * @param unit The time unit of the {@code timeout} argument + * @param unit The time unit of the {@code timeout} argument * @return {@code true} if all bulk requests completed and {@code false} if the waiting time elapsed before all the bulk requests completed * @throws InterruptedException If the current thread is interrupted */ @@ -235,10 +219,9 @@ public synchronized boolean awaitClose(long timeout, TimeUnit unit) throws Inter return true; } closed = true; - if (this.scheduledFuture != null) { - FutureUtils.cancel(this.scheduledFuture); - this.scheduler.shutdown(); - } + + this.cancellableFlushTask.cancel(); + if (bulkRequest.numberOfActions() > 0) { execute(); } @@ -288,16 +271,40 @@ private synchronized void internalAdd(DocWriteRequest request, @Nullable Object executeIfNeeded(); } - public BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType) throws Exception { - return add(data, defaultIndex, defaultType, null, null); + /** + * Adds the data from the bytes to be processed by the bulk processor + */ + public BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, + XContentType xContentType) throws Exception { + return add(data, defaultIndex, defaultType, null, null, xContentType); } - public synchronized BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String defaultPipeline, @Nullable Object payload) throws Exception { - bulkRequest.add(data, defaultIndex, defaultType, null, null, null, defaultPipeline, payload, true); + /** + * Adds the data from the bytes to be processed by the bulk processor + */ + public synchronized BulkProcessor add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, + @Nullable String defaultPipeline, @Nullable Object payload, XContentType xContentType) throws Exception { + bulkRequest.add(data, defaultIndex, defaultType, null, null, null, defaultPipeline, payload, true, xContentType); executeIfNeeded(); return this; } + private ThreadPool.Cancellable startFlushTask(TimeValue flushInterval, ThreadPool threadPool) { + if (flushInterval == null) { + return new ThreadPool.Cancellable() { + @Override + public void cancel() {} + + @Override + public boolean isCancelled() { + return true; + } + }; + } + + return threadPool.scheduleWithFixedDelay(new Flush(), flushInterval, ThreadPool.Names.GENERIC); + } + private void executeIfNeeded() { ensureOpen(); if (!isOverTheLimit()) { diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java index 20d5e64f49a7f..5836da3b8c49a 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequest.java @@ -41,8 +41,8 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.NamedXContentRegistry; import org.elasticsearch.common.xcontent.XContent; -import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentParser; 
+import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; import org.elasticsearch.search.fetch.subphase.FetchSourceContext; @@ -245,33 +245,38 @@ public long estimatedSizeInBytes() { /** * Adds a framed data in binary format */ - public BulkRequest add(byte[] data, int from, int length) throws IOException { - return add(data, from, length, null, null); + public BulkRequest add(byte[] data, int from, int length, XContentType xContentType) throws IOException { + return add(data, from, length, null, null, xContentType); } /** * Adds a framed data in binary format */ - public BulkRequest add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType) throws IOException { - return add(new BytesArray(data, from, length), defaultIndex, defaultType); + public BulkRequest add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType, + XContentType xContentType) throws IOException { + return add(new BytesArray(data, from, length), defaultIndex, defaultType, xContentType); } /** * Adds a framed data in binary format */ - public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType) throws IOException { - return add(data, defaultIndex, defaultType, null, null, null, null, null, true); + public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, + XContentType xContentType) throws IOException { + return add(data, defaultIndex, defaultType, null, null, null, null, null, true, xContentType); } /** * Adds a framed data in binary format */ - public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, boolean allowExplicitIndex) throws IOException { - return add(data, defaultIndex, defaultType, null, null, null, null, null, allowExplicitIndex); + public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, boolean allowExplicitIndex, + XContentType xContentType) throws IOException { + return add(data, defaultIndex, defaultType, null, null, null, null, null, allowExplicitIndex, xContentType); } - public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String defaultRouting, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSourceContext, @Nullable String defaultPipeline, @Nullable Object payload, boolean allowExplicitIndex) throws IOException { - XContent xContent = XContentFactory.xContent(data); + public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Nullable String defaultType, @Nullable String + defaultRouting, @Nullable String[] defaultFields, @Nullable FetchSourceContext defaultFetchSourceContext, @Nullable String + defaultPipeline, @Nullable Object payload, boolean allowExplicitIndex, XContentType xContentType) throws IOException { + XContent xContent = xContentType.xContent(); int line = 0; int from = 0; int length = data.length(); @@ -294,10 +299,16 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null if (token == null) { continue; } - assert token == XContentParser.Token.START_OBJECT; + if (token != XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException("Malformed action/metadata line [" + line + "], expected " + + XContentParser.Token.START_OBJECT + " but found [" + token + "]"); + } // Move to FIELD_NAME, that's the action token = parser.nextToken(); - 
assert token == XContentParser.Token.FIELD_NAME; + if (token != XContentParser.Token.FIELD_NAME) { + throw new IllegalArgumentException("Malformed action/metadata line [" + line + "], expected " + + XContentParser.Token.FIELD_NAME + " but found [" + token + "]"); + } String action = parser.currentName(); String index = defaultIndex; @@ -381,29 +392,30 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null } line++; - // order is important, we set parent after routing, so routing will be set to parent if not set explicitly // we use internalAdd so we don't fork here, this allows us not to copy over the big byte array to small chunks // of index request. if ("index".equals(action)) { if (opType == null) { internalAdd(new IndexRequest(index, type, id).routing(routing).parent(parent).version(version).versionType(versionType) - .setPipeline(pipeline).source(data.slice(from, nextMarker - from)), payload); + .setPipeline(pipeline) + .source(sliceTrimmingCarriageReturn(data, from, nextMarker,xContentType), xContentType), payload); } else { internalAdd(new IndexRequest(index, type, id).routing(routing).parent(parent).version(version).versionType(versionType) .create("create".equals(opType)).setPipeline(pipeline) - .source(data.slice(from, nextMarker - from)), payload); + .source(sliceTrimmingCarriageReturn(data, from, nextMarker, xContentType), xContentType), payload); } } else if ("create".equals(action)) { internalAdd(new IndexRequest(index, type, id).routing(routing).parent(parent).version(version).versionType(versionType) .create(true).setPipeline(pipeline) - .source(data.slice(from, nextMarker - from)), payload); + .source(sliceTrimmingCarriageReturn(data, from, nextMarker, xContentType), xContentType), payload); } else if ("update".equals(action)) { UpdateRequest updateRequest = new UpdateRequest(index, type, id).routing(routing).parent(parent).retryOnConflict(retryOnConflict) .version(version).versionType(versionType) .routing(routing) .parent(parent); // EMPTY is safe here because we never call namedObject - try (XContentParser sliceParser = xContent.createParser(NamedXContentRegistry.EMPTY, data.slice(from, nextMarker - from))) { + try (XContentParser sliceParser = xContent.createParser(NamedXContentRegistry.EMPTY, + sliceTrimmingCarriageReturn(data, from, nextMarker, xContentType))) { updateRequest.fromXContent(sliceParser); } if (fetchSourceContext != null) { @@ -434,6 +446,20 @@ public BulkRequest add(BytesReference data, @Nullable String defaultIndex, @Null return this; } + /** + * Returns the sliced {@link BytesReference}. If the {@link XContentType} is JSON, the byte preceding the marker is checked to see + * if it is a carriage return and if so, the BytesReference is sliced so that the carriage return is ignored + */ + private BytesReference sliceTrimmingCarriageReturn(BytesReference bytesReference, int from, int nextMarker, XContentType xContentType) { + final int length; + if (XContentType.JSON == xContentType && bytesReference.get(nextMarker - 1) == (byte) '\r') { + length = nextMarker - from - 1; + } else { + length = nextMarker - from; + } + return bytesReference.slice(from, length); + } + /** * Sets the number of shard copies that must be active before proceeding with the write. * See {@link ReplicationRequest#waitForActiveShards(ActiveShardCount)} for details. 
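The `add(BytesReference, …)` overloads above now require an explicit `XContentType` instead of auto-detecting it from the payload, and `sliceTrimmingCarriageReturn` drops a trailing carriage return from each JSON source slice. A minimal usage sketch of the new signature, with an illustrative index/type and document that are not taken from this patch:

```java
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.common.bytes.BytesArray;
import org.elasticsearch.common.xcontent.XContentType;

import java.nio.charset.StandardCharsets;

public class BulkRequestXContentTypeSketch {
    public static void main(String[] args) throws Exception {
        // One action/metadata line followed by one source line, each newline-terminated.
        // A '\r' before the '\n' is tolerated for JSON and trimmed from the source slice.
        String ndjson =
            "{\"index\":{\"_index\":\"twitter\",\"_type\":\"tweet\",\"_id\":\"1\"}}\r\n" +
            "{\"user\":\"kimchy\",\"message\":\"trying out the bulk API\"}\r\n";

        BulkRequest request = new BulkRequest();
        // The content type is passed explicitly rather than sniffed from the bytes.
        request.add(new BytesArray(ndjson.getBytes(StandardCharsets.UTF_8)), null, null, XContentType.JSON);
        System.out.println("parsed actions: " + request.numberOfActions());
    }
}
```

The same `xContentType` argument is threaded through the `BulkProcessor.add` and `BulkRequestBuilder.add` overloads elsewhere in this patch.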
diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestBuilder.java index c48a8f507b862..7d2bca54d15e2 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestBuilder.java @@ -32,6 +32,7 @@ import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.XContentType; /** * A bulk request holds an ordered {@link IndexRequest}s and {@link DeleteRequest}s and allows to executes @@ -98,16 +99,17 @@ public BulkRequestBuilder add(UpdateRequestBuilder request) { /** * Adds a framed data in binary format */ - public BulkRequestBuilder add(byte[] data, int from, int length) throws Exception { - request.add(data, from, length, null, null); + public BulkRequestBuilder add(byte[] data, int from, int length, XContentType xContentType) throws Exception { + request.add(data, from, length, null, null, xContentType); return this; } /** * Adds a framed data in binary format */ - public BulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType) throws Exception { - request.add(data, from, length, defaultIndex, defaultType); + public BulkRequestBuilder add(byte[] data, int from, int length, @Nullable String defaultIndex, @Nullable String defaultType, + XContentType xContentType) throws Exception { + request.add(data, from, length, defaultIndex, defaultType, xContentType); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java index 6ad566ca50019..52a83b00483a4 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkRequestHandler.java @@ -22,147 +22,91 @@ import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.client.Client; import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.concurrent.EsRejectedExecutionException; +import org.elasticsearch.threadpool.ThreadPool; +import java.util.concurrent.CountDownLatch; import java.util.concurrent.Semaphore; import java.util.concurrent.TimeUnit; +import java.util.function.BiConsumer; /** - * Abstracts the low-level details of bulk request handling + * Implements the low-level details of bulk request handling */ -abstract class BulkRequestHandler { - protected final Logger logger; - protected final Client client; - - protected BulkRequestHandler(Client client) { - this.client = client; - this.logger = Loggers.getLogger(getClass(), client.settings()); - } - - - public abstract void execute(BulkRequest bulkRequest, long executionId); - - public abstract boolean awaitClose(long timeout, TimeUnit unit) throws InterruptedException; - - - public static BulkRequestHandler syncHandler(Client client, BackoffPolicy backoffPolicy, BulkProcessor.Listener listener) { - return new SyncBulkRequestHandler(client, backoffPolicy, listener); +public final class BulkRequestHandler { + private final Logger logger; + private final BiConsumer> consumer; + private final BulkProcessor.Listener listener; + private final 
Semaphore semaphore; + private final Retry retry; + private final int concurrentRequests; + + BulkRequestHandler(BiConsumer> consumer, BackoffPolicy backoffPolicy, + BulkProcessor.Listener listener, ThreadPool threadPool, + int concurrentRequests) { + assert concurrentRequests >= 0; + this.logger = Loggers.getLogger(getClass()); + this.consumer = consumer; + this.listener = listener; + this.concurrentRequests = concurrentRequests; + this.retry = new Retry(EsRejectedExecutionException.class, backoffPolicy, threadPool); + this.semaphore = new Semaphore(concurrentRequests > 0 ? concurrentRequests : 1); } - public static BulkRequestHandler asyncHandler(Client client, BackoffPolicy backoffPolicy, BulkProcessor.Listener listener, int concurrentRequests) { - return new AsyncBulkRequestHandler(client, backoffPolicy, listener, concurrentRequests); - } - - private static class SyncBulkRequestHandler extends BulkRequestHandler { - private final BulkProcessor.Listener listener; - private final BackoffPolicy backoffPolicy; - - public SyncBulkRequestHandler(Client client, BackoffPolicy backoffPolicy, BulkProcessor.Listener listener) { - super(client); - this.backoffPolicy = backoffPolicy; - this.listener = listener; - } - - @Override - public void execute(BulkRequest bulkRequest, long executionId) { - boolean afterCalled = false; - try { - listener.beforeBulk(executionId, bulkRequest); - BulkResponse bulkResponse = Retry - .on(EsRejectedExecutionException.class) - .policy(backoffPolicy) - .withSyncBackoff(client, bulkRequest); - afterCalled = true; - listener.afterBulk(executionId, bulkRequest, bulkResponse); - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - logger.info((Supplier) () -> new ParameterizedMessage("Bulk request {} has been cancelled.", executionId), e); - if (!afterCalled) { - listener.afterBulk(executionId, bulkRequest, e); - } - } catch (Exception e) { - logger.warn((Supplier) () -> new ParameterizedMessage("Failed to execute bulk request {}.", executionId), e); - if (!afterCalled) { - listener.afterBulk(executionId, bulkRequest, e); + public void execute(BulkRequest bulkRequest, long executionId) { + Runnable toRelease = () -> {}; + boolean bulkRequestSetupSuccessful = false; + try { + listener.beforeBulk(executionId, bulkRequest); + semaphore.acquire(); + toRelease = semaphore::release; + CountDownLatch latch = new CountDownLatch(1); + retry.withBackoff(consumer, bulkRequest, new ActionListener() { + @Override + public void onResponse(BulkResponse response) { + try { + listener.afterBulk(executionId, bulkRequest, response); + } finally { + semaphore.release(); + latch.countDown(); + } } - } - } - @Override - public boolean awaitClose(long timeout, TimeUnit unit) throws InterruptedException { - // we are "closed" immediately as there is no request in flight - return true; - } - } - - private static class AsyncBulkRequestHandler extends BulkRequestHandler { - private final BackoffPolicy backoffPolicy; - private final BulkProcessor.Listener listener; - private final Semaphore semaphore; - private final int concurrentRequests; - - private AsyncBulkRequestHandler(Client client, BackoffPolicy backoffPolicy, BulkProcessor.Listener listener, int concurrentRequests) { - super(client); - this.backoffPolicy = backoffPolicy; - assert concurrentRequests > 0; - this.listener = listener; - this.concurrentRequests = concurrentRequests; - this.semaphore = new Semaphore(concurrentRequests); - } - - @Override - public void execute(BulkRequest bulkRequest, long executionId) { - 
boolean bulkRequestSetupSuccessful = false; - boolean acquired = false; - try { - listener.beforeBulk(executionId, bulkRequest); - semaphore.acquire(); - acquired = true; - Retry.on(EsRejectedExecutionException.class) - .policy(backoffPolicy) - .withAsyncBackoff(client, bulkRequest, new ActionListener() { - @Override - public void onResponse(BulkResponse response) { - try { - listener.afterBulk(executionId, bulkRequest, response); - } finally { - semaphore.release(); - } - } - - @Override - public void onFailure(Exception e) { - try { - listener.afterBulk(executionId, bulkRequest, e); - } finally { - semaphore.release(); - } - } - }); - bulkRequestSetupSuccessful = true; - } catch (InterruptedException e) { - Thread.currentThread().interrupt(); - logger.info((Supplier) () -> new ParameterizedMessage("Bulk request {} has been cancelled.", executionId), e); - listener.afterBulk(executionId, bulkRequest, e); - } catch (Exception e) { - logger.warn((Supplier) () -> new ParameterizedMessage("Failed to execute bulk request {}.", executionId), e); - listener.afterBulk(executionId, bulkRequest, e); - } finally { - if (!bulkRequestSetupSuccessful && acquired) { // if we fail on client.bulk() release the semaphore - semaphore.release(); + @Override + public void onFailure(Exception e) { + try { + listener.afterBulk(executionId, bulkRequest, e); + } finally { + semaphore.release(); + latch.countDown(); + } } + }, Settings.EMPTY); + bulkRequestSetupSuccessful = true; + if (concurrentRequests == 0) { + latch.await(); + } + } catch (InterruptedException e) { + Thread.currentThread().interrupt(); + logger.info((Supplier) () -> new ParameterizedMessage("Bulk request {} has been cancelled.", executionId), e); + listener.afterBulk(executionId, bulkRequest, e); + } catch (Exception e) { + logger.warn((Supplier) () -> new ParameterizedMessage("Failed to execute bulk request {}.", executionId), e); + listener.afterBulk(executionId, bulkRequest, e); + } finally { + if (bulkRequestSetupSuccessful == false) { // if we fail on client.bulk() release the semaphore + toRelease.run(); } } + } - @Override - public boolean awaitClose(long timeout, TimeUnit unit) throws InterruptedException { - if (semaphore.tryAcquire(this.concurrentRequests, timeout, unit)) { - semaphore.release(this.concurrentRequests); - return true; - } - return false; + boolean awaitClose(long timeout, TimeUnit unit) throws InterruptedException { + if (semaphore.tryAcquire(this.concurrentRequests, timeout, unit)) { + semaphore.release(this.concurrentRequests); + return true; } + return false; } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkResponse.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkResponse.java index e214f87ddb63b..30bf2dc14773b 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkResponse.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkResponse.java @@ -23,17 +23,32 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.xcontent.StatusToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import java.util.ArrayList; import java.util.Arrays; import java.util.Iterator; +import java.util.List; + +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; 
+import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownField; +import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownToken; /** * A response of a bulk execution. Holding a response for each item responding (in order) of the * bulk requests. Each item holds the index/type/id is operated on, and if it failed or not (with the * failure message). */ -public class BulkResponse extends ActionResponse implements Iterable { +public class BulkResponse extends ActionResponse implements Iterable, StatusToXContentObject { + + private static final String ITEMS = "items"; + private static final String ERRORS = "errors"; + private static final String TOOK = "took"; + private static final String INGEST_TOOK = "ingest_took"; public static final long NO_INGEST_TOOK = -1L; @@ -61,13 +76,6 @@ public TimeValue getTook() { return new TimeValue(tookInMillis); } - /** - * How long the bulk execution took in milliseconds. Excluding ingest preprocessing. - */ - public long getTookInMillis() { - return tookInMillis; - } - /** * If ingest is enabled returns the bulk ingest preprocessing time, otherwise 0 is returned. */ @@ -141,4 +149,61 @@ public void writeTo(StreamOutput out) throws IOException { out.writeVLong(tookInMillis); out.writeZLong(ingestTookInMillis); } + + @Override + public RestStatus status() { + return RestStatus.OK; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(TOOK, tookInMillis); + if (ingestTookInMillis != BulkResponse.NO_INGEST_TOOK) { + builder.field(INGEST_TOOK, ingestTookInMillis); + } + builder.field(ERRORS, hasFailures()); + builder.startArray(ITEMS); + for (BulkItemResponse item : this) { + item.toXContent(builder, params); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + public static BulkResponse fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token = parser.nextToken(); + ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); + + long took = -1L; + long ingestTook = NO_INGEST_TOOK; + List items = new ArrayList<>(); + + String currentFieldName = parser.currentName(); + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (TOOK.equals(currentFieldName)) { + took = parser.longValue(); + } else if (INGEST_TOOK.equals(currentFieldName)) { + ingestTook = parser.longValue(); + } else if (ERRORS.equals(currentFieldName) == false) { + throwUnknownField(currentFieldName, parser.getTokenLocation()); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (ITEMS.equals(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + items.add(BulkItemResponse.fromXContent(parser, items.size())); + } + } else { + throwUnknownField(currentFieldName, parser.getTokenLocation()); + } + } else { + throwUnknownToken(token, parser.getTokenLocation()); + } + } + return new BulkResponse(items.toArray(new BulkItemResponse[items.size()]), took, ingestTook); + } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkShardRequest.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkShardRequest.java index d53e9f8997ef7..8e2dde7db6370 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkShardRequest.java +++ 
b/core/src/main/java/org/elasticsearch/action/bulk/BulkShardRequest.java @@ -36,7 +36,7 @@ public class BulkShardRequest extends ReplicatedWriteRequest { public BulkShardRequest() { } - BulkShardRequest(ShardId shardId, RefreshPolicy refreshPolicy, BulkItemRequest[] items) { + public BulkShardRequest(ShardId shardId, RefreshPolicy refreshPolicy, BulkItemRequest[] items) { super(shardId); this.items = items; setRefreshPolicy(refreshPolicy); @@ -85,8 +85,14 @@ public void readFrom(StreamInput in) throws IOException { @Override public String toString() { // This is included in error messages so we'll try to make it somewhat user friendly. - StringBuilder b = new StringBuilder("BulkShardRequest to ["); - b.append(index).append("] containing [").append(items.length).append("] requests"); + StringBuilder b = new StringBuilder("BulkShardRequest ["); + b.append(shardId).append("] containing ["); + if (items.length > 1) { + b.append(items.length).append("] requests"); + } else { + b.append(items[0].request()).append("]"); + } + switch (getRefreshPolicy()) { case IMMEDIATE: b.append(" and a refresh"); diff --git a/core/src/main/java/org/elasticsearch/action/bulk/BulkShardResponse.java b/core/src/main/java/org/elasticsearch/action/bulk/BulkShardResponse.java index b51ce624800a5..aa368c13fb80e 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/BulkShardResponse.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/BulkShardResponse.java @@ -36,7 +36,8 @@ public class BulkShardResponse extends ReplicationResponse implements WriteRespo BulkShardResponse() { } - BulkShardResponse(ShardId shardId, BulkItemResponse[] responses) { + // NOTE: public for testing only + public BulkShardResponse(ShardId shardId, BulkItemResponse[] responses) { this.shardId = shardId; this.responses = responses; } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/MappingUpdatePerformer.java b/core/src/main/java/org/elasticsearch/action/bulk/MappingUpdatePerformer.java new file mode 100644 index 0000000000000..812653d58266b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/bulk/MappingUpdatePerformer.java @@ -0,0 +1,39 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.bulk; + +import org.elasticsearch.index.mapper.Mapping; +import org.elasticsearch.index.shard.ShardId; + +public interface MappingUpdatePerformer { + + /** + * Update the mappings on the master. + */ + void updateMappings(Mapping update, ShardId shardId, String type) throws Exception; + + /** + * Throws a {@code ReplicationOperation.RetryOnPrimaryException} if the operation needs to be + * retried on the primary due to the mappings not being present yet, or a different exception if + * updating the mappings on the master failed. 
+ */ + void verifyMappings(Mapping update, ShardId shardId) throws Exception; + +} diff --git a/core/src/main/java/org/elasticsearch/action/bulk/Retry.java b/core/src/main/java/org/elasticsearch/action/bulk/Retry.java index a41bd454979f0..8a9ef245f36a6 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/Retry.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/Retry.java @@ -20,11 +20,10 @@ import org.apache.logging.log4j.Logger; import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.action.ActionFuture; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.PlainActionFuture; -import org.elasticsearch.client.Client; import org.elasticsearch.common.logging.Loggers; +import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.FutureUtils; import org.elasticsearch.threadpool.ThreadPool; @@ -33,6 +32,7 @@ import java.util.Iterator; import java.util.List; import java.util.concurrent.ScheduledFuture; +import java.util.function.BiConsumer; import java.util.function.Predicate; /** @@ -40,57 +40,48 @@ */ public class Retry { private final Class retryOnThrowable; + private final BackoffPolicy backoffPolicy; + private final ThreadPool threadPool; - private BackoffPolicy backoffPolicy; - public static Retry on(Class retryOnThrowable) { - return new Retry(retryOnThrowable); - } - - /** - * @param backoffPolicy The backoff policy that defines how long and how often to wait for retries. - */ - public Retry policy(BackoffPolicy backoffPolicy) { - this.backoffPolicy = backoffPolicy; - return this; - } - - Retry(Class retryOnThrowable) { + public Retry(Class retryOnThrowable, BackoffPolicy backoffPolicy, ThreadPool threadPool) { this.retryOnThrowable = retryOnThrowable; + this.backoffPolicy = backoffPolicy; + this.threadPool = threadPool; } /** - * Invokes #bulk(BulkRequest, ActionListener) on the provided client. Backs off on the provided exception and delegates results to the - * provided listener. - * - * @param client Client invoking the bulk request. + * Invokes #accept(BulkRequest, ActionListener). Backs off on the provided exception and delegates results to the + * provided listener. Retries will be scheduled using the class's thread pool. + * @param consumer The consumer to which apply the request and listener * @param bulkRequest The bulk request that should be executed. - * @param listener A listener that is invoked when the bulk request finishes or completes with an exception. The listener is not + * @param listener A listener that is invoked when the bulk request finishes or completes with an exception. The listener is not + * @param settings settings */ - public void withAsyncBackoff(Client client, BulkRequest bulkRequest, ActionListener listener) { - AsyncRetryHandler r = new AsyncRetryHandler(retryOnThrowable, backoffPolicy, client, listener); + public void withBackoff(BiConsumer> consumer, BulkRequest bulkRequest, ActionListener listener, Settings settings) { + RetryHandler r = new RetryHandler(retryOnThrowable, backoffPolicy, consumer, listener, settings, threadPool); r.execute(bulkRequest); - } /** - * Invokes #bulk(BulkRequest) on the provided client. Backs off on the provided exception. + * Invokes #accept(BulkRequest, ActionListener). Backs off on the provided exception. Retries will be scheduled using + * the class's thread pool. * - * @param client Client invoking the bulk request. 
+ * @param consumer The consumer to which apply the request and listener * @param bulkRequest The bulk request that should be executed. - * @return the bulk response as returned by the client. - * @throws Exception Any exception thrown by the callable. + * @param settings settings + * @return a future representing the bulk response returned by the client. */ - public BulkResponse withSyncBackoff(Client client, BulkRequest bulkRequest) throws Exception { - return SyncRetryHandler - .create(retryOnThrowable, backoffPolicy, client) - .executeBlocking(bulkRequest) - .actionGet(); + public PlainActionFuture withBackoff(BiConsumer> consumer, BulkRequest bulkRequest, Settings settings) { + PlainActionFuture future = PlainActionFuture.newFuture(); + withBackoff(consumer, bulkRequest, future, settings); + return future; } - static class AbstractRetryHandler implements ActionListener { + static class RetryHandler implements ActionListener { private final Logger logger; - private final Client client; + private final ThreadPool threadPool; + private final BiConsumer> consumer; private final ActionListener listener; private final Iterator backoff; private final Class retryOnThrowable; @@ -102,12 +93,15 @@ static class AbstractRetryHandler implements ActionListener { private volatile BulkRequest currentBulkRequest; private volatile ScheduledFuture scheduledRequestFuture; - public AbstractRetryHandler(Class retryOnThrowable, BackoffPolicy backoffPolicy, Client client, ActionListener listener) { + RetryHandler(Class retryOnThrowable, BackoffPolicy backoffPolicy, + BiConsumer> consumer, ActionListener listener, + Settings settings, ThreadPool threadPool) { this.retryOnThrowable = retryOnThrowable; this.backoff = backoffPolicy.iterator(); - this.client = client; + this.consumer = consumer; this.listener = listener; - this.logger = Loggers.getLogger(getClass(), client.settings()); + this.logger = Loggers.getLogger(getClass(), settings); + this.threadPool = threadPool; // in contrast to System.currentTimeMillis(), nanoTime() uses a monotonic clock under the hood this.startTimestampNanos = System.nanoTime(); } @@ -142,9 +136,8 @@ private void retry(BulkRequest bulkRequestForRetry) { assert backoff.hasNext(); TimeValue next = backoff.next(); logger.trace("Retry of bulk request scheduled in {} ms.", next.millis()); - Runnable retry = () -> this.execute(bulkRequestForRetry); - retry = client.threadPool().getThreadContext().preserveContext(retry); - scheduledRequestFuture = client.threadPool().schedule(next, ThreadPool.Names.SAME, retry); + Runnable command = threadPool.getThreadContext().preserveContext(() -> this.execute(bulkRequestForRetry)); + scheduledRequestFuture = threadPool.schedule(next, ThreadPool.Names.SAME, command); } private BulkRequest createBulkRequestForRetry(BulkResponse bulkItemResponses) { @@ -208,32 +201,7 @@ private BulkResponse getAccumulatedResponse() { public void execute(BulkRequest bulkRequest) { this.currentBulkRequest = bulkRequest; - client.bulk(bulkRequest, this); - } - } - - static class AsyncRetryHandler extends AbstractRetryHandler { - public AsyncRetryHandler(Class retryOnThrowable, BackoffPolicy backoffPolicy, Client client, ActionListener listener) { - super(retryOnThrowable, backoffPolicy, client, listener); - } - } - - static class SyncRetryHandler extends AbstractRetryHandler { - private final PlainActionFuture actionFuture; - - public static SyncRetryHandler create(Class retryOnThrowable, BackoffPolicy backoffPolicy, Client client) { - PlainActionFuture actionFuture = 
PlainActionFuture.newFuture(); - return new SyncRetryHandler(retryOnThrowable, backoffPolicy, client, actionFuture); - } - - public SyncRetryHandler(Class retryOnThrowable, BackoffPolicy backoffPolicy, Client client, PlainActionFuture actionFuture) { - super(retryOnThrowable, backoffPolicy, client, actionFuture); - this.actionFuture = actionFuture; - } - - public ActionFuture executeBlocking(BulkRequest bulkRequest) { - super.execute(bulkRequest); - return actionFuture; + consumer.accept(bulkRequest, this); } } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java index 27a579db27636..05cf4063205bd 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportBulkAction.java @@ -31,8 +31,6 @@ import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; import org.elasticsearch.action.admin.indices.create.CreateIndexResponse; import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction; -import org.elasticsearch.action.delete.DeleteRequest; -import org.elasticsearch.action.delete.TransportDeleteAction; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.ingest.IngestActionForwarder; import org.elasticsearch.action.support.ActionFilters; @@ -41,6 +39,8 @@ import org.elasticsearch.action.update.TransportUpdateAction; import org.elasticsearch.action.update.UpdateRequest; import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.ClusterStateObserver; +import org.elasticsearch.cluster.block.ClusterBlockException; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; @@ -49,18 +49,23 @@ import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.unit.TimeValue; +import org.elasticsearch.common.util.concurrent.AbstractRunnable; import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.index.Index; import org.elasticsearch.index.IndexNotFoundException; +import org.elasticsearch.index.VersionType; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndexClosedException; import org.elasticsearch.ingest.IngestService; +import org.elasticsearch.node.NodeClosedException; import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; import java.util.ArrayList; import java.util.HashMap; +import java.util.HashSet; import java.util.Iterator; import java.util.List; import java.util.Map; @@ -71,6 +76,8 @@ import java.util.function.LongSupplier; import java.util.stream.Collectors; +import static java.util.Collections.emptyMap; + /** * Groups bulk request items by shard, optionally creating non-existent indices and * delegates to {@link TransportShardBulkAction} for shard-level bulk execution @@ -78,7 +85,6 @@ public class TransportBulkAction extends HandledTransportAction { private final AutoCreateIndex autoCreateIndex; - private final boolean allowIdGeneration; private final ClusterService clusterService; private final IngestService ingestService; private final TransportShardBulkAction shardBulkAction; @@ -111,7 +117,6 @@ public 
TransportBulkAction(Settings settings, ThreadPool threadPool, TransportSe this.shardBulkAction = shardBulkAction; this.createIndexAction = createIndexAction; this.autoCreateIndex = autoCreateIndex; - this.allowIdGeneration = this.settings.getAsBoolean("action.bulk.action.allow_id_generation", true); this.relativeTimeProvider = relativeTimeProvider; this.ingestForwarder = new IngestActionForwarder(transportService); clusterService.addStateApplier(this.ingestForwarder); @@ -137,34 +142,51 @@ protected void doExecute(Task task, BulkRequest bulkRequest, ActionListener responses = new AtomicArray<>(bulkRequest.requests.size()); if (needToCheck()) { - // Keep track of all unique indices and all unique types per index for the create index requests: - final Set autoCreateIndices = bulkRequest.requests.stream() + // Attempt to create all the indices that we're going to need during the bulk before we start. + // Step 1: collect all the indices in the request + final Set indices = bulkRequest.requests.stream() + // delete requests should not attempt to create the index (if the index does not + // exists), unless an external versioning is used + .filter(request -> request.opType() != DocWriteRequest.OpType.DELETE + || request.versionType() == VersionType.EXTERNAL + || request.versionType() == VersionType.EXTERNAL_GTE) .map(DocWriteRequest::index) .collect(Collectors.toSet()); - final AtomicInteger counter = new AtomicInteger(autoCreateIndices.size()); + /* Step 2: filter that to indices that don't exist and we can create. At the same time build a map of indices we can't create + * that we'll use when we try to run the requests. */ + final Map indicesThatCannotBeCreated = new HashMap<>(); + Set autoCreateIndices = new HashSet<>(); ClusterState state = clusterService.state(); - for (String index : autoCreateIndices) { - if (shouldAutoCreate(index, state)) { - CreateIndexRequest createIndexRequest = new CreateIndexRequest(); - createIndexRequest.index(index); - createIndexRequest.cause("auto(bulk api)"); - createIndexRequest.masterNodeTimeout(bulkRequest.timeout()); - createIndexAction.execute(createIndexRequest, new ActionListener() { + for (String index : indices) { + boolean shouldAutoCreate; + try { + shouldAutoCreate = shouldAutoCreate(index, state); + } catch (IndexNotFoundException e) { + shouldAutoCreate = false; + indicesThatCannotBeCreated.put(index, e); + } + if (shouldAutoCreate) { + autoCreateIndices.add(index); + } + } + // Step 3: create all the indices that are missing, if there are any missing. start the bulk after all the creates come back. 
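Step 1 above skips plain delete requests when collecting the indices to auto-create, but keeps deletes that use external versioning. A self-contained sketch of that filter, using made-up requests purely for illustration:

```java
import org.elasticsearch.action.DocWriteRequest;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.index.VersionType;

import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class AutoCreateIndexFilterSketch {
    public static void main(String[] args) {
        List<DocWriteRequest> requests = Arrays.asList(
                new IndexRequest("logs", "event", "1"),
                // A plain delete should not trigger index auto-creation.
                new DeleteRequest("stale", "event", "2"),
                // A delete with external versioning is kept, so its index may be auto-created.
                new DeleteRequest("versioned", "event", "3").versionType(VersionType.EXTERNAL).version(5));

        Set<String> indices = requests.stream()
                .filter(r -> r.opType() != DocWriteRequest.OpType.DELETE
                        || r.versionType() == VersionType.EXTERNAL
                        || r.versionType() == VersionType.EXTERNAL_GTE)
                .map(DocWriteRequest::index)
                .collect(Collectors.toSet());

        System.out.println(indices); // contains "logs" and "versioned", but not "stale"
    }
}
```

`TransportBulkAction` then auto-creates only the indices that survive this filter and that `shouldAutoCreate` accepts; indices that cannot be created are collected in `indicesThatCannotBeCreated` and consulted when the requests are executed.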
+ if (autoCreateIndices.isEmpty()) { + executeBulk(task, bulkRequest, startTime, listener, responses, indicesThatCannotBeCreated); + } else { + final AtomicInteger counter = new AtomicInteger(autoCreateIndices.size()); + for (String index : autoCreateIndices) { + createIndex(index, bulkRequest.timeout(), new ActionListener() { @Override public void onResponse(CreateIndexResponse result) { if (counter.decrementAndGet() == 0) { - try { - executeBulk(task, bulkRequest, startTime, listener, responses); - } catch (Exception e) { - listener.onFailure(e); - } + executeBulk(task, bulkRequest, startTime, listener, responses, indicesThatCannotBeCreated); } } @Override public void onFailure(Exception e) { if (!(ExceptionsHelper.unwrapCause(e) instanceof ResourceAlreadyExistsException)) { - // fail all requests involving this index, if create didnt work + // fail all requests involving this index, if create didn't work for (int i = 0; i < bulkRequest.requests.size(); i++) { DocWriteRequest request = bulkRequest.requests.get(i); if (request != null && setResponseFailureIfIndexMatches(responses, i, request, index, e)) { @@ -173,23 +195,17 @@ public void onFailure(Exception e) { } } if (counter.decrementAndGet() == 0) { - try { - executeBulk(task, bulkRequest, startTime, listener, responses); - } catch (Exception inner) { + executeBulk(task, bulkRequest, startTime, ActionListener.wrap(listener::onResponse, inner -> { inner.addSuppressed(e); listener.onFailure(inner); - } + }), responses, indicesThatCannotBeCreated); } } }); - } else { - if (counter.decrementAndGet() == 0) { - executeBulk(task, bulkRequest, startTime, listener, responses); - } } } } else { - executeBulk(task, bulkRequest, startTime, listener, responses); + executeBulk(task, bulkRequest, startTime, listener, responses, emptyMap()); } } @@ -201,6 +217,14 @@ boolean shouldAutoCreate(String index, ClusterState state) { return autoCreateIndex.shouldAutoCreate(index, state); } + void createIndex(String index, TimeValue timeout, ActionListener listener) { + CreateIndexRequest createIndexRequest = new CreateIndexRequest(); + createIndexRequest.index(index); + createIndexRequest.cause("auto(bulk api)"); + createIndexRequest.masterNodeTimeout(timeout); + createIndexAction.execute(createIndexRequest, listener); + } + private boolean setResponseFailureIfIndexMatches(AtomicArray responses, int idx, DocWriteRequest request, String index, Exception e) { if (index.equals(request.index())) { responses.set(idx, new BulkItemResponse(idx, request.opType(), new BulkItemResponse.Failure(request.index(), request.type(), request.id(), e))); @@ -209,164 +233,234 @@ private boolean setResponseFailureIfIndexMatches(AtomicArray r return false; } - /** - * This method executes the {@link BulkRequest} and calls the given listener once the request returns. - * This method will not create any indices even if auto-create indices is enabled. 
- * - * @see #doExecute(BulkRequest, org.elasticsearch.action.ActionListener) - */ - public void executeBulk(final BulkRequest bulkRequest, final ActionListener listener) { - final long startTimeNanos = relativeTime(); - executeBulk(null, bulkRequest, startTimeNanos, listener, new AtomicArray<>(bulkRequest.requests.size())); - } - private long buildTookInMillis(long startTimeNanos) { return TimeUnit.NANOSECONDS.toMillis(relativeTime() - startTimeNanos); } - void executeBulk(Task task, final BulkRequest bulkRequest, final long startTimeNanos, final ActionListener listener, final AtomicArray responses ) { - final ClusterState clusterState = clusterService.state(); - // TODO use timeout to wait here if its blocked... - clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.WRITE); - - final ConcreteIndices concreteIndices = new ConcreteIndices(clusterState, indexNameExpressionResolver); - MetaData metaData = clusterState.metaData(); - for (int i = 0; i < bulkRequest.requests.size(); i++) { - DocWriteRequest docWriteRequest = bulkRequest.requests.get(i); - //the request can only be null because we set it to null in the previous step, so it gets ignored - if (docWriteRequest == null) { - continue; + /** + * retries on retryable cluster blocks, resolves item requests, + * constructs shard bulk requests and delegates execution to shard bulk action + * */ + private final class BulkOperation extends AbstractRunnable { + private final Task task; + private final BulkRequest bulkRequest; + private final ActionListener listener; + private final AtomicArray responses; + private final long startTimeNanos; + private final ClusterStateObserver observer; + private final Map indicesThatCannotBeCreated; + + BulkOperation(Task task, BulkRequest bulkRequest, ActionListener listener, AtomicArray responses, + long startTimeNanos, Map indicesThatCannotBeCreated) { + this.task = task; + this.bulkRequest = bulkRequest; + this.listener = listener; + this.responses = responses; + this.startTimeNanos = startTimeNanos; + this.indicesThatCannotBeCreated = indicesThatCannotBeCreated; + this.observer = new ClusterStateObserver(clusterService, bulkRequest.timeout(), logger, threadPool.getThreadContext()); + } + + @Override + public void onFailure(Exception e) { + listener.onFailure(e); + } + + @Override + protected void doRun() throws Exception { + final ClusterState clusterState = observer.setAndGetObservedState(); + if (handleBlockExceptions(clusterState)) { + return; } - if (addFailureIfIndexIsUnavailable(docWriteRequest, bulkRequest, responses, i, concreteIndices, metaData)) { - continue; + final ConcreteIndices concreteIndices = new ConcreteIndices(clusterState, indexNameExpressionResolver); + MetaData metaData = clusterState.metaData(); + for (int i = 0; i < bulkRequest.requests.size(); i++) { + DocWriteRequest docWriteRequest = bulkRequest.requests.get(i); + //the request can only be null because we set it to null in the previous step, so it gets ignored + if (docWriteRequest == null) { + continue; + } + if (addFailureIfIndexIsUnavailable(docWriteRequest, i, concreteIndices, metaData)) { + continue; + } + Index concreteIndex = concreteIndices.resolveIfAbsent(docWriteRequest); + try { + switch (docWriteRequest.opType()) { + case CREATE: + case INDEX: + IndexRequest indexRequest = (IndexRequest) docWriteRequest; + MappingMetaData mappingMd = null; + final IndexMetaData indexMetaData = metaData.index(concreteIndex); + if (indexMetaData != null) { + mappingMd = 
indexMetaData.mappingOrDefault(indexRequest.type()); + } + indexRequest.resolveRouting(metaData); + indexRequest.process(mappingMd, concreteIndex.getName()); + break; + case UPDATE: + TransportUpdateAction.resolveAndValidateRouting(metaData, concreteIndex.getName(), (UpdateRequest) docWriteRequest); + break; + case DELETE: + docWriteRequest.routing(metaData.resolveIndexRouting(docWriteRequest.parent(), docWriteRequest.routing(), docWriteRequest.index())); + // check if routing is required, if so, throw error if routing wasn't specified + if (docWriteRequest.routing() == null && metaData.routingRequired(concreteIndex.getName(), docWriteRequest.type())) { + throw new RoutingMissingException(concreteIndex.getName(), docWriteRequest.type(), docWriteRequest.id()); + } + break; + default: throw new AssertionError("request type not supported: [" + docWriteRequest.opType() + "]"); + } + } catch (ElasticsearchParseException | IllegalArgumentException | RoutingMissingException e) { + BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex.getName(), docWriteRequest.type(), docWriteRequest.id(), e); + BulkItemResponse bulkItemResponse = new BulkItemResponse(i, docWriteRequest.opType(), failure); + responses.set(i, bulkItemResponse); + // make sure the request gets never processed again + bulkRequest.requests.set(i, null); + } } - Index concreteIndex = concreteIndices.resolveIfAbsent(docWriteRequest); - try { - switch (docWriteRequest.opType()) { - case CREATE: - case INDEX: - IndexRequest indexRequest = (IndexRequest) docWriteRequest; - MappingMetaData mappingMd = null; - final IndexMetaData indexMetaData = metaData.index(concreteIndex); - if (indexMetaData != null) { - mappingMd = indexMetaData.mappingOrDefault(indexRequest.type()); - } - indexRequest.resolveRouting(metaData); - indexRequest.process(mappingMd, allowIdGeneration, concreteIndex.getName()); - break; - case UPDATE: - TransportUpdateAction.resolveAndValidateRouting(metaData, concreteIndex.getName(), (UpdateRequest) docWriteRequest); - break; - case DELETE: - TransportDeleteAction.resolveAndValidateRouting(metaData, concreteIndex.getName(), (DeleteRequest) docWriteRequest); - break; - default: throw new AssertionError("request type not supported: [" + docWriteRequest.opType() + "]"); + + // first, go over all the requests and create a ShardId -> Operations mapping + Map> requestsByShard = new HashMap<>(); + for (int i = 0; i < bulkRequest.requests.size(); i++) { + DocWriteRequest request = bulkRequest.requests.get(i); + if (request == null) { + continue; } - } catch (ElasticsearchParseException | RoutingMissingException e) { - BulkItemResponse.Failure failure = new BulkItemResponse.Failure(concreteIndex.getName(), docWriteRequest.type(), docWriteRequest.id(), e); - BulkItemResponse bulkItemResponse = new BulkItemResponse(i, docWriteRequest.opType(), failure); - responses.set(i, bulkItemResponse); - // make sure the request gets never processed again - bulkRequest.requests.set(i, null); + String concreteIndex = concreteIndices.getConcreteIndex(request.index()).getName(); + ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, request.id(), request.routing()).shardId(); + List shardRequests = requestsByShard.computeIfAbsent(shardId, shard -> new ArrayList<>()); + shardRequests.add(new BulkItemRequest(i, request)); } - } - // first, go over all the requests and create a ShardId -> Operations mapping - Map> requestsByShard = new HashMap<>(); - for (int i = 0; i < 
bulkRequest.requests.size(); i++) { - DocWriteRequest request = bulkRequest.requests.get(i); - if (request == null) { - continue; + if (requestsByShard.isEmpty()) { + listener.onResponse(new BulkResponse(responses.toArray(new BulkItemResponse[responses.length()]), buildTookInMillis(startTimeNanos))); + return; + } + + final AtomicInteger counter = new AtomicInteger(requestsByShard.size()); + String nodeId = clusterService.localNode().getId(); + for (Map.Entry> entry : requestsByShard.entrySet()) { + final ShardId shardId = entry.getKey(); + final List requests = entry.getValue(); + BulkShardRequest bulkShardRequest = new BulkShardRequest(shardId, bulkRequest.getRefreshPolicy(), + requests.toArray(new BulkItemRequest[requests.size()])); + bulkShardRequest.waitForActiveShards(bulkRequest.waitForActiveShards()); + bulkShardRequest.timeout(bulkRequest.timeout()); + if (task != null) { + bulkShardRequest.setParentTask(nodeId, task.getId()); + } + shardBulkAction.execute(bulkShardRequest, new ActionListener() { + @Override + public void onResponse(BulkShardResponse bulkShardResponse) { + for (BulkItemResponse bulkItemResponse : bulkShardResponse.getResponses()) { + // we may have no response if item failed + if (bulkItemResponse.getResponse() != null) { + bulkItemResponse.getResponse().setShardInfo(bulkShardResponse.getShardInfo()); + } + responses.set(bulkItemResponse.getItemId(), bulkItemResponse); + } + if (counter.decrementAndGet() == 0) { + finishHim(); + } + } + + @Override + public void onFailure(Exception e) { + // create failures for all relevant requests + for (BulkItemRequest request : requests) { + final String indexName = concreteIndices.getConcreteIndex(request.index()).getName(); + DocWriteRequest docWriteRequest = request.request(); + responses.set(request.id(), new BulkItemResponse(request.id(), docWriteRequest.opType(), + new BulkItemResponse.Failure(indexName, docWriteRequest.type(), docWriteRequest.id(), e))); + } + if (counter.decrementAndGet() == 0) { + finishHim(); + } + } + + private void finishHim() { + listener.onResponse(new BulkResponse(responses.toArray(new BulkItemResponse[responses.length()]), buildTookInMillis(startTimeNanos))); + } + }); } - String concreteIndex = concreteIndices.getConcreteIndex(request.index()).getName(); - ShardId shardId = clusterService.operationRouting().indexShards(clusterState, concreteIndex, request.id(), request.routing()).shardId(); - List shardRequests = requestsByShard.computeIfAbsent(shardId, shard -> new ArrayList<>()); - shardRequests.add(new BulkItemRequest(i, request)); } - if (requestsByShard.isEmpty()) { - listener.onResponse(new BulkResponse(responses.toArray(new BulkItemResponse[responses.length()]), buildTookInMillis(startTimeNanos))); - return; + private boolean handleBlockExceptions(ClusterState state) { + ClusterBlockException blockException = state.blocks().globalBlockedException(ClusterBlockLevel.WRITE); + if (blockException != null) { + if (blockException.retryable()) { + logger.trace("cluster is blocked, scheduling a retry", blockException); + retry(blockException); + } else { + onFailure(blockException); + } + return true; + } + return false; } - final AtomicInteger counter = new AtomicInteger(requestsByShard.size()); - String nodeId = clusterService.localNode().getId(); - for (Map.Entry> entry : requestsByShard.entrySet()) { - final ShardId shardId = entry.getKey(); - final List requests = entry.getValue(); - BulkShardRequest bulkShardRequest = new BulkShardRequest(shardId, bulkRequest.getRefreshPolicy(), - 
requests.toArray(new BulkItemRequest[requests.size()])); - bulkShardRequest.waitForActiveShards(bulkRequest.waitForActiveShards()); - bulkShardRequest.timeout(bulkRequest.timeout()); - if (task != null) { - bulkShardRequest.setParentTask(nodeId, task.getId()); + void retry(Exception failure) { + assert failure != null; + if (observer.isTimedOut()) { + // we running as a last attempt after a timeout has happened. don't retry + onFailure(failure); + return; } - shardBulkAction.execute(bulkShardRequest, new ActionListener() { + observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override - public void onResponse(BulkShardResponse bulkShardResponse) { - for (BulkItemResponse bulkItemResponse : bulkShardResponse.getResponses()) { - // we may have no response if item failed - if (bulkItemResponse.getResponse() != null) { - bulkItemResponse.getResponse().setShardInfo(bulkShardResponse.getShardInfo()); - } - responses.set(bulkItemResponse.getItemId(), bulkItemResponse); - } - if (counter.decrementAndGet() == 0) { - finishHim(); - } + public void onNewClusterState(ClusterState state) { + run(); } @Override - public void onFailure(Exception e) { - // create failures for all relevant requests - for (BulkItemRequest request : requests) { - final String indexName = concreteIndices.getConcreteIndex(request.index()).getName(); - DocWriteRequest docWriteRequest = request.request(); - responses.set(request.id(), new BulkItemResponse(request.id(), docWriteRequest.opType(), - new BulkItemResponse.Failure(indexName, docWriteRequest.type(), docWriteRequest.id(), e))); - } - if (counter.decrementAndGet() == 0) { - finishHim(); - } + public void onClusterServiceClose() { + onFailure(new NodeClosedException(clusterService.localNode())); } - private void finishHim() { - listener.onResponse(new BulkResponse(responses.toArray(new BulkItemResponse[responses.length()]), buildTookInMillis(startTimeNanos))); + @Override + public void onTimeout(TimeValue timeout) { + // Try one more time... 
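The per-shard fan-out in `BulkOperation` above groups every bulk item by the shard it routes to, dispatches one `BulkShardRequest` per shard, and joins the partial results with an `AtomicInteger` countdown before answering the caller. A minimal sketch of that pattern follows; the `Item` record, string "shard ids", and inline execution are hypothetical stand-ins for the Elasticsearch types, not the real implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class ShardFanOutSketch {

    /** Hypothetical bulk item: its slot in the original request plus the "shard" it routes to. */
    record Item(int slot, String shardId) {}

    public static void main(String[] args) {
        List<Item> items = List.of(new Item(0, "shard-0"), new Item(1, "shard-1"), new Item(2, "shard-0"));

        // 1. group items by target shard (ShardId -> operations mapping)
        Map<String, List<Item>> requestsByShard = new HashMap<>();
        for (Item item : items) {
            requestsByShard.computeIfAbsent(item.shardId(), shard -> new ArrayList<>()).add(item);
        }

        // 2. fan out one "shard request" per group and join with a countdown
        String[] responses = new String[items.size()];
        AtomicInteger counter = new AtomicInteger(requestsByShard.size());
        requestsByShard.forEach((shard, group) -> {
            // a real implementation executes these asynchronously; the sketch fills results inline
            for (Item item : group) {
                responses[item.slot()] = shard + " ok";
            }
            if (counter.decrementAndGet() == 0) {
                // the last shard to finish reports the combined result, in original request order
                System.out.println(String.join(", ", responses));
            }
        });
    }
}
```

Keeping each item's original slot is what allows the per-shard responses to be stitched back into a single bulk response in request order, regardless of which shard finishes first.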
+ run(); } }); } - } - private boolean addFailureIfIndexIsUnavailable(DocWriteRequest request, BulkRequest bulkRequest, AtomicArray responses, int idx, - final ConcreteIndices concreteIndices, - final MetaData metaData) { - Index concreteIndex = concreteIndices.getConcreteIndex(request.index()); - Exception unavailableException = null; - if (concreteIndex == null) { - try { - concreteIndex = concreteIndices.resolveIfAbsent(request); - } catch (IndexClosedException | IndexNotFoundException ex) { - // Fix for issue where bulk request references an index that - // cannot be auto-created see issue #8125 - unavailableException = ex; + private boolean addFailureIfIndexIsUnavailable(DocWriteRequest request, int idx, final ConcreteIndices concreteIndices, + final MetaData metaData) { + IndexNotFoundException cannotCreate = indicesThatCannotBeCreated.get(request.index()); + if (cannotCreate != null) { + addFailure(request, idx, cannotCreate); + return true; + } + Index concreteIndex = concreteIndices.getConcreteIndex(request.index()); + if (concreteIndex == null) { + try { + concreteIndex = concreteIndices.resolveIfAbsent(request); + } catch (IndexClosedException | IndexNotFoundException ex) { + addFailure(request, idx, ex); + return true; + } } - } - if (unavailableException == null) { IndexMetaData indexMetaData = metaData.getIndexSafe(concreteIndex); if (indexMetaData.getState() == IndexMetaData.State.CLOSE) { - unavailableException = new IndexClosedException(concreteIndex); + addFailure(request, idx, new IndexClosedException(concreteIndex)); + return true; } + return false; } - if (unavailableException != null) { + + private void addFailure(DocWriteRequest request, int idx, Exception unavailableException) { BulkItemResponse.Failure failure = new BulkItemResponse.Failure(request.index(), request.type(), request.id(), unavailableException); BulkItemResponse bulkItemResponse = new BulkItemResponse(idx, request.opType(), failure); responses.set(idx, bulkItemResponse); // make sure the request gets never processed again bulkRequest.requests.set(idx, null); - return true; } - return false; + } + + void executeBulk(Task task, final BulkRequest bulkRequest, final long startTimeNanos, final ActionListener listener, + final AtomicArray responses, Map indicesThatCannotBeCreated) { + new BulkOperation(task, bulkRequest, listener, responses, startTimeNanos, indicesThatCannotBeCreated).run(); } private static class ConcreteIndices { @@ -475,9 +569,9 @@ BulkRequest getBulkRequest() { ActionListener wrapActionListenerIfNeeded(long ingestTookInMillis, ActionListener actionListener) { if (itemResponses.isEmpty()) { return ActionListener.wrap( - response -> actionListener.onResponse( - new BulkResponse(response.getItems(), response.getTookInMillis(), ingestTookInMillis)), - actionListener::onFailure); + response -> actionListener.onResponse(new BulkResponse(response.getItems(), + response.getTook().getMillis(), ingestTookInMillis)), + actionListener::onFailure); } else { return new IngestBulkResponseListener(ingestTookInMillis, originalSlots, itemResponses, actionListener); } @@ -516,7 +610,9 @@ public void onResponse(BulkResponse response) { for (int i = 0; i < items.length; i++) { itemResponses.add(originalSlots[i], response.getItems()[i]); } - actionListener.onResponse(new BulkResponse(itemResponses.toArray(new BulkItemResponse[itemResponses.size()]), response.getTookInMillis(), ingestTookInMillis)); + actionListener.onResponse(new BulkResponse( + itemResponses.toArray(new 
BulkItemResponse[itemResponses.size()]), + response.getTook().getMillis(), ingestTookInMillis)); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java index 86024e4dcd592..7a2c5eb02222a 100644 --- a/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportShardBulkAction.java @@ -19,8 +19,10 @@ package org.elasticsearch.action.bulk; +import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.action.delete.DeleteRequest; @@ -28,6 +30,8 @@ import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.index.IndexResponse; import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.TransportActions; +import org.elasticsearch.action.support.replication.ReplicationOperation; import org.elasticsearch.action.support.replication.ReplicationResponse.ShardInfo; import org.elasticsearch.action.support.replication.TransportWriteAction; import org.elasticsearch.action.update.UpdateHelper; @@ -42,39 +46,38 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.collect.Tuple; import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.logging.ESLoggerFactory; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.engine.EngineClosedException; import org.elasticsearch.index.engine.VersionConflictEngineException; +import org.elasticsearch.index.get.GetResult; import org.elasticsearch.index.mapper.MapperParsingException; +import org.elasticsearch.index.mapper.Mapping; +import org.elasticsearch.index.mapper.SourceToParse; import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.index.shard.IndexShardClosedException; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.indices.IndicesService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportService; +import java.io.IOException; import java.util.Map; - -import static org.elasticsearch.action.delete.TransportDeleteAction.executeDeleteRequestOnPrimary; -import static org.elasticsearch.action.delete.TransportDeleteAction.executeDeleteRequestOnReplica; -import static org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary; -import static org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnReplica; -import static org.elasticsearch.action.support.replication.ReplicationOperation.ignoreReplicaException; -import static org.elasticsearch.action.support.replication.ReplicationOperation.isConflictException; +import java.util.function.LongSupplier; /** Performs shard-level bulk (index, delete or update) operations */ public class TransportShardBulkAction extends 
TransportWriteAction { public static final String ACTION_NAME = BulkAction.NAME + "[s]"; + private static final Logger logger = ESLoggerFactory.getLogger(TransportShardBulkAction.class); + private final UpdateHelper updateHelper; - private final boolean allowIdGeneration; private final MappingUpdatedAction mappingUpdatedAction; @Inject @@ -85,7 +88,6 @@ public TransportShardBulkAction(Settings settings, TransportService transportSer super(settings, ACTION_NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, actionFilters, indexNameExpressionResolver, BulkShardRequest::new, BulkShardRequest::new, ThreadPool.Names.BULK); this.updateHelper = updateHelper; - this.allowIdGeneration = settings.getAsBoolean("action.allow_id_generation", true); this.mappingUpdatedAction = mappingUpdatedAction; } @@ -105,146 +107,162 @@ protected boolean resolveIndex() { } @Override - protected WritePrimaryResult shardOperationOnPrimary(BulkShardRequest request, IndexShard primary) throws Exception { - final IndexMetaData metaData = primary.indexSettings().getIndexMetaData(); + public WritePrimaryResult shardOperationOnPrimary( + BulkShardRequest request, IndexShard primary) throws Exception { + return performOnPrimary(request, primary, updateHelper, threadPool::absoluteTimeInMillis, new ConcreteMappingUpdatePerformer()); + } - long[] preVersions = new long[request.items().length]; - VersionType[] preVersionTypes = new VersionType[request.items().length]; + public static WritePrimaryResult performOnPrimary( + BulkShardRequest request, + IndexShard primary, + UpdateHelper updateHelper, + LongSupplier nowInMillisSupplier, + MappingUpdatePerformer mappingUpdater) throws Exception { + final IndexMetaData metaData = primary.indexSettings().getIndexMetaData(); Translog.Location location = null; for (int requestIndex = 0; requestIndex < request.items().length; requestIndex++) { - location = executeBulkItemRequest(metaData, primary, request, preVersions, preVersionTypes, location, requestIndex); + location = executeBulkItemRequest(metaData, primary, request, location, requestIndex, + updateHelper, nowInMillisSupplier, mappingUpdater); } - BulkItemResponse[] responses = new BulkItemResponse[request.items().length]; BulkItemRequest[] items = request.items(); for (int i = 0; i < items.length; i++) { responses[i] = items[i].getPrimaryResponse(); } BulkShardResponse response = new BulkShardResponse(request.shardId(), responses); - return new WritePrimaryResult(request, response, location, null, primary); + return new WritePrimaryResult<>(request, response, location, null, primary, logger); } - /** Executes bulk item requests and handles request execution exceptions */ - private Translog.Location executeBulkItemRequest(IndexMetaData metaData, IndexShard primary, - BulkShardRequest request, - long[] preVersions, VersionType[] preVersionTypes, - Translog.Location location, int requestIndex) throws Exception { - final DocWriteRequest itemRequest = request.items()[requestIndex].request(); - preVersions[requestIndex] = itemRequest.version(); - preVersionTypes[requestIndex] = itemRequest.versionType(); - DocWriteRequest.OpType opType = itemRequest.opType(); - try { - // execute item request - final Engine.Result operationResult; - final DocWriteResponse response; - final BulkItemRequest replicaRequest; - switch (itemRequest.opType()) { - case CREATE: - case INDEX: - final IndexRequest indexRequest = (IndexRequest) itemRequest; - Engine.IndexResult indexResult = 
executeIndexRequestOnPrimary(indexRequest, primary, mappingUpdatedAction); - if (indexResult.hasFailure()) { - response = null; - } else { - // update the version on request so it will happen on the replicas - final long version = indexResult.getVersion(); - indexRequest.version(version); - indexRequest.versionType(indexRequest.versionType().versionTypeForReplicationAndRecovery()); - indexRequest.setSeqNo(indexResult.getSeqNo()); - assert indexRequest.versionType().validateVersionForWrites(indexRequest.version()); - response = new IndexResponse(primary.shardId(), indexRequest.type(), indexRequest.id(), indexResult.getSeqNo(), - indexResult.getVersion(), indexResult.isCreated()); - } - operationResult = indexResult; - replicaRequest = request.items()[requestIndex]; - break; - case UPDATE: - UpdateResultHolder updateResultHolder = executeUpdateRequest(((UpdateRequest) itemRequest), - primary, metaData, request, requestIndex); - operationResult = updateResultHolder.operationResult; - response = updateResultHolder.response; - replicaRequest = updateResultHolder.replicaRequest; - break; - case DELETE: - final DeleteRequest deleteRequest = (DeleteRequest) itemRequest; - Engine.DeleteResult deleteResult = executeDeleteRequestOnPrimary(deleteRequest, primary); - if (deleteResult.hasFailure()) { - response = null; - } else { - // update the request with the version so it will go to the replicas - deleteRequest.versionType(deleteRequest.versionType().versionTypeForReplicationAndRecovery()); - deleteRequest.version(deleteResult.getVersion()); - deleteRequest.setSeqNo(deleteResult.getSeqNo()); - assert deleteRequest.versionType().validateVersionForWrites(deleteRequest.version()); - response = new DeleteResponse(request.shardId(), deleteRequest.type(), deleteRequest.id(), deleteResult.getSeqNo(), - deleteResult.getVersion(), deleteResult.isFound()); - } - operationResult = deleteResult; - replicaRequest = request.items()[requestIndex]; - break; - default: throw new IllegalStateException("unexpected opType [" + itemRequest.opType() + "] found"); - } + private static BulkItemResultHolder executeIndexRequest(final IndexRequest indexRequest, + final BulkItemRequest bulkItemRequest, + final IndexShard primary, + final MappingUpdatePerformer mappingUpdater) throws Exception { + Engine.IndexResult indexResult = executeIndexRequestOnPrimary(indexRequest, primary, mappingUpdater); + if (indexResult.hasFailure()) { + return new BulkItemResultHolder(null, indexResult, bulkItemRequest); + } else { + IndexResponse response = new IndexResponse(primary.shardId(), indexRequest.type(), indexRequest.id(), + indexResult.getSeqNo(), primary.getPrimaryTerm(), indexResult.getVersion(), indexResult.isCreated()); + return new BulkItemResultHolder(response, indexResult, bulkItemRequest); + } + } + + private static BulkItemResultHolder executeDeleteRequest(final DeleteRequest deleteRequest, + final BulkItemRequest bulkItemRequest, + final IndexShard primary, + final MappingUpdatePerformer mappingUpdater) throws Exception { + Engine.DeleteResult deleteResult = executeDeleteRequestOnPrimary(deleteRequest, primary, mappingUpdater); + if (deleteResult.hasFailure()) { + return new BulkItemResultHolder(null, deleteResult, bulkItemRequest); + } else { + DeleteResponse response = new DeleteResponse(primary.shardId(), deleteRequest.type(), deleteRequest.id(), + deleteResult.getSeqNo(), primary.getPrimaryTerm(), deleteResult.getVersion(), deleteResult.isFound()); + return new BulkItemResultHolder(response, deleteResult, bulkItemRequest); 
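Each primary-side helper above wraps the low-level engine result and the client-facing response together in a holder, so that the no-op, success, and failure cases can be folded into a single per-item response in one place. A rough sketch of that shape, using simplified hypothetical types in place of `Engine.Result` and `DocWriteResponse`:

```java
/** Simplified, hypothetical stand-ins; the real classes also carry seq numbers, versions and shard info. */
public class PrimaryResponseSketch {

    static final class OperationResult {
        final Exception failure;                      // null on success
        OperationResult(Exception failure) { this.failure = failure; }
        boolean hasFailure() { return failure != null; }
    }

    /** Pairs the low-level engine result with the client-facing response for one bulk item. */
    static final class ItemResultHolder {
        final String response;                        // what the client should see, null on failure
        final OperationResult result;                 // null when the operation turned out to be a no-op
        ItemResultHolder(String response, OperationResult result) {
            this.response = response;
            this.result = result;
        }
    }

    /** Fold a holder into the single per-item outcome reported back to the coordinating node. */
    static String toItemResponse(ItemResultHolder holder) {
        if (holder.result == null) {
            return "noop: " + holder.response;                          // nothing was executed
        } else if (!holder.result.hasFailure()) {
            return "ok: " + holder.response;                            // normal success
        } else {
            return "failed: " + holder.result.failure.getMessage();     // the item fails, the bulk itself still succeeds
        }
    }

    public static void main(String[] args) {
        System.out.println(toItemResponse(new ItemResultHolder("updated", null)));
        System.out.println(toItemResponse(new ItemResultHolder("created", new OperationResult(null))));
        System.out.println(toItemResponse(new ItemResultHolder(null,
                new OperationResult(new IllegalArgumentException("routing is required")))));
    }
}
```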
+ } + } + + static Translog.Location calculateTranslogLocation(final Translog.Location originalLocation, + final BulkItemResultHolder bulkItemResult) { + final Engine.Result operationResult = bulkItemResult.operationResult; + if (operationResult != null && operationResult.hasFailure() == false) { + return locationToSync(originalLocation, operationResult.getTranslogLocation()); + } else { + return originalLocation; + } + } + + // Visible for unit testing + /** + * Creates a BulkItemResponse for the primary operation and returns it. If no bulk response is + * needed (because one already exists and the operation failed), then return null. + */ + static BulkItemResponse createPrimaryResponse(BulkItemResultHolder bulkItemResult, + final DocWriteRequest.OpType opType, + BulkShardRequest request) { + final Engine.Result operationResult = bulkItemResult.operationResult; + final DocWriteResponse response = bulkItemResult.response; + final BulkItemRequest replicaRequest = bulkItemResult.replicaRequest; + + if (operationResult == null) { // in case of noop update operation + assert response.getResult() == DocWriteResponse.Result.NOOP : "only noop updates can have a null operation"; + return new BulkItemResponse(replicaRequest.id(), opType, response); - // update the bulk item request because update request execution can mutate the bulk item request - request.items()[requestIndex] = replicaRequest; - if (operationResult == null) { // in case of noop update operation - assert response.getResult() == DocWriteResponse.Result.NOOP - : "only noop update can have null operation"; - replicaRequest.setIgnoreOnReplica(); - replicaRequest.setPrimaryResponse(new BulkItemResponse(replicaRequest.id(), opType, response)); - } else if (operationResult.hasFailure() == false) { - location = locationToSync(location, operationResult.getTranslogLocation()); - BulkItemResponse primaryResponse = new BulkItemResponse(replicaRequest.id(), opType, response); - replicaRequest.setPrimaryResponse(primaryResponse); - // set the ShardInfo to 0 so we can safely send it to the replicas. We won't use it in the real response though. - primaryResponse.getResponse().setShardInfo(new ShardInfo()); + } else if (operationResult.hasFailure() == false) { + BulkItemResponse primaryResponse = new BulkItemResponse(replicaRequest.id(), opType, response); + // set a blank ShardInfo so we can safely send it to the replicas. We won't use it in the real response though. 
+ primaryResponse.getResponse().setShardInfo(new ShardInfo()); + return primaryResponse; + + } else { + DocWriteRequest docWriteRequest = replicaRequest.request(); + Exception failure = operationResult.getFailure(); + if (isConflictException(failure)) { + logger.trace((Supplier) () -> new ParameterizedMessage("{} failed to execute bulk item ({}) {}", + request.shardId(), docWriteRequest.opType().getLowercase(), request), failure); } else { - DocWriteRequest docWriteRequest = replicaRequest.request(); - Exception failure = operationResult.getFailure(); - if (isConflictException(failure)) { - logger.trace((Supplier) () -> new ParameterizedMessage("{} failed to execute bulk item ({}) {}", - request.shardId(), docWriteRequest.opType().getLowercase(), request), failure); - } else { - logger.debug((Supplier) () -> new ParameterizedMessage("{} failed to execute bulk item ({}) {}", - request.shardId(), docWriteRequest.opType().getLowercase(), request), failure); - } - // if its a conflict failure, and we already executed the request on a primary (and we execute it - // again, due to primary relocation and only processing up to N bulk items when the shard gets closed) - // then just use the response we got from the successful execution - if (replicaRequest.getPrimaryResponse() == null || isConflictException(failure) == false) { - replicaRequest.setIgnoreOnReplica(); - replicaRequest.setPrimaryResponse(new BulkItemResponse(replicaRequest.id(), docWriteRequest.opType(), - new BulkItemResponse.Failure(request.index(), docWriteRequest.type(), docWriteRequest.id(), failure))); - } + logger.debug((Supplier) () -> new ParameterizedMessage("{} failed to execute bulk item ({}) {}", + request.shardId(), docWriteRequest.opType().getLowercase(), request), failure); } - assert replicaRequest.getPrimaryResponse() != null; - assert preVersionTypes[requestIndex] != null; - } catch (Exception e) { - // rethrow the failure if we are going to retry on primary and let parent failure to handle it - if (retryPrimaryException(e)) { - // restore updated versions... - for (int j = 0; j < requestIndex; j++) { - DocWriteRequest docWriteRequest = request.items()[j].request(); - docWriteRequest.version(preVersions[j]); - docWriteRequest.versionType(preVersionTypes[j]); - } + + // if it's a conflict failure, and we already executed the request on a primary (and we execute it + // again, due to primary relocation and only processing up to N bulk items when the shard gets closed) + // then just use the response we got from the failed execution + if (replicaRequest.getPrimaryResponse() == null || isConflictException(failure) == false) { + return new BulkItemResponse(replicaRequest.id(), docWriteRequest.opType(), + // Make sure to use request.index() here, if you + // use docWriteRequest.index() it will use the + // concrete index instead of an alias if used! 
+ new BulkItemResponse.Failure(request.index(), docWriteRequest.type(), docWriteRequest.id(), + failure, operationResult.getSeqNo())); + } else { + assert replicaRequest.getPrimaryResponse() != null : "replica request must have a primary response"; + return null; } - throw e; } - return location; } - private static class UpdateResultHolder { - final BulkItemRequest replicaRequest; - final Engine.Result operationResult; - final DocWriteResponse response; + /** Executes bulk item requests and handles request execution exceptions */ + static Translog.Location executeBulkItemRequest(IndexMetaData metaData, IndexShard primary, + BulkShardRequest request, Translog.Location location, + int requestIndex, UpdateHelper updateHelper, + LongSupplier nowInMillisSupplier, + final MappingUpdatePerformer mappingUpdater) throws Exception { + final DocWriteRequest itemRequest = request.items()[requestIndex].request(); + final DocWriteRequest.OpType opType = itemRequest.opType(); + final BulkItemResultHolder responseHolder; + switch (itemRequest.opType()) { + case CREATE: + case INDEX: + responseHolder = executeIndexRequest((IndexRequest) itemRequest, + request.items()[requestIndex], primary, mappingUpdater); + break; + case UPDATE: + responseHolder = executeUpdateRequest((UpdateRequest) itemRequest, primary, metaData, request, + requestIndex, updateHelper, nowInMillisSupplier, mappingUpdater); + break; + case DELETE: + responseHolder = executeDeleteRequest((DeleteRequest) itemRequest, request.items()[requestIndex], primary, mappingUpdater); + break; + default: throw new IllegalStateException("unexpected opType [" + itemRequest.opType() + "] found"); + } + + final BulkItemRequest replicaRequest = responseHolder.replicaRequest; + + // update the bulk item request because update request execution can mutate the bulk item request + request.items()[requestIndex] = replicaRequest; - private UpdateResultHolder(BulkItemRequest replicaRequest, Engine.Result operationResult, - DocWriteResponse response) { - this.replicaRequest = replicaRequest; - this.operationResult = operationResult; - this.response = response; + // Retrieve the primary response, and update the replica request with the primary's response + BulkItemResponse primaryResponse = createPrimaryResponse(responseHolder, opType, request); + if (primaryResponse != null) { + replicaRequest.setPrimaryResponse(primaryResponse); } + + // Update the translog with the new location, if needed + return calculateTranslogLocation(location, responseHolder); + } + + private static boolean isConflictException(final Exception e) { + return ExceptionsHelper.unwrapCause(e) instanceof VersionConflictEngineException; } /** @@ -252,12 +270,14 @@ private UpdateResultHolder(BulkItemRequest replicaRequest, Engine.Result operati * handles retries on version conflict and constructs update response * NOTE: reassigns bulk item request at requestIndex for replicas to * execute translated update request (NOOP update is an exception). 
NOOP updates are - * indicated by returning a null operation in {@link UpdateResultHolder} + * indicated by returning a null operation in {@link BulkItemResultHolder} * */ - private UpdateResultHolder executeUpdateRequest(UpdateRequest updateRequest, IndexShard primary, - IndexMetaData metaData, BulkShardRequest request, - int requestIndex) throws Exception { - Engine.Result updateOperationResult = null; + private static BulkItemResultHolder executeUpdateRequest(UpdateRequest updateRequest, IndexShard primary, + IndexMetaData metaData, BulkShardRequest request, + int requestIndex, UpdateHelper updateHelper, + LongSupplier nowInMillis, + final MappingUpdatePerformer mappingUpdater) throws Exception { + Engine.Result result = null; UpdateResponse updateResponse = null; BulkItemRequest replicaRequest = request.items()[requestIndex]; int maxAttempts = updateRequest.retryOnConflict(); @@ -265,11 +285,11 @@ private UpdateResultHolder executeUpdateRequest(UpdateRequest updateRequest, Ind final UpdateHelper.Result translate; // translate update request try { - translate = updateHelper.prepare(updateRequest, primary, threadPool::estimatedTimeInMillis); + translate = updateHelper.prepare(updateRequest, primary, nowInMillis); } catch (Exception failure) { // we may fail translating a update to index or delete operation // we use index result to communicate failure while translating update request - updateOperationResult = new Engine.IndexResult(failure, updateRequest.version(), SequenceNumbersService.UNASSIGNED_SEQ_NO); + result = new Engine.IndexResult(failure, updateRequest.version(), SequenceNumbersService.UNASSIGNED_SEQ_NO); break; // out of retry loop } // execute translated update request @@ -278,54 +298,51 @@ private UpdateResultHolder executeUpdateRequest(UpdateRequest updateRequest, Ind case UPDATED: IndexRequest indexRequest = translate.action(); MappingMetaData mappingMd = metaData.mappingOrDefault(indexRequest.type()); - indexRequest.process(mappingMd, allowIdGeneration, request.index()); - updateOperationResult = executeIndexRequestOnPrimary(indexRequest, primary, mappingUpdatedAction); - if (updateOperationResult.hasFailure() == false) { - // update the version on request so it will happen on the replicas - final long version = updateOperationResult.getVersion(); - indexRequest.version(version); - indexRequest.versionType(indexRequest.versionType().versionTypeForReplicationAndRecovery()); - indexRequest.setSeqNo(updateOperationResult.getSeqNo()); - assert indexRequest.versionType().validateVersionForWrites(indexRequest.version()); - } + indexRequest.process(mappingMd, request.index()); + result = executeIndexRequestOnPrimary(indexRequest, primary, mappingUpdater); break; case DELETED: DeleteRequest deleteRequest = translate.action(); - updateOperationResult = executeDeleteRequestOnPrimary(deleteRequest, primary); - if (updateOperationResult.hasFailure() == false) { - // update the request with the version so it will go to the replicas - deleteRequest.versionType(deleteRequest.versionType().versionTypeForReplicationAndRecovery()); - deleteRequest.version(updateOperationResult.getVersion()); - deleteRequest.setSeqNo(updateOperationResult.getSeqNo()); - assert deleteRequest.versionType().validateVersionForWrites(deleteRequest.version()); - } + result = executeDeleteRequestOnPrimary(deleteRequest, primary, mappingUpdater); break; case NOOP: primary.noopUpdate(updateRequest.type()); break; default: throw new IllegalStateException("Illegal update operation " + translate.getResponseResult()); 
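The update path above re-translates and re-executes the request up to `retryOnConflict` additional times, breaking out of the loop on success, on a no-op, or on any failure that is not a version conflict. A compact sketch of that control flow follows, modeling the conflict as an exception for brevity and using a hypothetical `attempt()` in place of the translate-and-execute step:

```java
import java.util.concurrent.ThreadLocalRandom;

public class RetryOnConflictSketch {

    /** Thrown by attempt() when the document changed between read and write. */
    static class VersionConflictException extends RuntimeException {}

    /** Hypothetical translate-and-execute step; conflicts are the only retryable failure. */
    static String attempt() {
        if (ThreadLocalRandom.current().nextInt(3) == 0) {
            throw new VersionConflictException();
        }
        return "updated";
    }

    static String executeUpdate(int retryOnConflict) {
        Exception lastFailure = null;
        for (int attempt = 0; attempt <= retryOnConflict; attempt++) {
            try {
                return attempt();                       // success (or no-op): break out of the retry loop
            } catch (VersionConflictException e) {
                lastFailure = e;                        // conflict: loop and try again against fresh state
            } catch (RuntimeException e) {
                return "failed: " + e.getMessage();     // any other failure is not retried
            }
        }
        return "failed after retries: " + lastFailure;
    }

    public static void main(String[] args) {
        System.out.println(executeUpdate(2));
    }
}
```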
} - if (updateOperationResult == null) { + if (result == null) { // this is a noop operation updateResponse = translate.action(); break; // out of retry loop - } else if (updateOperationResult.hasFailure() == false) { + } else if (result.hasFailure() == false) { // enrich update response and // set translated update (index/delete) request for replica execution in bulk items - switch (updateOperationResult.getOperationType()) { + switch (result.getOperationType()) { case INDEX: + assert result instanceof Engine.IndexResult : result.getClass(); IndexRequest updateIndexRequest = translate.action(); - final IndexResponse indexResponse = new IndexResponse(primary.shardId(), - updateIndexRequest.type(), updateIndexRequest.id(), updateOperationResult.getSeqNo(), - updateOperationResult.getVersion(), ((Engine.IndexResult) updateOperationResult).isCreated()); + final IndexResponse indexResponse = new IndexResponse( + primary.shardId(), + updateIndexRequest.type(), + updateIndexRequest.id(), + result.getSeqNo(), + primary.getPrimaryTerm(), + result.getVersion(), + ((Engine.IndexResult) result).isCreated()); BytesReference indexSourceAsBytes = updateIndexRequest.source(); - updateResponse = new UpdateResponse(indexResponse.getShardInfo(), - indexResponse.getShardId(), indexResponse.getType(), indexResponse.getId(), indexResponse.getSeqNo(), - indexResponse.getVersion(), indexResponse.getResult()); + updateResponse = new UpdateResponse( + indexResponse.getShardInfo(), + indexResponse.getShardId(), + indexResponse.getType(), + indexResponse.getId(), + indexResponse.getSeqNo(), + indexResponse.getPrimaryTerm(), + indexResponse.getVersion(), + indexResponse.getResult()); if ((updateRequest.fetchSource() != null && updateRequest.fetchSource().fetchSource()) || (updateRequest.fields() != null && updateRequest.fields().length > 0)) { Tuple> sourceAndContent = - XContentHelper.convertToMap(indexSourceAsBytes, true); + XContentHelper.convertToMap(indexSourceAsBytes, true, updateIndexRequest.getContentType()); updateResponse.setGetResult(updateHelper.extractGetResult(updateRequest, request.index(), indexResponse.getVersion(), sourceAndContent.v2(), sourceAndContent.v1(), indexSourceAsBytes)); } @@ -333,91 +350,353 @@ private UpdateResultHolder executeUpdateRequest(UpdateRequest updateRequest, Ind replicaRequest = new BulkItemRequest(request.items()[requestIndex].id(), updateIndexRequest); break; case DELETE: + assert result instanceof Engine.DeleteResult : result.getClass(); DeleteRequest updateDeleteRequest = translate.action(); - DeleteResponse deleteResponse = new DeleteResponse(primary.shardId(), - updateDeleteRequest.type(), updateDeleteRequest.id(), updateOperationResult.getSeqNo(), - updateOperationResult.getVersion(), ((Engine.DeleteResult) updateOperationResult).isFound()); - updateResponse = new UpdateResponse(deleteResponse.getShardInfo(), - deleteResponse.getShardId(), deleteResponse.getType(), deleteResponse.getId(), deleteResponse.getSeqNo(), - deleteResponse.getVersion(), deleteResponse.getResult()); - updateResponse.setGetResult(updateHelper.extractGetResult(updateRequest, - request.index(), deleteResponse.getVersion(), translate.updatedSourceAsMap(), - translate.updateSourceContentType(), null)); + DeleteResponse deleteResponse = new DeleteResponse( + primary.shardId(), + updateDeleteRequest.type(), + updateDeleteRequest.id(), + result.getSeqNo(), + primary.getPrimaryTerm(), + result.getVersion(), + ((Engine.DeleteResult) result).isFound()); + updateResponse = new UpdateResponse( + 
deleteResponse.getShardInfo(), + deleteResponse.getShardId(), + deleteResponse.getType(), + deleteResponse.getId(), + deleteResponse.getSeqNo(), + deleteResponse.getPrimaryTerm(), + deleteResponse.getVersion(), + deleteResponse.getResult()); + final GetResult getResult = updateHelper.extractGetResult( + updateRequest, + request.index(), + deleteResponse.getVersion(), + translate.updatedSourceAsMap(), + translate.updateSourceContentType(), + null); + updateResponse.setGetResult(getResult); // set translated request as replica request replicaRequest = new BulkItemRequest(request.items()[requestIndex].id(), updateDeleteRequest); break; } - assert (replicaRequest.request() instanceof IndexRequest - && ((IndexRequest) replicaRequest.request()).getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO) || - (replicaRequest.request() instanceof DeleteRequest - && ((DeleteRequest) replicaRequest.request()).getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO); + assert result.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO; // successful operation break; // out of retry loop - } else if (updateOperationResult.getFailure() instanceof VersionConflictEngineException == false) { + } else if (result.getFailure() instanceof VersionConflictEngineException == false) { // not a version conflict exception break; // out of retry loop } } - return new UpdateResultHolder(replicaRequest, updateOperationResult, updateResponse); + return new BulkItemResultHolder(updateResponse, result, replicaRequest); + } + + /** Modes for executing item request on replica depending on corresponding primary execution result */ + public enum ReplicaItemExecutionMode { + + /** + * When primary execution succeeded + */ + NORMAL, + + /** + * When primary execution failed before sequence no was generated + * or primary execution was a noop (only possible when request is originating from pre-6.0 nodes) + */ + NOOP, + + /** + * When primary execution failed after sequence no was generated + */ + FAILURE + } + + /** + * Determines whether a bulk item request should be executed on the replica. + * @return {@link ReplicaItemExecutionMode#NORMAL} upon normal primary execution with no failures + * {@link ReplicaItemExecutionMode#FAILURE} upon primary execution failure after sequence no generation + * {@link ReplicaItemExecutionMode#NOOP} upon primary execution failure before sequence no generation or + * when primary execution resulted in noop (only possible for write requests from pre-6.0 nodes) + */ + static ReplicaItemExecutionMode replicaItemExecutionMode(final BulkItemRequest request, final int index) { + final BulkItemResponse primaryResponse = request.getPrimaryResponse(); + assert primaryResponse != null : "expected primary response to be set for item [" + index + "] request [" + request.request() + "]"; + if (primaryResponse.isFailed()) { + return primaryResponse.getFailure().getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO + ? ReplicaItemExecutionMode.FAILURE // we have a seq no generated with the failure, replicate as no-op + : ReplicaItemExecutionMode.NOOP; // no seq no generated, ignore replication + } else { + // TODO: once we know for sure that every operation that has been processed on the primary is assigned a seq# + // (i.e., all nodes on the cluster are on v6.0.0 or higher) we can use the existence of a seq# to indicate whether + // an operation should be processed or be treated as a noop. 
This means we could remove this method and the + // ReplicaItemExecutionMode enum and have a simple boolean check for seq != UNASSIGNED_SEQ_NO which will work for + // both failures and indexing operations. + return primaryResponse.getResponse().getResult() != DocWriteResponse.Result.NOOP + ? ReplicaItemExecutionMode.NORMAL // execution successful on primary + : ReplicaItemExecutionMode.NOOP; // ignore replication + } } @Override - protected WriteReplicaResult shardOperationOnReplica(BulkShardRequest request, IndexShard replica) throws Exception { + public WriteReplicaResult shardOperationOnReplica(BulkShardRequest request, IndexShard replica) throws Exception { + final Translog.Location location = performOnReplica(request, replica); + return new WriteReplicaResult<>(request, location, null, replica, logger); + } + + public static Translog.Location performOnReplica(BulkShardRequest request, IndexShard replica) throws Exception { Translog.Location location = null; + final long primaryTerm = request.primaryTerm(); for (int i = 0; i < request.items().length; i++) { BulkItemRequest item = request.items()[i]; - if (item.isIgnoreOnReplica() == false) { - DocWriteRequest docWriteRequest = item.request(); - final Engine.Result operationResult; - try { - switch (docWriteRequest.opType()) { - case CREATE: - case INDEX: - operationResult = executeIndexRequestOnReplica((IndexRequest) docWriteRequest, replica); - break; - case DELETE: - operationResult = executeDeleteRequestOnReplica((DeleteRequest) docWriteRequest, replica); - break; - default: - throw new IllegalStateException("Unexpected request operation type on replica: " - + docWriteRequest.opType().getLowercase()); - } - if (operationResult.hasFailure()) { - // check if any transient write operation failures should be bubbled up - Exception failure = operationResult.getFailure(); - assert failure instanceof VersionConflictEngineException - || failure instanceof MapperParsingException - || failure instanceof EngineClosedException - || failure instanceof IndexShardClosedException - : "expected any one of [version conflict, mapper parsing, engine closed, index shard closed]" + - " failures. 
got " + failure; - if (!ignoreReplicaException(failure)) { - throw failure; + final Engine.Result operationResult; + DocWriteRequest docWriteRequest = item.request(); + try { + switch (replicaItemExecutionMode(item, i)) { + case NORMAL: + final DocWriteResponse primaryResponse = item.getPrimaryResponse().getResponse(); + switch (docWriteRequest.opType()) { + case CREATE: + case INDEX: + operationResult = + executeIndexRequestOnReplica(primaryResponse, (IndexRequest) docWriteRequest, primaryTerm, replica); + break; + case DELETE: + operationResult = + executeDeleteRequestOnReplica(primaryResponse, (DeleteRequest) docWriteRequest, primaryTerm, replica); + break; + default: + throw new IllegalStateException("Unexpected request operation type on replica: " + + docWriteRequest.opType().getLowercase()); } - } else { - location = locationToSync(location, operationResult.getTranslogLocation()); - } - } catch (Exception e) { - // if its not an ignore replica failure, we need to make sure to bubble up the failure - // so we will fail the shard - if (!ignoreReplicaException(e)) { - throw e; - } + assert operationResult != null : "operation result must never be null when primary response has no failure"; + location = syncOperationResultOrThrow(operationResult, location); + break; + case NOOP: + break; + case FAILURE: + final BulkItemResponse.Failure failure = item.getPrimaryResponse().getFailure(); + assert failure.getSeqNo() != SequenceNumbersService.UNASSIGNED_SEQ_NO : "seq no must be assigned"; + operationResult = executeFailureNoOpOnReplica(failure, primaryTerm, replica); + assert operationResult != null : "operation result must never be null when primary response has no failure"; + location = syncOperationResultOrThrow(operationResult, location); + break; + default: + throw new IllegalStateException("illegal replica item execution mode for: " + item.request()); + } + } catch (Exception e) { + // if its not an ignore replica failure, we need to make sure to bubble up the failure + // so we will fail the shard + if (!TransportActions.isShardNotAvailableException(e)) { + throw e; } } } - return new WriteReplicaResult(request, location, null, replica); + return location; + } + + /** Syncs operation result to the translog or throws a shard not available failure */ + private static Translog.Location syncOperationResultOrThrow(final Engine.Result operationResult, + final Translog.Location currentLocation) throws Exception { + final Translog.Location location; + if (operationResult.hasFailure()) { + // check if any transient write operation failures should be bubbled up + Exception failure = operationResult.getFailure(); + assert failure instanceof MapperParsingException : "expected mapper parsing failures. got " + failure; + if (!TransportActions.isShardNotAvailableException(failure)) { + throw failure; + } else { + location = currentLocation; + } + } else { + location = locationToSync(currentLocation, operationResult.getTranslogLocation()); + } + return location; } - private Translog.Location locationToSync(Translog.Location current, Translog.Location next) { - /* here we are moving forward in the translog with each operation. Under the hood - * this might cross translog files which is ok since from the user perspective - * the translog is like a tape where only the highest location needs to be fsynced - * in order to sync all previous locations even though they are not in the same file. 
- * When the translog rolls over files the previous file is fsynced on after closing if needed.*/ + private static Translog.Location locationToSync(Translog.Location current, + Translog.Location next) { + /* here we are moving forward in the translog with each operation. Under the hood this might + * cross translog files which is ok since from the user perspective the translog is like a + * tape where only the highest location needs to be fsynced in order to sync all previous + * locations even though they are not in the same file. When the translog rolls over files + * the previous file is fsynced on after closing if needed.*/ assert next != null : "next operation can't be null"; - assert current == null || current.compareTo(next) < 0 : "translog locations are not increasing"; + assert current == null || current.compareTo(next) < 0 : + "translog locations are not increasing"; return next; } + /** + * Execute the given {@link IndexRequest} on a replica shard, throwing a + * {@link RetryOnReplicaException} if the operation needs to be re-tried. + */ + private static Engine.IndexResult executeIndexRequestOnReplica(DocWriteResponse primaryResponse, IndexRequest request, + long primaryTerm, IndexShard replica) throws IOException { + + final Engine.Index operation; + try { + operation = prepareIndexOperationOnReplica(primaryResponse, request, primaryTerm, replica); + } catch (MapperParsingException e) { + return new Engine.IndexResult(e, primaryResponse.getVersion(), primaryResponse.getSeqNo()); + } + + Mapping update = operation.parsedDoc().dynamicMappingsUpdate(); + if (update != null) { + final ShardId shardId = replica.shardId(); + throw new RetryOnReplicaException(shardId, + "Mappings are not available on the replica yet, triggered update: " + update); + } + return replica.index(operation); + } + + /** Utility method to prepare an index operation on replica shards */ + static Engine.Index prepareIndexOperationOnReplica( + DocWriteResponse primaryResponse, + IndexRequest request, + long primaryTerm, + IndexShard replica) { + + final ShardId shardId = replica.shardId(); + final long version = primaryResponse.getVersion(); + final long seqNo = primaryResponse.getSeqNo(); + final SourceToParse sourceToParse = + SourceToParse.source(shardId.getIndexName(), + request.type(), request.id(), request.source(), request.getContentType()) + .routing(request.routing()).parent(request.parent()); + final VersionType versionType = request.versionType().versionTypeForReplicationAndRecovery(); + assert versionType.validateVersionForWrites(version); + + return replica.prepareIndexOnReplica(sourceToParse, seqNo, primaryTerm, version, versionType, + request.getAutoGeneratedTimestamp(), request.isRetry()); + } + + /** Utility method to prepare an index operation on primary shards */ + private static Engine.Index prepareIndexOperationOnPrimary(IndexRequest request, IndexShard primary) { + final SourceToParse sourceToParse = + SourceToParse.source(request.index(), request.type(), + request.id(), request.source(), request.getContentType()) + .routing(request.routing()).parent(request.parent()); + return primary.prepareIndexOnPrimary(sourceToParse, request.version(), request.versionType(), + request.getAutoGeneratedTimestamp(), request.isRetry()); + } + + /** Executes index operation on primary shard after updates mapping if dynamic mappings are found */ + static Engine.IndexResult executeIndexRequestOnPrimary(IndexRequest request, IndexShard primary, + MappingUpdatePerformer mappingUpdater) throws Exception { + // 
Update the mappings if parsing the documents includes new dynamic updates + final Engine.Index preUpdateOperation; + final Mapping mappingUpdate; + final boolean mappingUpdateNeeded; + try { + preUpdateOperation = prepareIndexOperationOnPrimary(request, primary); + mappingUpdate = preUpdateOperation.parsedDoc().dynamicMappingsUpdate(); + mappingUpdateNeeded = mappingUpdate != null; + if (mappingUpdateNeeded) { + mappingUpdater.updateMappings(mappingUpdate, primary.shardId(), request.type()); + } + } catch (MapperParsingException | IllegalArgumentException failure) { + return new Engine.IndexResult(failure, request.version()); + } + + // Verify that there are no more mappings that need to be applied. If there are failures, a + // ReplicationOperation.RetryOnPrimaryException is thrown. + final Engine.Index operation; + if (mappingUpdateNeeded) { + try { + operation = prepareIndexOperationOnPrimary(request, primary); + mappingUpdater.verifyMappings(operation.parsedDoc().dynamicMappingsUpdate(), primary.shardId()); + } catch (MapperParsingException | IllegalStateException e) { + // there was an error in parsing the document that was not because + // of pending mapping updates, so return a failure for the result + return new Engine.IndexResult(e, request.version()); + } + } else { + // There was no mapping update, the operation is the same as the pre-update version. + operation = preUpdateOperation; + } + + return primary.index(operation); + } + + private static Engine.DeleteResult executeDeleteRequestOnPrimary(DeleteRequest request, IndexShard primary, + final MappingUpdatePerformer mappingUpdater) throws Exception { + boolean mappingUpdateNeeded = false; + if (primary.indexSettings().isSingleType()) { + // When there is a single type, the unique identifier is only composed of the _id, + // so there is no way to differenciate foo#1 from bar#1. This is especially an issue + // if a user first deletes foo#1 and then indexes bar#1: since we do not encode the + // _type in the uid it might look like we are reindexing the same document, which + // would fail if bar#1 is indexed with a lower version than foo#1 was deleted with. + // In order to work around this issue, we make deletions create types. This way, we + // fail if index and delete operations do not use the same type. 
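Dynamic mapping updates on the primary are handled in two passes above: parse the document once, push any new mappings to the master, then parse again and retry the operation on the primary if the mappings are still not visible (parse errors simply fail the item). A minimal sketch of that retry-once shape, with hypothetical `parse`/`pushMappings` helpers standing in for the real mapper service and `MappingUpdatePerformer`:

```java
import java.util.HashSet;
import java.util.Set;

public class MappingUpdateSketch {

    /** Pretend cluster-level mapping state: fields the index already knows about. */
    static final Set<String> knownFields = new HashSet<>(Set.of("title"));

    /** Hypothetical parse step: returns the new field the document introduces, or null if none. */
    static String parse(String field) {
        return knownFields.contains(field) ? null : field;
    }

    /** Hypothetical "update mappings on the master" step. */
    static void pushMappings(String newField) {
        knownFields.add(newField);
    }

    static String index(String field) {
        String update = parse(field);          // first pass: detect a dynamic mapping update
        if (update != null) {
            pushMappings(update);              // apply it cluster-wide before indexing
            if (parse(field) != null) {        // second pass: mappings must now be visible
                return "retry on primary";     // otherwise the whole operation is retried
            }
        }
        return "indexed " + field;             // no (remaining) update needed: index the document
    }

    public static void main(String[] args) {
        System.out.println(index("title"));    // known field, indexed directly
        System.out.println(index("author"));   // new field, mapped on the master first, then indexed
    }
}
```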
+ try { + Mapping update = primary.mapperService().documentMapperWithAutoCreate(request.type()).getMapping(); + if (update != null) { + mappingUpdateNeeded = true; + mappingUpdater.updateMappings(update, primary.shardId(), request.type()); + } + } catch (MapperParsingException | IllegalArgumentException e) { + return new Engine.DeleteResult(e, request.version(), SequenceNumbersService.UNASSIGNED_SEQ_NO, false); + } + } + if (mappingUpdateNeeded) { + Mapping update = primary.mapperService().documentMapperWithAutoCreate(request.type()).getMapping(); + mappingUpdater.verifyMappings(update, primary.shardId()); + } + final Engine.Delete delete = primary.prepareDeleteOnPrimary(request.type(), request.id(), request.version(), request.versionType()); + return primary.delete(delete); + } + + private static Engine.DeleteResult executeDeleteRequestOnReplica(DocWriteResponse primaryResponse, DeleteRequest request, + final long primaryTerm, IndexShard replica) throws Exception { + if (replica.indexSettings().isSingleType()) { + // We need to wait for the replica to have the mappings + Mapping update; + try { + update = replica.mapperService().documentMapperWithAutoCreate(request.type()).getMapping(); + } catch (MapperParsingException | IllegalArgumentException e) { + return new Engine.DeleteResult(e, request.version(), primaryResponse.getSeqNo(), false); + } + if (update != null) { + final ShardId shardId = replica.shardId(); + throw new RetryOnReplicaException(shardId, + "Mappings are not available on the replica yet, triggered update: " + update); + } + } + + final VersionType versionType = request.versionType().versionTypeForReplicationAndRecovery(); + final long version = primaryResponse.getVersion(); + assert versionType.validateVersionForWrites(version); + final Engine.Delete delete = replica.prepareDeleteOnReplica(request.type(), request.id(), + primaryResponse.getSeqNo(), primaryTerm, version, versionType); + return replica.delete(delete); + } + + private static Engine.NoOpResult executeFailureNoOpOnReplica(BulkItemResponse.Failure primaryFailure, long primaryTerm, + IndexShard replica) throws IOException { + final Engine.NoOp noOp = replica.prepareMarkingSeqNoAsNoOpOnReplica( + primaryFailure.getSeqNo(), primaryTerm, primaryFailure.getMessage()); + return replica.markSeqNoAsNoOp(noOp); + } + + class ConcreteMappingUpdatePerformer implements MappingUpdatePerformer { + + public void updateMappings(final Mapping update, final ShardId shardId, + final String type) throws Exception { + if (update != null) { + // can throw timeout exception when updating mappings or ISE for attempting to + // update default mappings which are bubbled up + mappingUpdatedAction.updateMappingOnMaster(shardId.getIndex(), type, update); + } + } + + public void verifyMappings(Mapping update, + final ShardId shardId) throws Exception { + if (update != null) { + throw new ReplicationOperation.RetryOnPrimaryException(shardId, + "Dynamic mappings are not available on the node that holds the primary yet"); + } + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/bulk/TransportSingleItemBulkWriteAction.java b/core/src/main/java/org/elasticsearch/action/bulk/TransportSingleItemBulkWriteAction.java new file mode 100644 index 0000000000000..ed17971a77c1d --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/bulk/TransportSingleItemBulkWriteAction.java @@ -0,0 +1,133 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. 
See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.bulk; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.DocWriteRequest; +import org.elasticsearch.action.DocWriteResponse; +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.WriteRequest; +import org.elasticsearch.action.support.WriteResponse; +import org.elasticsearch.action.support.replication.ReplicatedWriteRequest; +import org.elasticsearch.action.support.replication.ReplicationResponse; +import org.elasticsearch.action.support.replication.TransportWriteAction; +import org.elasticsearch.cluster.action.shard.ShardStateAction; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.tasks.Task; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; + +import java.util.function.Supplier; + +/** use transport bulk action directly */ +@Deprecated +public abstract class TransportSingleItemBulkWriteAction< + Request extends ReplicatedWriteRequest, + Response extends ReplicationResponse & WriteResponse + > extends TransportWriteAction { + + private final TransportBulkAction bulkAction; + private final TransportShardBulkAction shardBulkAction; + + + protected TransportSingleItemBulkWriteAction(Settings settings, String actionName, TransportService transportService, + ClusterService clusterService, IndicesService indicesService, ThreadPool threadPool, + ShardStateAction shardStateAction, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver, Supplier request, + Supplier replicaRequest, String executor, + TransportBulkAction bulkAction, TransportShardBulkAction shardBulkAction) { + super(settings, actionName, transportService, clusterService, indicesService, threadPool, shardStateAction, actionFilters, + indexNameExpressionResolver, request, replicaRequest, executor); + this.bulkAction = bulkAction; + this.shardBulkAction = shardBulkAction; + } + + + @Override + protected void doExecute(Task task, final Request request, final ActionListener listener) { + bulkAction.execute(task, toSingleItemBulkRequest(request), wrapBulkResponse(listener)); + } + + @Override + protected WritePrimaryResult shardOperationOnPrimary( + Request request, final IndexShard primary) throws Exception { + BulkItemRequest[] itemRequests = new BulkItemRequest[1]; + WriteRequest.RefreshPolicy refreshPolicy = request.getRefreshPolicy(); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.NONE); + itemRequests[0] = new BulkItemRequest(0, ((DocWriteRequest) request)); + BulkShardRequest 
bulkShardRequest = new BulkShardRequest(request.shardId(), refreshPolicy, itemRequests); + WritePrimaryResult bulkResult = + shardBulkAction.shardOperationOnPrimary(bulkShardRequest, primary); + assert bulkResult.finalResponseIfSuccessful.getResponses().length == 1 : "expected only one bulk shard response"; + BulkItemResponse itemResponse = bulkResult.finalResponseIfSuccessful.getResponses()[0]; + final Response response; + final Exception failure; + if (itemResponse.isFailed()) { + failure = itemResponse.getFailure().getCause(); + response = null; + } else { + response = (Response) itemResponse.getResponse(); + failure = null; + } + return new WritePrimaryResult<>(request, response, bulkResult.location, failure, primary, logger); + } + + @Override + protected WriteReplicaResult shardOperationOnReplica( + Request replicaRequest, IndexShard replica) throws Exception { + BulkItemRequest[] itemRequests = new BulkItemRequest[1]; + WriteRequest.RefreshPolicy refreshPolicy = replicaRequest.getRefreshPolicy(); + itemRequests[0] = new BulkItemRequest(0, ((DocWriteRequest) replicaRequest)); + BulkShardRequest bulkShardRequest = new BulkShardRequest(replicaRequest.shardId(), refreshPolicy, itemRequests); + WriteReplicaResult result = shardBulkAction.shardOperationOnReplica(bulkShardRequest, replica); + // a replica operation can never throw a document-level failure, + // as the same document has been already indexed successfully in the primary + return new WriteReplicaResult<>(replicaRequest, result.location, null, replica, logger); + } + + + public static + ActionListener wrapBulkResponse(ActionListener listener) { + return ActionListener.wrap(bulkItemResponses -> { + assert bulkItemResponses.getItems().length == 1 : "expected only one item in bulk request"; + BulkItemResponse bulkItemResponse = bulkItemResponses.getItems()[0]; + if (bulkItemResponse.isFailed() == false) { + final DocWriteResponse response = bulkItemResponse.getResponse(); + listener.onResponse((Response) response); + } else { + listener.onFailure(bulkItemResponse.getFailure().getCause()); + } + }, listener::onFailure); + } + + public static BulkRequest toSingleItemBulkRequest(ReplicatedWriteRequest request) { + BulkRequest bulkRequest = new BulkRequest(); + bulkRequest.add(((DocWriteRequest) request)); + bulkRequest.setRefreshPolicy(request.getRefreshPolicy()); + bulkRequest.timeout(request.timeout()); + bulkRequest.waitForActiveShards(request.waitForActiveShards()); + request.setRefreshPolicy(WriteRequest.RefreshPolicy.NONE); + return bulkRequest; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java b/core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java index 280324227cc80..776117794bade 100644 --- a/core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java +++ b/core/src/main/java/org/elasticsearch/action/delete/DeleteRequest.java @@ -20,6 +20,7 @@ package org.elasticsearch.action.delete; import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.CompositeIndicesRequest; import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.support.replication.ReplicatedWriteRequest; import org.elasticsearch.common.Nullable; @@ -27,6 +28,7 @@ import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.uid.Versions; import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.shard.ShardId; import java.io.IOException; @@ -43,7 +45,7 @@ * @see 
org.elasticsearch.client.Client#delete(DeleteRequest) * @see org.elasticsearch.client.Requests#deleteRequest(String) */ -public class DeleteRequest extends ReplicatedWriteRequest implements DocWriteRequest { +public class DeleteRequest extends ReplicatedWriteRequest implements DocWriteRequest, CompositeIndicesRequest { private String type; private String id; @@ -220,4 +222,34 @@ public void writeTo(StreamOutput out) throws IOException { public String toString() { return "delete {[" + index + "][" + type + "][" + id + "]}"; } + + /** + * Override this method from ReplicationAction, this is where we are storing our state in the request object (which we really shouldn't + * do). Once the transport client goes away we can move away from making this available, but in the meantime this is dangerous to set or + * use because the DeleteRequest object will always be wrapped in a bulk request envelope, which is where this *should* be set. + */ + @Override + public long primaryTerm() { + throw new UnsupportedOperationException("primary term should never be set on DeleteRequest"); + } + + /** + * Override this method from ReplicationAction, this is where we are storing our state in the request object (which we really shouldn't + * do). Once the transport client goes away we can move away from making this available, but in the meantime this is dangerous to set or + * use because the DeleteRequest object will always be wrapped in a bulk request envelope, which is where this *should* be set. + */ + @Override + public void primaryTerm(long term) { + throw new UnsupportedOperationException("primary term should never be set on DeleteRequest"); + } + + /** + * Override this method from ReplicationAction, this is where we are storing our state in the request object (which we really shouldn't + * do). Once the transport client goes away we can move away from making this available, but in the meantime this is dangerous to set or + * use because the DeleteRequest object will always be wrapped in a bulk request envelope, which is where this *should* be set. + */ + @Override + public DeleteRequest setShardId(ShardId shardId) { + throw new UnsupportedOperationException("shard id should never be set on DeleteRequest"); + } } diff --git a/core/src/main/java/org/elasticsearch/action/delete/DeleteResponse.java b/core/src/main/java/org/elasticsearch/action/delete/DeleteResponse.java index 0f4eb897d8334..1e42537395f7b 100644 --- a/core/src/main/java/org/elasticsearch/action/delete/DeleteResponse.java +++ b/core/src/main/java/org/elasticsearch/action/delete/DeleteResponse.java @@ -21,11 +21,14 @@ import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * The response of the delete action. * @@ -34,12 +37,13 @@ */ public class DeleteResponse extends DocWriteResponse { - public DeleteResponse() { + private static final String FOUND = "found"; + public DeleteResponse() { } - public DeleteResponse(ShardId shardId, String type, String id, long seqNo, long version, boolean found) { - super(shardId, type, id, seqNo, version, found ? 
Result.DELETED : Result.NOT_FOUND); + public DeleteResponse(ShardId shardId, String type, String id, long seqNo, long primaryTerm, long version, boolean found) { + super(shardId, type, id, seqNo, primaryTerm, version, found ? Result.DELETED : Result.NOT_FOUND); } @Override @@ -47,13 +51,6 @@ public RestStatus status() { return result == Result.DELETED ? super.status() : RestStatus.NOT_FOUND; } - @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.field("found", result == Result.DELETED); - super.toXContent(builder, params); - return builder; - } - @Override public String toString() { StringBuilder builder = new StringBuilder(); @@ -66,4 +63,61 @@ public String toString() { builder.append(",shards=").append(getShardInfo()); return builder.append("]").toString(); } + + @Override + public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { + builder.field(FOUND, result == Result.DELETED); + super.innerToXContent(builder, params); + return builder; + } + + public static DeleteResponse fromXContent(XContentParser parser) throws IOException { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + + Builder context = new Builder(); + while (parser.nextToken() != XContentParser.Token.END_OBJECT) { + parseXContentFields(parser, context); + } + return context.build(); + } + + /** + * Parse the current token and update the parsing context appropriately. + */ + public static void parseXContentFields(XContentParser parser, Builder context) throws IOException { + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + + if (FOUND.equals(currentFieldName)) { + if (token.isValue()) { + context.setFound(parser.booleanValue()); + } + } else { + DocWriteResponse.parseInnerToXContent(parser, context); + } + } + + /** + * Builder class for {@link DeleteResponse}. This builder is usually used during xcontent parsing to + * temporarily store the parsed values, then the {@link DocWriteResponse.Builder#build()} method is called to + * instantiate the {@link DeleteResponse}. 
+ */ + public static class Builder extends DocWriteResponse.Builder { + + private boolean found = false; + + public void setFound(boolean found) { + this.found = found; + } + + @Override + public DeleteResponse build() { + DeleteResponse deleteResponse = new DeleteResponse(shardId, type, id, seqNo, primaryTerm, version, found); + deleteResponse.setForcedRefresh(forcedRefresh); + if (shardInfo != null) { + deleteResponse.setShardInfo(shardInfo); + } + return deleteResponse; + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java b/core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java index 5601d54ea4740..3aaf4a472facf 100644 --- a/core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java +++ b/core/src/main/java/org/elasticsearch/action/delete/TransportDeleteAction.java @@ -19,150 +19,39 @@ package org.elasticsearch.action.delete; -import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.ResourceAlreadyExistsException; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.RoutingMissingException; -import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; -import org.elasticsearch.action.admin.indices.create.CreateIndexResponse; -import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction; +import org.elasticsearch.action.bulk.TransportBulkAction; +import org.elasticsearch.action.bulk.TransportShardBulkAction; +import org.elasticsearch.action.bulk.TransportSingleItemBulkWriteAction; import org.elasticsearch.action.support.ActionFilters; -import org.elasticsearch.action.support.AutoCreateIndex; -import org.elasticsearch.action.support.replication.TransportWriteAction; -import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.action.shard.ShardStateAction; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; /** * Performs the delete operation. 
+ * + * Deprecated use TransportBulkAction with a single item instead */ -public class TransportDeleteAction extends TransportWriteAction { - - private final AutoCreateIndex autoCreateIndex; - private final TransportCreateIndexAction createIndexAction; +@Deprecated +public class TransportDeleteAction extends TransportSingleItemBulkWriteAction { @Inject public TransportDeleteAction(Settings settings, TransportService transportService, ClusterService clusterService, IndicesService indicesService, ThreadPool threadPool, ShardStateAction shardStateAction, - TransportCreateIndexAction createIndexAction, ActionFilters actionFilters, - IndexNameExpressionResolver indexNameExpressionResolver, - AutoCreateIndex autoCreateIndex) { - super(settings, DeleteAction.NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, actionFilters, - indexNameExpressionResolver, DeleteRequest::new, DeleteRequest::new, ThreadPool.Names.INDEX); - this.createIndexAction = createIndexAction; - this.autoCreateIndex = autoCreateIndex; - } - - @Override - protected void doExecute(Task task, final DeleteRequest request, final ActionListener listener) { - ClusterState state = clusterService.state(); - if (autoCreateIndex.shouldAutoCreate(request.index(), state)) { - CreateIndexRequest createIndexRequest = new CreateIndexRequest() - .index(request.index()) - .cause("auto(delete api)") - .masterNodeTimeout(request.timeout()); - createIndexAction.execute(task, createIndexRequest, new ActionListener() { - @Override - public void onResponse(CreateIndexResponse result) { - innerExecute(task, request, listener); - } - - @Override - public void onFailure(Exception e) { - if (ExceptionsHelper.unwrapCause(e) instanceof ResourceAlreadyExistsException) { - // we have the index, do it - innerExecute(task, request, listener); - } else { - listener.onFailure(e); - } - } - }); - } else { - innerExecute(task, request, listener); - } - } - - @Override - protected void resolveRequest(final MetaData metaData, IndexMetaData indexMetaData, DeleteRequest request) { - super.resolveRequest(metaData, indexMetaData, request); - resolveAndValidateRouting(metaData, indexMetaData.getIndex().getName(), request); - ShardId shardId = clusterService.operationRouting().shardId(clusterService.state(), - indexMetaData.getIndex().getName(), request.id(), request.routing()); - request.setShardId(shardId); - } - - public static void resolveAndValidateRouting(final MetaData metaData, final String concreteIndex, - DeleteRequest request) { - request.routing(metaData.resolveIndexRouting(request.parent(), request.routing(), request.index())); - // check if routing is required, if so, throw error if routing wasn't specified - if (request.routing() == null && metaData.routingRequired(concreteIndex, request.type())) { - throw new RoutingMissingException(concreteIndex, request.type(), request.id()); - } - } - - private void innerExecute(Task task, final DeleteRequest request, final ActionListener listener) { - super.doExecute(task, request, listener); + ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, + TransportBulkAction bulkAction, TransportShardBulkAction shardBulkAction) { + super(settings, DeleteAction.NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, + actionFilters, indexNameExpressionResolver, DeleteRequest::new, DeleteRequest::new, ThreadPool.Names.INDEX, + bulkAction, shardBulkAction); } @Override protected DeleteResponse newResponseInstance() { return new 
DeleteResponse(); } - - @Override - protected WritePrimaryResult shardOperationOnPrimary(DeleteRequest request, IndexShard primary) throws Exception { - final Engine.DeleteResult result = executeDeleteRequestOnPrimary(request, primary); - final DeleteResponse response; - final DeleteRequest replicaRequest; - if (result.hasFailure() == false) { - // update the request with the version so it will go to the replicas - request.versionType(request.versionType().versionTypeForReplicationAndRecovery()); - request.version(result.getVersion()); - request.setSeqNo(result.getSeqNo()); - assert request.versionType().validateVersionForWrites(request.version()); - replicaRequest = request; - response = new DeleteResponse( - primary.shardId(), - request.type(), - request.id(), - result.getSeqNo(), - result.getVersion(), - result.isFound()); - } else { - response = null; - replicaRequest = null; - } - return new WritePrimaryResult(replicaRequest, response, result.getTranslogLocation(), result.getFailure(), primary); - } - - @Override - protected WriteReplicaResult shardOperationOnReplica(DeleteRequest request, IndexShard replica) throws Exception { - final Engine.DeleteResult result = executeDeleteRequestOnReplica(request, replica); - return new WriteReplicaResult(request, result.getTranslogLocation(), result.getFailure(), replica); - } - - - public static Engine.DeleteResult executeDeleteRequestOnPrimary(DeleteRequest request, IndexShard primary) { - final Engine.Delete delete = primary.prepareDeleteOnPrimary(request.type(), request.id(), request.version(), request.versionType()); - return primary.delete(delete); - } - - public static Engine.DeleteResult executeDeleteRequestOnReplica(DeleteRequest request, IndexShard replica) { - final Engine.Delete delete = replica.prepareDeleteOnReplica(request.type(), request.id(), - request.getSeqNo(), request.primaryTerm(), request.version(), request.versionType()); - return replica.delete(delete); - } - } diff --git a/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java b/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java index 65176c1df392c..72aaeb9eb371a 100644 --- a/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java +++ b/core/src/main/java/org/elasticsearch/action/explain/TransportExplainAction.java @@ -35,8 +35,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.engine.Engine; import org.elasticsearch.index.get.GetResult; -import org.elasticsearch.index.mapper.Uid; -import org.elasticsearch.index.mapper.UidFieldMapper; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.search.SearchService; import org.elasticsearch.search.internal.AliasFilter; @@ -93,10 +91,13 @@ protected ExplainResponse shardOperation(ExplainRequest request, ShardId shardId ShardSearchLocalRequest shardSearchLocalRequest = new ShardSearchLocalRequest(shardId, new String[]{request.type()}, request.nowInMillis, request.filteringAlias()); SearchContext context = searchService.createSearchContext(shardSearchLocalRequest, SearchService.NO_TIMEOUT, null); - Term uidTerm = new Term(UidFieldMapper.NAME, Uid.createUidAsBytes(request.type(), request.id())); Engine.GetResult result = null; try { - result = context.indexShard().get(new Engine.Get(false, uidTerm)); + Term uidTerm = context.mapperService().createUidTerm(request.type(), request.id()); + if (uidTerm == null) { + return new ExplainResponse(shardId.getIndexName(), request.type(), request.id(), false); + } 
+ result = context.indexShard().get(new Engine.Get(false, request.type(), request.id(), uidTerm)); if (!result.exists()) { return new ExplainResponse(shardId.getIndexName(), request.type(), request.id(), false); } diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilities.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilities.java new file mode 100644 index 0000000000000..ef7513f38abc2 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilities.java @@ -0,0 +1,282 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Collections; +import java.util.Arrays; +import java.util.List; +import java.util.ArrayList; +import java.util.Comparator; + +/** + * Describes the capabilities of a field optionally merged across multiple indices. + */ +public class FieldCapabilities implements Writeable, ToXContent { + private final String name; + private final String type; + private final boolean isSearchable; + private final boolean isAggregatable; + + private final String[] indices; + private final String[] nonSearchableIndices; + private final String[] nonAggregatableIndices; + + /** + * Constructor + * @param name The name of the field. + * @param type The type associated with the field. + * @param isSearchable Whether this field is indexed for search. + * @param isAggregatable Whether this field can be aggregated on. + */ + FieldCapabilities(String name, String type, boolean isSearchable, boolean isAggregatable) { + this(name, type, isSearchable, isAggregatable, null, null, null); + } + + /** + * Constructor + * @param name The name of the field + * @param type The type associated with the field. + * @param isSearchable Whether this field is indexed for search. + * @param isAggregatable Whether this field can be aggregated on. + * @param indices The list of indices where this field name is defined as {@code type}, + * or null if all indices have the same {@code type} for the field. + * @param nonSearchableIndices The list of indices where this field is not searchable, + * or null if the field is searchable in all indices. + * @param nonAggregatableIndices The list of indices where this field is not aggregatable, + * or null if the field is aggregatable in all indices. 
+ */ + FieldCapabilities(String name, String type, + boolean isSearchable, boolean isAggregatable, + String[] indices, + String[] nonSearchableIndices, + String[] nonAggregatableIndices) { + this.name = name; + this.type = type; + this.isSearchable = isSearchable; + this.isAggregatable = isAggregatable; + this.indices = indices; + this.nonSearchableIndices = nonSearchableIndices; + this.nonAggregatableIndices = nonAggregatableIndices; + } + + FieldCapabilities(StreamInput in) throws IOException { + this.name = in.readString(); + this.type = in.readString(); + this.isSearchable = in.readBoolean(); + this.isAggregatable = in.readBoolean(); + this.indices = in.readOptionalStringArray(); + this.nonSearchableIndices = in.readOptionalStringArray(); + this.nonAggregatableIndices = in.readOptionalStringArray(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(name); + out.writeString(type); + out.writeBoolean(isSearchable); + out.writeBoolean(isAggregatable); + out.writeOptionalStringArray(indices); + out.writeOptionalStringArray(nonSearchableIndices); + out.writeOptionalStringArray(nonAggregatableIndices); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field("type", type); + builder.field("searchable", isSearchable); + builder.field("aggregatable", isAggregatable); + if (indices != null) { + builder.field("indices", indices); + } + if (nonSearchableIndices != null) { + builder.field("non_searchable_indices", nonSearchableIndices); + } + if (nonAggregatableIndices != null) { + builder.field("non_aggregatable_indices", nonAggregatableIndices); + } + builder.endObject(); + return builder; + } + + /** + * The name of the field. + */ + public String getName() { + return name; + } + + /** + * Whether this field can be aggregated on all indices. + */ + public boolean isAggregatable() { + return isAggregatable; + } + + /** + * Whether this field is indexed for search on all indices. + */ + public boolean isSearchable() { + return isSearchable; + } + + /** + * The type of the field. + */ + public String getType() { + return type; + } + + /** + * The list of indices where this field name is defined as {@code type}, + * or null if all indices have the same {@code type} for the field. + */ + public String[] indices() { + return indices; + } + + /** + * The list of indices where this field is not searchable, + * or null if the field is searchable in all indices. + */ + public String[] nonSearchableIndices() { + return nonSearchableIndices; + } + + /** + * The list of indices where this field is not aggregatable, + * or null if the field is aggregatable in all indices. 
+ */ + public String[] nonAggregatableIndices() { + return nonAggregatableIndices; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + FieldCapabilities that = (FieldCapabilities) o; + + if (isSearchable != that.isSearchable) return false; + if (isAggregatable != that.isAggregatable) return false; + if (!name.equals(that.name)) return false; + if (!type.equals(that.type)) return false; + if (!Arrays.equals(indices, that.indices)) return false; + if (!Arrays.equals(nonSearchableIndices, that.nonSearchableIndices)) return false; + return Arrays.equals(nonAggregatableIndices, that.nonAggregatableIndices); + } + + @Override + public int hashCode() { + int result = name.hashCode(); + result = 31 * result + type.hashCode(); + result = 31 * result + (isSearchable ? 1 : 0); + result = 31 * result + (isAggregatable ? 1 : 0); + result = 31 * result + Arrays.hashCode(indices); + result = 31 * result + Arrays.hashCode(nonSearchableIndices); + result = 31 * result + Arrays.hashCode(nonAggregatableIndices); + return result; + } + + static class Builder { + private String name; + private String type; + private boolean isSearchable; + private boolean isAggregatable; + private List indiceList; + + Builder(String name, String type) { + this.name = name; + this.type = type; + this.isSearchable = true; + this.isAggregatable = true; + this.indiceList = new ArrayList<>(); + } + + void add(String index, boolean search, boolean agg) { + IndexCaps indexCaps = new IndexCaps(index, search, agg); + indiceList.add(indexCaps); + this.isSearchable &= search; + this.isAggregatable &= agg; + } + + FieldCapabilities build(boolean withIndices) { + final String[] indices; + /* Eclipse can't deal with o -> o.name, maybe because of + * https://bugs.eclipse.org/bugs/show_bug.cgi?id=511750 */ + Collections.sort(indiceList, Comparator.comparing((IndexCaps o) -> o.name)); + if (withIndices) { + indices = indiceList.stream() + .map(caps -> caps.name) + .toArray(String[]::new); + } else { + indices = null; + } + + final String[] nonSearchableIndices; + if (isSearchable == false && + indiceList.stream().anyMatch((caps) -> caps.isSearchable)) { + // Iff this field is searchable in some indices AND non-searchable in others + // we record the list of non-searchable indices + nonSearchableIndices = indiceList.stream() + .filter((caps) -> caps.isSearchable == false) + .map(caps -> caps.name) + .toArray(String[]::new); + } else { + nonSearchableIndices = null; + } + + final String[] nonAggregatableIndices; + if (isAggregatable == false && + indiceList.stream().anyMatch((caps) -> caps.isAggregatable)) { + // Iff this field is aggregatable in some indices AND non-aggregatable in others + // we record the list of non-aggregatable indices + nonAggregatableIndices = indiceList.stream() + .filter((caps) -> caps.isAggregatable == false) + .map(caps -> caps.name) + .toArray(String[]::new); + } else { + nonAggregatableIndices = null; + } + return new FieldCapabilities(name, type, isSearchable, isAggregatable, + indices, nonSearchableIndices, nonAggregatableIndices); + } + } + + private static class IndexCaps { + final String name; + final boolean isSearchable; + final boolean isAggregatable; + + IndexCaps(String name, boolean isSearchable, boolean isAggregatable) { + this.name = name; + this.isSearchable = isSearchable; + this.isAggregatable = isAggregatable; + } + } +}
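For context, a minimal usage sketch of the field capabilities API introduced by these classes, not part of this change set: the index name (`logs`), the field name (`user.id`), and the `client` instance are placeholders, and only the public types and methods added above (the request builder, the action instance, and the response accessors) are used.

```java
import java.util.Map;

import org.elasticsearch.action.fieldcaps.FieldCapabilities;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesAction;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesRequestBuilder;
import org.elasticsearch.action.fieldcaps.FieldCapabilitiesResponse;
import org.elasticsearch.client.Client;

public class FieldCapsUsageSketch {

    /** Prints the merged capabilities of one field. The index name, field name and {@code client} are placeholders. */
    static void printFieldCaps(Client client) {
        FieldCapabilitiesResponse response =
                new FieldCapabilitiesRequestBuilder(client, FieldCapabilitiesAction.INSTANCE, "logs")
                        .setFields("user.id")
                        .get();
        // The response maps a field name to one FieldCapabilities entry per mapped type across the requested indices.
        Map<String, FieldCapabilities> capsPerType = response.getField("user.id");
        if (capsPerType == null) {
            System.out.println("field is not mapped in any requested index");
            return;
        }
        for (Map.Entry<String, FieldCapabilities> entry : capsPerType.entrySet()) {
            FieldCapabilities caps = entry.getValue();
            System.out.println(entry.getKey()
                    + " searchable=" + caps.isSearchable()
                    + " aggregatable=" + caps.isAggregatable());
        }
    }
}
```

diff --git 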
a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesAction.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesAction.java new file mode 100644 index 0000000000000..93d67f3fc3cc4 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesAction.java @@ -0,0 +1,44 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.Action; +import org.elasticsearch.client.ElasticsearchClient; + +public class FieldCapabilitiesAction extends Action { + + public static final FieldCapabilitiesAction INSTANCE = new FieldCapabilitiesAction(); + public static final String NAME = "indices:data/read/field_caps"; + + private FieldCapabilitiesAction() { + super(NAME); + } + + @Override + public FieldCapabilitiesResponse newResponse() { + return new FieldCapabilitiesResponse(); + } + + @Override + public FieldCapabilitiesRequestBuilder newRequestBuilder(ElasticsearchClient client) { + return new FieldCapabilitiesRequestBuilder(client, this); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexRequest.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexRequest.java new file mode 100644 index 0000000000000..460a21ae866aa --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexRequest.java @@ -0,0 +1,65 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.support.single.shard.SingleShardRequest; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; + +import java.io.IOException; + +public class FieldCapabilitiesIndexRequest + extends SingleShardRequest { + + private String[] fields; + + // For serialization + FieldCapabilitiesIndexRequest() {} + + FieldCapabilitiesIndexRequest(String[] fields, String index) { + super(index); + if (fields == null || fields.length == 0) { + throw new IllegalArgumentException("specified fields can't be null or empty"); + } + this.fields = fields; + } + + public String[] fields() { + return fields; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + fields = in.readStringArray(); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeStringArray(fields); + } + + @Override + public ActionRequestValidationException validate() { + return null; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexResponse.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexResponse.java new file mode 100644 index 0000000000000..1e4686245165b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesIndexResponse.java @@ -0,0 +1,102 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.io.stream.Writeable; + +import java.io.IOException; +import java.util.Map; + +/** + * Response for {@link FieldCapabilitiesIndexRequest} requests. 
+ */ +public class FieldCapabilitiesIndexResponse extends ActionResponse implements Writeable { + private String indexName; + private Map responseMap; + + FieldCapabilitiesIndexResponse(String indexName, Map responseMap) { + this.indexName = indexName; + this.responseMap = responseMap; + } + + FieldCapabilitiesIndexResponse() { + } + + FieldCapabilitiesIndexResponse(StreamInput input) throws IOException { + this.readFrom(input); + } + + + /** + * Get the index name + */ + public String getIndexName() { + return indexName; + } + + /** + * Get the field capabilities map + */ + public Map get() { + return responseMap; + } + + /** + * + * Get the field capabilities for the provided {@code field} + */ + public FieldCapabilities getField(String field) { + return responseMap.get(field); + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + this.indexName = in.readString(); + this.responseMap = + in.readMap(StreamInput::readString, FieldCapabilities::new); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeString(indexName); + out.writeMap(responseMap, + StreamOutput::writeString, (valueOut, fc) -> fc.writeTo(valueOut)); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + FieldCapabilitiesIndexResponse that = (FieldCapabilitiesIndexResponse) o; + + return responseMap.equals(that.responseMap); + } + + @Override + public int hashCode() { + return responseMap.hashCode(); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequest.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequest.java new file mode 100644 index 0000000000000..b04f882076326 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequest.java @@ -0,0 +1,174 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionRequest; +import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.IndicesRequest; +import org.elasticsearch.action.ValidateActions; +import org.elasticsearch.action.support.IndicesOptions; +import org.elasticsearch.common.ParseField; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.XContentParser; + +import java.io.IOException; +import java.util.Arrays; +import java.util.HashSet; +import java.util.Objects; +import java.util.Set; + +import static org.elasticsearch.common.xcontent.ObjectParser.fromList; + +public final class FieldCapabilitiesRequest extends ActionRequest implements IndicesRequest.Replaceable { + public static final ParseField FIELDS_FIELD = new ParseField("fields"); + public static final String NAME = "field_caps_request"; + private String[] indices = Strings.EMPTY_ARRAY; + private IndicesOptions indicesOptions = IndicesOptions.strictExpandOpen(); + private String[] fields = Strings.EMPTY_ARRAY; + // pkg private API mainly for cross cluster search to signal that we do multiple reductions ie. the results should not be merged + private boolean mergeResults = true; + + private static ObjectParser PARSER = + new ObjectParser<>(NAME, FieldCapabilitiesRequest::new); + + static { + PARSER.declareStringArray(fromList(String.class, FieldCapabilitiesRequest::fields), + FIELDS_FIELD); + } + + public FieldCapabilitiesRequest() {} + + /** + * Returns true iff the results should be merged. + */ + boolean isMergeResults() { + return mergeResults; + } + + /** + * if set to true the response will contain only a merged view of the per index field capabilities. Otherwise only + * unmerged per index field capabilities are returned. + */ + void setMergeResults(boolean mergeResults) { + this.mergeResults = mergeResults; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + fields = in.readStringArray(); + if (in.getVersion().onOrAfter(Version.V_5_5_0)) { + indices = in.readStringArray(); + indicesOptions = IndicesOptions.readIndicesOptions(in); + mergeResults = in.readBoolean(); + } else { + mergeResults = true; + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeStringArray(fields); + if (out.getVersion().onOrAfter(Version.V_5_5_0)) { + out.writeStringArray(indices); + indicesOptions.writeIndicesOptions(out); + out.writeBoolean(mergeResults); + } + } + + public static FieldCapabilitiesRequest parseFields(XContentParser parser) throws IOException { + return PARSER.parse(parser, null); + } + + /** + * The list of field names to retrieve + */ + public FieldCapabilitiesRequest fields(String... fields) { + if (fields == null || fields.length == 0) { + throw new IllegalArgumentException("specified fields can't be null or empty"); + } + Set fieldSet = new HashSet<>(Arrays.asList(fields)); + this.fields = fieldSet.toArray(new String[0]); + return this; + } + + public String[] fields() { + return fields; + } + + /** + * + * The list of indices to lookup + */ + public FieldCapabilitiesRequest indices(String... 
indices) { + this.indices = Objects.requireNonNull(indices, "indices must not be null"); + return this; + } + + public FieldCapabilitiesRequest indicesOptions(IndicesOptions indicesOptions) { + this.indicesOptions = Objects.requireNonNull(indicesOptions, "indices options must not be null"); + return this; + } + + @Override + public String[] indices() { + return indices; + } + + @Override + public IndicesOptions indicesOptions() { + return indicesOptions; + } + + @Override + public ActionRequestValidationException validate() { + ActionRequestValidationException validationException = null; + if (fields == null || fields.length == 0) { + validationException = + ValidateActions.addValidationError("no fields specified", validationException); + } + return validationException; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + FieldCapabilitiesRequest that = (FieldCapabilitiesRequest) o; + + if (!Arrays.equals(indices, that.indices)) return false; + if (!indicesOptions.equals(that.indicesOptions)) return false; + return Arrays.equals(fields, that.fields); + } + + @Override + public int hashCode() { + int result = Arrays.hashCode(indices); + result = 31 * result + indicesOptions.hashCode(); + result = 31 * result + Arrays.hashCode(fields); + return result; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequestBuilder.java new file mode 100644 index 0000000000000..742d5b3ee3297 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesRequestBuilder.java @@ -0,0 +1,41 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.ActionRequestBuilder; +import org.elasticsearch.client.ElasticsearchClient; + +public class FieldCapabilitiesRequestBuilder extends + ActionRequestBuilder { + public FieldCapabilitiesRequestBuilder(ElasticsearchClient client, + FieldCapabilitiesAction action, + String... indices) { + super(client, action, new FieldCapabilitiesRequest().indices(indices)); + } + + /** + * The list of field names to retrieve. + */ + public FieldCapabilitiesRequestBuilder setFields(String... 
fields) { + request().fields(fields); + return this; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesResponse.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesResponse.java new file mode 100644 index 0000000000000..ae5db5835670a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/FieldCapabilitiesResponse.java @@ -0,0 +1,135 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.Version; +import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.common.io.stream.StreamInput; +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; +import java.util.Map; + +/** + * Response for {@link FieldCapabilitiesRequest} requests. + */ +public class FieldCapabilitiesResponse extends ActionResponse implements ToXContent { + private Map> responseMap; + private List indexResponses; + + FieldCapabilitiesResponse(Map> responseMap) { + this(responseMap, Collections.emptyList()); + } + + FieldCapabilitiesResponse(List indexResponses) { + this(Collections.emptyMap(), indexResponses); + } + + private FieldCapabilitiesResponse(Map> responseMap, + List indexResponses) { + this.responseMap = responseMap; + this.indexResponses = indexResponses; + } + + /** + * Used for serialization + */ + FieldCapabilitiesResponse() { + this.responseMap = Collections.emptyMap(); + } + + /** + * Get the field capabilities map. + */ + public Map> get() { + return responseMap; + } + + + /** + * Returns the actual per-index field caps responses + */ + List getIndexResponses() { + return indexResponses; + } + /** + * + * Get the field capabilities per type for the provided {@code field}. 
+ */ + public Map getField(String field) { + return responseMap.get(field); + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + this.responseMap = + in.readMap(StreamInput::readString, FieldCapabilitiesResponse::readField); + if (in.getVersion().onOrAfter(Version.V_5_5_0)) { + indexResponses = in.readList(FieldCapabilitiesIndexResponse::new); + } else { + indexResponses = Collections.emptyList(); + } + } + + private static Map readField(StreamInput in) throws IOException { + return in.readMap(StreamInput::readString, FieldCapabilities::new); + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + out.writeMap(responseMap, StreamOutput::writeString, FieldCapabilitiesResponse::writeField); + if (out.getVersion().onOrAfter(Version.V_5_5_0)) { + out.writeList(indexResponses); + } + + } + + private static void writeField(StreamOutput out, + Map map) throws IOException { + out.writeMap(map, StreamOutput::writeString, (valueOut, fc) -> fc.writeTo(valueOut)); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.field("fields", responseMap); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + + FieldCapabilitiesResponse that = (FieldCapabilitiesResponse) o; + + return responseMap.equals(that.responseMap); + } + + @Override + public int hashCode() { + return responseMap.hashCode(); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesAction.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesAction.java new file mode 100644 index 0000000000000..3f0fb77781bdd --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesAction.java @@ -0,0 +1,197 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.HandledTransportAction; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.Strings; +import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.RemoteClusterAware; +import org.elasticsearch.transport.RemoteClusterService; +import org.elasticsearch.transport.Transport; +import org.elasticsearch.transport.TransportException; +import org.elasticsearch.transport.TransportRequestOptions; +import org.elasticsearch.transport.TransportResponseHandler; +import org.elasticsearch.transport.TransportService; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +public class TransportFieldCapabilitiesAction extends HandledTransportAction { + private final ClusterService clusterService; + private final TransportFieldCapabilitiesIndexAction shardAction; + private final RemoteClusterService remoteClusterService; + private final TransportService transportService; + + @Inject + public TransportFieldCapabilitiesAction(Settings settings, TransportService transportService, + ClusterService clusterService, ThreadPool threadPool, + TransportFieldCapabilitiesIndexAction shardAction, + ActionFilters actionFilters, + IndexNameExpressionResolver + indexNameExpressionResolver) { + super(settings, FieldCapabilitiesAction.NAME, threadPool, transportService, + actionFilters, indexNameExpressionResolver, FieldCapabilitiesRequest::new); + this.clusterService = clusterService; + this.remoteClusterService = transportService.getRemoteClusterService(); + this.transportService = transportService; + this.shardAction = shardAction; + } + + @Override + protected void doExecute(FieldCapabilitiesRequest request, + final ActionListener listener) { + final ClusterState clusterState = clusterService.state(); + final Map remoteClusterIndices = remoteClusterService.groupIndices(request.indicesOptions(), + request.indices(), idx -> indexNameExpressionResolver.hasIndexOrAlias(idx, clusterState)); + final OriginalIndices localIndices = remoteClusterIndices.remove(RemoteClusterAware.LOCAL_CLUSTER_GROUP_KEY); + final String[] concreteIndices; + if (remoteClusterIndices.isEmpty() == false && localIndices.indices().length == 0) { + // in the case we have one or more remote indices but no local we don't expand to all local indices and just do remote + // indices + concreteIndices = Strings.EMPTY_ARRAY; + } else { + concreteIndices = indexNameExpressionResolver.concreteIndexNames(clusterState, localIndices); + } + final int totalNumRequest = concreteIndices.length + remoteClusterIndices.size(); + final CountDown completionCounter = new CountDown(totalNumRequest); + final List indexResponses = Collections.synchronizedList(new ArrayList<>()); + final Runnable onResponse = () -> { + if (completionCounter.countDown()) { + if (request.isMergeResults()) { + listener.onResponse(merge(indexResponses)); + } else { + listener.onResponse(new FieldCapabilitiesResponse(indexResponses)); + } 
+ } + }; + if (totalNumRequest == 0) { + listener.onResponse(new FieldCapabilitiesResponse()); + } else { + ActionListener innerListener = new ActionListener() { + @Override + public void onResponse(FieldCapabilitiesIndexResponse result) { + indexResponses.add(result); + onResponse.run(); + } + + @Override + public void onFailure(Exception e) { + // TODO we should somehow inform the user that we failed + onResponse.run(); + } + }; + for (String index : concreteIndices) { + shardAction.execute(new FieldCapabilitiesIndexRequest(request.fields(), index), innerListener); + } + + // this is the cross cluster part of this API - we force the other cluster to not merge the results but instead + // send us back all individual index results. + for (Map.Entry remoteIndices : remoteClusterIndices.entrySet()) { + String clusterAlias = remoteIndices.getKey(); + OriginalIndices originalIndices = remoteIndices.getValue(); + // if we are connected this is basically a no-op, if we are not we try to connect in parallel in a non-blocking fashion + remoteClusterService.ensureConnected(clusterAlias, ActionListener.wrap(v -> { + Transport.Connection connection = remoteClusterService.getConnection(clusterAlias); + FieldCapabilitiesRequest remoteRequest = new FieldCapabilitiesRequest(); + remoteRequest.setMergeResults(false); // we need to merge on this node + remoteRequest.indicesOptions(originalIndices.indicesOptions()); + remoteRequest.indices(originalIndices.indices()); + remoteRequest.fields(request.fields()); + transportService.sendRequest(connection, FieldCapabilitiesAction.NAME, remoteRequest, TransportRequestOptions.EMPTY, + new TransportResponseHandler() { + + @Override + public FieldCapabilitiesResponse newInstance() { + return new FieldCapabilitiesResponse(); + } + + @Override + public void handleResponse(FieldCapabilitiesResponse response) { + try { + for (FieldCapabilitiesIndexResponse res : response.getIndexResponses()) { + indexResponses.add(new FieldCapabilitiesIndexResponse(RemoteClusterAware. 
+ buildRemoteIndexName(clusterAlias, res.getIndexName()), res.get())); + } + } finally { + onResponse.run(); + } + } + + @Override + public void handleException(TransportException exp) { + onResponse.run(); + } + + @Override + public String executor() { + return ThreadPool.Names.SAME; + } + }); + }, e -> onResponse.run())); + } + + } + } + + private FieldCapabilitiesResponse merge(List indexResponses) { + Map> responseMapBuilder = new HashMap<> (); + for (FieldCapabilitiesIndexResponse response : indexResponses) { + innerMerge(responseMapBuilder, response.getIndexName(), response.get()); + } + + Map> responseMap = new HashMap<>(); + for (Map.Entry> entry : + responseMapBuilder.entrySet()) { + Map typeMap = new HashMap<>(); + boolean multiTypes = entry.getValue().size() > 1; + for (Map.Entry fieldEntry : + entry.getValue().entrySet()) { + typeMap.put(fieldEntry.getKey(), fieldEntry.getValue().build(multiTypes)); + } + responseMap.put(entry.getKey(), typeMap); + } + + return new FieldCapabilitiesResponse(responseMap); + } + + private void innerMerge(Map> responseMapBuilder, String indexName, + Map map) { + for (Map.Entry entry : map.entrySet()) { + final String field = entry.getKey(); + final FieldCapabilities fieldCap = entry.getValue(); + Map typeMap = responseMapBuilder.computeIfAbsent(field, f -> new HashMap<>()); + FieldCapabilities.Builder builder = typeMap.computeIfAbsent(fieldCap.getType(), key -> new FieldCapabilities.Builder(field, + key)); + builder.add(indexName, fieldCap.isSearchable(), fieldCap.isAggregatable()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesIndexAction.java b/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesIndexAction.java new file mode 100644 index 0000000000000..b9e6f56b6d7ad --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/fieldcaps/TransportFieldCapabilitiesIndexAction.java @@ -0,0 +1,100 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.fieldcaps; + +import org.elasticsearch.action.support.ActionFilters; +import org.elasticsearch.action.support.single.shard.TransportSingleShardAction; +import org.elasticsearch.cluster.ClusterState; +import org.elasticsearch.cluster.block.ClusterBlockException; +import org.elasticsearch.cluster.block.ClusterBlockLevel; +import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.routing.ShardsIterator; +import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.inject.Inject; +import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.mapper.MappedFieldType; +import org.elasticsearch.index.mapper.MapperService; +import org.elasticsearch.index.shard.ShardId; +import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportService; + +import java.util.HashMap; +import java.util.HashSet; +import java.util.Map; +import java.util.Set; + +public class TransportFieldCapabilitiesIndexAction extends TransportSingleShardAction { + + private static final String ACTION_NAME = FieldCapabilitiesAction.NAME + "[index]"; + + private final IndicesService indicesService; + + @Inject + public TransportFieldCapabilitiesIndexAction(Settings settings, ClusterService clusterService, TransportService transportService, + IndicesService indicesService, ThreadPool threadPool, ActionFilters actionFilters, + IndexNameExpressionResolver indexNameExpressionResolver) { + super(settings, ACTION_NAME, threadPool, clusterService, transportService, actionFilters, indexNameExpressionResolver, + FieldCapabilitiesIndexRequest::new, ThreadPool.Names.MANAGEMENT); + this.indicesService = indicesService; + } + + @Override + protected boolean resolveIndex(FieldCapabilitiesIndexRequest request) { + //internal action, index already resolved + return false; + } + + @Override + protected ShardsIterator shards(ClusterState state, InternalRequest request) { + // Will balance requests between shards + // Resolve patterns and deduplicate + return state.routingTable().index(request.concreteIndex()).randomAllActiveShardsIt(); + } + + @Override + protected FieldCapabilitiesIndexResponse shardOperation(final FieldCapabilitiesIndexRequest request, ShardId shardId) { + MapperService mapperService = indicesService.indexServiceSafe(shardId.getIndex()).mapperService(); + Set fieldNames = new HashSet<>(); + for (String field : request.fields()) { + fieldNames.addAll(mapperService.simpleMatchToIndexNames(field)); + } + Map responseMap = new HashMap<>(); + for (String field : fieldNames) { + MappedFieldType ft = mapperService.fullName(field); + if (ft != null) { + FieldCapabilities fieldCap = new FieldCapabilities(field, ft.typeName(), ft.isSearchable(), ft.isAggregatable()); + responseMap.put(field, fieldCap); + } + } + return new FieldCapabilitiesIndexResponse(shardId.getIndexName(), responseMap); + } + + @Override + protected FieldCapabilitiesIndexResponse newResponse() { + return new FieldCapabilitiesIndexResponse(); + } + + @Override + protected ClusterBlockException checkRequestBlock(ClusterState state, InternalRequest request) { + return state.blocks().indexBlockedException(ClusterBlockLevel.METADATA_READ, request.concreteIndex()); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java index 9cc8909505731..44b330dc37cc7 
100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStats.java @@ -124,13 +124,15 @@ public String getDisplayType() { return "string"; case 4: return "ip"; + case 5: + return "geo_point"; default: throw new IllegalArgumentException("Unknown type."); } } /** - * @return true if min/max informations are available for this field + * @return true if min/max information is available for this field */ public boolean hasMinMax() { return hasMinMax; @@ -276,7 +278,7 @@ public final void accumulate(FieldStats other) { } } - private void updateMinMax(T min, T max) { + protected void updateMinMax(T min, T max) { if (compare(minValue, min) > 0) { minValue = min; } @@ -321,12 +323,14 @@ public final void writeTo(StreamOutput out) throws IOException { out.writeLong(sumTotalTermFreq); out.writeBoolean(isSearchable); out.writeBoolean(isAggregatable); - if (out.getVersion().onOrAfter(Version.V_5_2_0_UNRELEASED)) { + if (out.getVersion().onOrAfter(Version.V_5_2_0)) { out.writeBoolean(hasMinMax); if (hasMinMax) { writeMinMax(out); } } else { + assert hasMinMax : "cannot serialize null min/max fieldstats in a mixed-cluster " + + "with pre-" + Version.V_5_2_0 + " nodes, remote version [" + out.getVersion() + "]"; writeMinMax(out); } } @@ -643,6 +647,55 @@ public String getMaxValueAsString() { } } + public static class GeoPoint extends FieldStats { + public GeoPoint(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, + boolean isSearchable, boolean isAggregatable) { + super((byte) 5, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); + } + + public GeoPoint(long maxDoc, long docCount, long sumDocFreq, long sumTotalTermFreq, + boolean isSearchable, boolean isAggregatable, + org.elasticsearch.common.geo.GeoPoint minValue, org.elasticsearch.common.geo.GeoPoint maxValue) { + super((byte) 5, maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, + minValue, maxValue); + } + + @Override + public org.elasticsearch.common.geo.GeoPoint valueOf(String value, String fmt) { + return org.elasticsearch.common.geo.GeoPoint.parseFromLatLon(value); + } + + @Override + protected void updateMinMax(org.elasticsearch.common.geo.GeoPoint min, org.elasticsearch.common.geo.GeoPoint max) { + minValue.reset(Math.min(min.lat(), minValue.lat()), Math.min(min.lon(), minValue.lon())); + maxValue.reset(Math.max(max.lat(), maxValue.lat()), Math.max(max.lon(), maxValue.lon())); + } + + @Override + public int compare(org.elasticsearch.common.geo.GeoPoint p1, org.elasticsearch.common.geo.GeoPoint p2) { + throw new IllegalArgumentException("compare is not supported for geo_point field stats"); + } + + @Override + public void writeMinMax(StreamOutput out) throws IOException { + out.writeDouble(minValue.lat()); + out.writeDouble(minValue.lon()); + out.writeDouble(maxValue.lat()); + out.writeDouble(maxValue.lon()); + } + + @Override + public String getMinValueAsString() { + return minValue.toString(); + } + + @Override + public String getMaxValueAsString() { + return maxValue.toString(); + } + } + public static FieldStats readFrom(StreamInput in) throws IOException { byte type = in.readByte(); long maxDoc = in.readLong(); @@ -652,7 +705,7 @@ public static FieldStats readFrom(StreamInput in) throws IOException { boolean isSearchable = in.readBoolean(); boolean isAggregatable = in.readBoolean(); boolean hasMinMax = true; - if (in.getVersion().onOrAfter(Version.V_5_2_0_UNRELEASED)) { + if 
(in.getVersion().onOrAfter(Version.V_5_2_0)) { hasMinMax = in.readBoolean(); } switch (type) { @@ -690,7 +743,7 @@ public static FieldStats readFrom(StreamInput in) throws IOException { isSearchable, isAggregatable); } - case 4: + case 4: { if (hasMinMax == false) { return new Ip(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable); @@ -705,7 +758,17 @@ public static FieldStats readFrom(StreamInput in) throws IOException { InetAddress max = InetAddressPoint.decode(b2); return new Ip(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, isSearchable, isAggregatable, min, max); - + } + case 5: { + if (hasMinMax == false) { + return new GeoPoint(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable); + } + org.elasticsearch.common.geo.GeoPoint min = new org.elasticsearch.common.geo.GeoPoint(in.readDouble(), in.readDouble()); + org.elasticsearch.common.geo.GeoPoint max = new org.elasticsearch.common.geo.GeoPoint(in.readDouble(), in.readDouble()); + return new GeoPoint(maxDoc, docCount, sumDocFreq, sumTotalTermFreq, + isSearchable, isAggregatable, min, max); + } default: throw new IllegalArgumentException("Unknown type."); } diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsRequest.java b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsRequest.java index 7dfcdcfa10841..6453e4dff3538 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsRequest.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsRequest.java @@ -19,7 +19,6 @@ package org.elasticsearch.action.fieldstats; -import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.ValidateActions; import org.elasticsearch.action.support.broadcast.BroadcastRequest; @@ -200,9 +199,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeByte(indexConstraint.getProperty().getId()); out.writeByte(indexConstraint.getComparison().getId()); out.writeString(indexConstraint.getValue()); - if (out.getVersion().onOrAfter(Version.V_2_0_1)) { - out.writeOptionalString(indexConstraint.getOptionalFormat()); - } + out.writeOptionalString(indexConstraint.getOptionalFormat()); } out.writeString(level); out.writeBoolean(useCache); diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsResponse.java b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsResponse.java index f126c73d04d0a..2046aeddc1b6a 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsResponse.java @@ -93,7 +93,7 @@ public void writeTo(StreamOutput out) throws IOException { for (Map.Entry> entry1 : indicesMergedFieldStats.entrySet()) { out.writeString(entry1.getKey()); int size = entry1.getValue().size(); - if (out.getVersion().before(Version.V_5_2_0_UNRELEASED)) { + if (out.getVersion().before(Version.V_5_2_0)) { // filter fieldstats without min/max information for (FieldStats stats : entry1.getValue().values()) { if (stats.hasMinMax() == false) { @@ -103,7 +103,7 @@ public void writeTo(StreamOutput out) throws IOException { } out.writeVInt(size); for (Map.Entry entry2 : entry1.getValue().entrySet()) { - if (entry2.getValue().hasMinMax() || out.getVersion().onOrAfter(Version.V_5_2_0_UNRELEASED)) { + if (entry2.getValue().hasMinMax() || out.getVersion().onOrAfter(Version.V_5_2_0)) { out.writeString(entry2.getKey()); 
entry2.getValue().writeTo(out); } diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardRequest.java b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardRequest.java index 3844895bc2459..b393a3789cf51 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardRequest.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardRequest.java @@ -39,8 +39,7 @@ public FieldStatsShardRequest() { public FieldStatsShardRequest(ShardId shardId, FieldStatsRequest request) { super(shardId, request); - Set fields = new HashSet<>(); - fields.addAll(Arrays.asList(request.getFields())); + Set fields = new HashSet<>(Arrays.asList(request.getFields())); for (IndexConstraint indexConstraint : request.getIndexConstraints()) { fields.add(indexConstraint.getField()); } diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardResponse.java b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardResponse.java index d94cfcd2958f8..d2f3a7d5e4564 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardResponse.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/FieldStatsShardResponse.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.fieldstats; +import org.elasticsearch.Version; import org.elasticsearch.action.support.broadcast.BroadcastShardResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; @@ -27,6 +28,7 @@ import java.io.IOException; import java.util.HashMap; import java.util.Map; +import java.util.stream.Collectors; public class FieldStatsShardResponse extends BroadcastShardResponse { @@ -44,6 +46,12 @@ public Map> getFieldStats() { return fieldStats; } + Map > filterNullMinMax() { + return fieldStats.entrySet().stream() + .filter((e) -> e.getValue().hasMinMax()) + .collect(Collectors.toMap(p -> p.getKey(), p -> p.getValue())); + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -59,8 +67,17 @@ public void readFrom(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); - out.writeVInt(fieldStats.size()); - for (Map.Entry> entry : fieldStats.entrySet()) { + final Map > stats; + if (out.getVersion().before(Version.V_5_2_0)) { + /** + * FieldStats with null min/max are not (de)serializable in versions prior to {@link Version.V_5_2_0_UNRELEASED} + */ + stats = filterNullMinMax(); + } else { + stats = getFieldStats(); + } + out.writeVInt(stats.size()); + for (Map.Entry> entry : stats.entrySet()) { out.writeString(entry.getKey()); entry.getValue().writeTo(out); } diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/IndexConstraint.java b/core/src/main/java/org/elasticsearch/action/fieldstats/IndexConstraint.java index 62eaf207e31da..fe39ba6e3772f 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/IndexConstraint.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/IndexConstraint.java @@ -19,7 +19,6 @@ package org.elasticsearch.action.fieldstats; -import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import java.io.IOException; @@ -39,11 +38,7 @@ public class IndexConstraint { this.property = Property.read(input.readByte()); this.comparison = Comparison.read(input.readByte()); this.value = input.readString(); - if (input.getVersion().onOrAfter(Version.V_2_0_1)) { - 
this.optionalFormat = input.readOptionalString(); - } else { - this.optionalFormat = null; - } + this.optionalFormat = input.readOptionalString(); } public IndexConstraint(String field, Property property, Comparison comparison, String value) { diff --git a/core/src/main/java/org/elasticsearch/action/fieldstats/TransportFieldStatsAction.java b/core/src/main/java/org/elasticsearch/action/fieldstats/TransportFieldStatsAction.java index e65f69514320f..9ee72223a6684 100644 --- a/core/src/main/java/org/elasticsearch/action/fieldstats/TransportFieldStatsAction.java +++ b/core/src/main/java/org/elasticsearch/action/fieldstats/TransportFieldStatsAction.java @@ -36,8 +36,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.mapper.MappedFieldType; -import org.elasticsearch.index.mapper.MapperService; import org.elasticsearch.index.shard.IndexShard; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; diff --git a/core/src/main/java/org/elasticsearch/action/get/GetResponse.java b/core/src/main/java/org/elasticsearch/action/get/GetResponse.java index 3ba21c447e71c..296fbe6610e02 100644 --- a/core/src/main/java/org/elasticsearch/action/get/GetResponse.java +++ b/core/src/main/java/org/elasticsearch/action/get/GetResponse.java @@ -25,7 +25,7 @@ import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.get.GetField; @@ -42,7 +42,7 @@ * @see GetRequest * @see org.elasticsearch.client.Client#get(GetRequest) */ -public class GetResponse extends ActionResponse implements Iterable, ToXContent { +public class GetResponse extends ActionResponse implements Iterable, ToXContentObject { GetResult getResult; @@ -194,6 +194,6 @@ public int hashCode() { @Override public String toString() { - return Strings.toString(this, true); + return Strings.toString(this); } } diff --git a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java index 5407184ded31a..20a619cec2c70 100644 --- a/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java +++ b/core/src/main/java/org/elasticsearch/action/get/MultiGetRequest.java @@ -44,6 +44,7 @@ import java.util.Collections; import java.util.Iterator; import java.util.List; +import java.util.Locale; public class MultiGetRequest extends ActionRequest implements Iterable, CompositeIndicesRequest, RealtimeRequest { @@ -319,6 +320,14 @@ public MultiGetRequest add(@Nullable String defaultIndex, @Nullable String defau boolean allowExplicitIndex) throws IOException { XContentParser.Token token; String currentFieldName = null; + if ((token = parser.nextToken()) != XContentParser.Token.START_OBJECT) { + final String message = String.format( + Locale.ROOT, + "unexpected token [%s], expected [%s]", + token, + XContentParser.Token.START_OBJECT); + throw new ParsingException(parser.getTokenLocation(), message); + } while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); @@ 
-327,7 +336,22 @@ public MultiGetRequest add(@Nullable String defaultIndex, @Nullable String defau parseDocuments(parser, this.items, defaultIndex, defaultType, defaultFields, defaultFetchSource, defaultRouting, allowExplicitIndex); } else if ("ids".equals(currentFieldName)) { parseIds(parser, this.items, defaultIndex, defaultType, defaultFields, defaultFetchSource, defaultRouting); + } else { + final String message = String.format( + Locale.ROOT, + "unknown key [%s] for a %s, expected [docs] or [ids]", + currentFieldName, + token); + throw new ParsingException(parser.getTokenLocation(), message); } + } else { + final String message = String.format( + Locale.ROOT, + "unexpected token [%s], expected [%s] or [%s]", + token, + XContentParser.Token.FIELD_NAME, + XContentParser.Token.START_ARRAY); + throw new ParsingException(parser.getTokenLocation(), message); } } return this; @@ -379,7 +403,8 @@ public static void parseDocuments(XContentParser parser, List items, @Null } else if ("_version_type".equals(currentFieldName) || "_versionType".equals(currentFieldName) || "version_type".equals(currentFieldName) || "versionType".equals(currentFieldName)) { versionType = VersionType.fromString(parser.text()); } else if ("_source".equals(currentFieldName)) { - if (parser.isBooleanValue()) { + // check lenient to avoid interpreting the value as string but parse strict in order to provoke an error early on. + if (parser.isBooleanValueLenient()) { fetchSourceContext = new FetchSourceContext(parser.booleanValue(), fetchSourceContext.includes(), fetchSourceContext.excludes()); } else if (token == XContentParser.Token.VALUE_STRING) { diff --git a/core/src/main/java/org/elasticsearch/action/get/MultiGetResponse.java b/core/src/main/java/org/elasticsearch/action/get/MultiGetResponse.java index 4fc766e2b30d1..93e4272bd956c 100644 --- a/core/src/main/java/org/elasticsearch/action/get/MultiGetResponse.java +++ b/core/src/main/java/org/elasticsearch/action/get/MultiGetResponse.java @@ -24,14 +24,14 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.Arrays; import java.util.Iterator; -public class MultiGetResponse extends ActionResponse implements Iterable, ToXContent { +public class MultiGetResponse extends ActionResponse implements Iterable, ToXContentObject { /** * Represents a failure. 
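For illustration only, a minimal sketch of the request-body shapes the stricter MultiGetRequest parsing above accepts and rejects; the index, type and ids are invented, and previously an unrecognized key or a non-object body slipped through without a clear error:

```java
public class MultiGetBodyShapes {
    public static void main(String[] args) {
        // Accepted: a single top-level object holding a "docs" array of item objects.
        String docs = "{\"docs\":[{\"_index\":\"twitter\",\"_type\":\"tweet\",\"_id\":\"1\"}]}";

        // Accepted: a single top-level object holding an "ids" array; index and type come from the defaults.
        String ids = "{\"ids\":[\"1\",\"2\"]}";

        // Rejected with a ParsingException: the body is not wrapped in a top-level object.
        String topLevelArray = "[{\"_id\":\"1\"}]";

        // Rejected with a ParsingException: only "docs" and "ids" are recognized keys.
        String unknownKey = "{\"documents\":[{\"_id\":\"1\"}]}";

        System.out.println(docs + "\n" + ids + "\n" + topLevelArray + "\n" + unknownKey);
    }
}
```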
@@ -128,6 +128,7 @@ public Iterator iterator() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.DOCS); for (MultiGetItemResponse response : responses) { if (response.isFailed()) { @@ -136,7 +137,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Fields._INDEX, failure.getIndex()); builder.field(Fields._TYPE, failure.getType()); builder.field(Fields._ID, failure.getId()); - ElasticsearchException.renderException(builder, params, failure.getFailure()); + ElasticsearchException.generateFailureXContent(builder, params, failure.getFailure(), true); builder.endObject(); } else { GetResponse getResponse = response.getResponse(); @@ -144,6 +145,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java b/core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java index 6b9de7ecf64e3..884af4a3af998 100644 --- a/core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java +++ b/core/src/main/java/org/elasticsearch/action/get/TransportGetAction.java @@ -68,13 +68,6 @@ protected ShardIterator shards(ClusterState state, InternalRequest request) { @Override protected void resolveRequest(ClusterState state, InternalRequest request) { IndexMetaData indexMeta = state.getMetaData().index(request.concreteIndex()); - if (request.request().realtime && // if the realtime flag is set - request.request().preference() == null && // the preference flag is not already set - indexMeta != null && // and we have the index - IndexMetaData.isIndexUsingShadowReplicas(indexMeta.getSettings())) { // and the index uses shadow replicas - // set the preference for the request to use "_primary" automatically - request.request().preference(Preference.PRIMARY.type()); - } // update the routing (request#index here is possibly an alias) request.request().routing(state.metaData().resolveIndexRouting(request.request().parent(), request.request().routing(), request.request().index())); // Fail fast on the node that received the request. 
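As a side note on the ToXContentObject changes above, a sketch of what callers now get back: a self-contained JSON object rather than a fragment, with per-item failures written through ElasticsearchException.generateFailureXContent. The helper class and the sample output below are assumptions for illustration, not code from this change:

```java
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.common.Strings;

// Sketch only: "response" is assumed to come from an ordinary multi-get call,
// e.g. client.multiGet(request).actionGet(). Because MultiGetResponse (and GetResponse)
// now emit their own startObject()/endObject(), rendering them yields a complete object,
// roughly {"docs":[{"_index":"twitter","_type":"tweet","_id":"1","found":false}]}.
final class MultiGetJsonExample {
    static String asJson(MultiGetResponse response) {
        return Strings.toString(response);
    }
}
```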
diff --git a/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java b/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java index 5809280946c03..5667bf5f9d517 100644 --- a/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java +++ b/core/src/main/java/org/elasticsearch/action/index/IndexRequest.java @@ -22,9 +22,11 @@ import org.elasticsearch.ElasticsearchGenerationException; import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; +import org.elasticsearch.action.CompositeIndicesRequest; import org.elasticsearch.action.DocWriteRequest; import org.elasticsearch.action.RoutingMissingException; import org.elasticsearch.action.support.replication.ReplicatedWriteRequest; +import org.elasticsearch.action.support.replication.ReplicationRequest; import org.elasticsearch.client.Requests; import org.elasticsearch.cluster.metadata.MappingMetaData; import org.elasticsearch.cluster.metadata.MetaData; @@ -35,17 +37,20 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.lucene.uid.Versions; +import org.elasticsearch.common.unit.ByteSizeValue; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentFactory; import org.elasticsearch.common.xcontent.XContentHelper; import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.index.VersionType; +import org.elasticsearch.index.shard.ShardId; import java.io.IOException; import java.nio.charset.StandardCharsets; import java.util.Locale; import java.util.Map; +import java.util.Objects; import static org.elasticsearch.action.ValidateActions.addValidationError; @@ -54,10 +59,10 @@ * created using {@link org.elasticsearch.client.Requests#indexRequest(String)}. * * The index requires the {@link #index()}, {@link #type(String)}, {@link #id(String)} and - * {@link #source(byte[])} to be set. + * {@link #source(byte[], XContentType)} to be set. * - * The source (content to index) can be set in its bytes form using ({@link #source(byte[])}), - * its string form ({@link #source(String)}) or using a {@link org.elasticsearch.common.xcontent.XContentBuilder} + * The source (content to index) can be set in its bytes form using ({@link #source(byte[], XContentType)}), + * its string form ({@link #source(String, XContentType)}) or using a {@link org.elasticsearch.common.xcontent.XContentBuilder} * ({@link #source(org.elasticsearch.common.xcontent.XContentBuilder)}). * * If the {@link #id(String)} is not set, it will be automatically generated. 
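To make the explicit content-type handling this file is moving to (detailed in the hunks that follow) concrete, a minimal sketch of the new call sites; the index, type, id and field values are invented for illustration:

```java
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.common.xcontent.XContentType;

// Sketch: source(...) is now told its format instead of auto-detecting it, and
// validate() rejects a request whose content type is missing.
final class IndexRequestSourceExamples {
    static IndexRequest fromJsonString() {
        return new IndexRequest("twitter", "tweet")
                .id("1")
                .source("{\"user\":\"kimchy\",\"message\":\"trying out Elasticsearch\"}", XContentType.JSON);
    }

    static IndexRequest fromFieldValuePairs() {
        // The varargs form can name its content type explicitly as well.
        return new IndexRequest("twitter", "tweet")
                .id("1")
                .source(XContentType.JSON, "user", "kimchy", "post_date", "2009-11-15T14:12:12");
    }
}
```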
@@ -66,7 +71,14 @@ * @see org.elasticsearch.client.Requests#indexRequest(String) * @see org.elasticsearch.client.Client#index(IndexRequest) */ -public class IndexRequest extends ReplicatedWriteRequest implements DocWriteRequest { +public class IndexRequest extends ReplicatedWriteRequest implements DocWriteRequest, CompositeIndicesRequest { + + /** + * Max length of the source document to include into toString() + * + * @see ReplicationRequest#createTask(long, java.lang.String, java.lang.String, org.elasticsearch.tasks.TaskId) + */ + static final int MAX_SOURCE_LENGTH_IN_TOSTRING = 2048; private String type; private String id; @@ -82,7 +94,7 @@ public class IndexRequest extends ReplicatedWriteRequest implement private long version = Versions.MATCH_ANY; private VersionType versionType = VersionType.INTERNAL; - private XContentType contentType = Requests.INDEX_CONTENT_TYPE; + private XContentType contentType; private String pipeline; @@ -102,7 +114,7 @@ public IndexRequest() { /** * Constructs a new index request against the specific index. The {@link #type(String)} - * {@link #source(byte[])} must be set. + * {@link #source(byte[], XContentType)} must be set. */ public IndexRequest(String index) { this.index = index; @@ -110,7 +122,7 @@ public IndexRequest(String index) { /** * Constructs a new index request against the specific index and type. The - * {@link #source(byte[])} must be set. + * {@link #source(byte[], XContentType)} must be set. */ public IndexRequest(String index, String type) { this.index = index; @@ -139,7 +151,9 @@ public ActionRequestValidationException validate() { if (source == null) { validationException = addValidationError("source is missing", validationException); } - + if (contentType == null) { + validationException = addValidationError("content type is missing", validationException); + } final long resolvedVersion = resolveVersionDefaults(); if (opType() == OpType.CREATE) { if (versionType != VersionType.INTERNAL) { @@ -178,20 +192,13 @@ public ActionRequestValidationException validate() { } /** - * The content type that will be used when generating a document from user provided objects like Maps. + * The content type. This will be used when generating a document from user provided objects like Maps and when parsing the + * source at index time */ public XContentType getContentType() { return contentType; } - /** - * Sets the content type that will be used when generating a document from user provided objects (like Map). - */ - public IndexRequest contentType(XContentType contentType) { - this.contentType = contentType; - return this; - } - /** * The type of the indexed document. */ @@ -283,16 +290,16 @@ public BytesReference source() { } public Map sourceAsMap() { - return XContentHelper.convertToMap(source, false).v2(); + return XContentHelper.convertToMap(source, false, contentType).v2(); } /** - * Index the Map as a {@link org.elasticsearch.client.Requests#INDEX_CONTENT_TYPE}. + * Index the Map in {@link Requests#INDEX_CONTENT_TYPE} format * * @param source The map to index */ public IndexRequest source(Map source) throws ElasticsearchGenerationException { - return source(source, contentType); + return source(source, Requests.INDEX_CONTENT_TYPE); } /** @@ -314,23 +321,21 @@ public IndexRequest source(Map source, XContentType contentType) throws Elastics * Sets the document source to index. * * Note, its preferable to either set it using {@link #source(org.elasticsearch.common.xcontent.XContentBuilder)} - * or using the {@link #source(byte[])}. 
+ * or using the {@link #source(byte[], XContentType)}. */ - public IndexRequest source(String source) { - this.source = new BytesArray(source.getBytes(StandardCharsets.UTF_8)); - return this; + public IndexRequest source(String source, XContentType xContentType) { + return source(new BytesArray(source), xContentType); } /** * Sets the content source to index. */ public IndexRequest source(XContentBuilder sourceBuilder) { - source = sourceBuilder.bytes(); - return this; + return source(sourceBuilder.bytes(), sourceBuilder.contentType()); } /** - * Sets the content source to index. + * Sets the content source to index using the default content type ({@link Requests#INDEX_CONTENT_TYPE}) *
* Note: the number of objects passed to this method must be an even * number. Also the first argument in each pair (the field name) must have a @@ -338,6 +343,18 @@ public IndexRequest source(XContentBuilder sourceBuilder) { *
*/ public IndexRequest source(Object... source) { + return source(Requests.INDEX_CONTENT_TYPE, source); + } + + /** + * Sets the content source to index. + *
+ * Note: the number of objects passed to this method as varargs must be an even + * number. Also the first argument in each pair (the field name) must have a + * valid String representation. + *
+ */ + public IndexRequest source(XContentType xContentType, Object... source) { if (source.length % 2 != 0) { throw new IllegalArgumentException("The number of object passed must be even but was [" + source.length + "]"); } @@ -345,7 +362,7 @@ public IndexRequest source(Object... source) { throw new IllegalArgumentException("you are using the removed method for source with bytes and unsafe flag, the unsafe flag was removed, please just use source(BytesReference)"); } try { - XContentBuilder builder = XContentFactory.contentBuilder(contentType); + XContentBuilder builder = XContentFactory.contentBuilder(xContentType); builder.startObject(); for (int i = 0; i < source.length; i++) { builder.field(source[i++].toString(), source[i]); @@ -360,16 +377,17 @@ public IndexRequest source(Object... source) { /** * Sets the document to index in bytes form. */ - public IndexRequest source(BytesReference source) { - this.source = source; + public IndexRequest source(BytesReference source, XContentType xContentType) { + this.source = Objects.requireNonNull(source); + this.contentType = Objects.requireNonNull(xContentType); return this; } /** * Sets the document to index in bytes form. */ - public IndexRequest source(byte[] source) { - return source(source, 0, source.length); + public IndexRequest source(byte[] source, XContentType xContentType) { + return source(source, 0, source.length, xContentType); } /** @@ -380,9 +398,8 @@ public IndexRequest source(byte[] source) { * @param offset The offset in the byte array * @param length The length of the data */ - public IndexRequest source(byte[] source, int offset, int length) { - this.source = new BytesArray(source, offset, length); - return this; + public IndexRequest source(byte[] source, int offset, int length, XContentType xContentType) { + return source(new BytesArray(source, offset, length), xContentType); } /** @@ -467,7 +484,7 @@ public VersionType versionType() { } - public void process(@Nullable MappingMetaData mappingMd, boolean allowIdGeneration, String concreteIndex) { + public void process(@Nullable MappingMetaData mappingMd, String concreteIndex) { if (mappingMd != null) { // might as well check for routing here if (mappingMd.routing().required() && routing == null) { @@ -475,17 +492,21 @@ public void process(@Nullable MappingMetaData mappingMd, boolean allowIdGenerati } if (parent != null && !mappingMd.hasParentField()) { - throw new IllegalArgumentException("Can't specify parent if no parent field has been configured"); + throw new IllegalArgumentException("can't specify parent if no parent field has been configured"); } } else { if (parent != null) { - throw new IllegalArgumentException("Can't specify parent if no parent field has been configured"); + throw new IllegalArgumentException("can't specify parent if no parent field has been configured"); } } - // generate id if not already provided and id generation is allowed - if (allowIdGeneration && id == null) { - assert autoGeneratedTimestamp == -1; + if ("".equals(id)) { + throw new IllegalArgumentException("if _id is specified it must not be empty"); + } + + // generate id if not already provided + if (id == null) { + assert autoGeneratedTimestamp == -1 : "timestamp has already been generated!"; autoGeneratedTimestamp = Math.max(0, System.currentTimeMillis()); // extra paranoia id(UUIDs.base64UUID()); } @@ -503,7 +524,7 @@ public void readFrom(StreamInput in) throws IOException { id = in.readOptionalString(); routing = in.readOptionalString(); parent = in.readOptionalString(); - if 
(in.getVersion().before(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (in.getVersion().before(Version.V_6_0_0_alpha1)) { in.readOptionalString(); // timestamp in.readOptionalWriteable(TimeValue::new); // ttl } @@ -514,6 +535,11 @@ public void readFrom(StreamInput in) throws IOException { pipeline = in.readOptionalString(); isRetry = in.readBoolean(); autoGeneratedTimestamp = in.readLong(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + contentType = in.readOptionalWriteable(XContentType::readFrom); + } else { + contentType = XContentFactory.xContentType(source); + } } @Override @@ -523,7 +549,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(id); out.writeOptionalString(routing); out.writeOptionalString(parent); - if (out.getVersion().before(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (out.getVersion().before(Version.V_6_0_0_alpha1)) { // Serialize a fake timestamp. 5.x expect this value to be set by the #process method so we can't use null. // On the other hand, indices created on 5.x do not index the timestamp field. Therefore passing a 0 (or any value) for // the transport layer OK as it will be ignored. @@ -533,7 +559,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBytesReference(source); out.writeByte(opType.getId()); // ES versions below 5.1.2 don't know about resolveVersionDefaults but resolve the version eagerly (which messes with validation). - if (out.getVersion().before(Version.V_5_1_2_UNRELEASED)) { + if (out.getVersion().before(Version.V_5_1_2)) { out.writeLong(resolveVersionDefaults()); } else { out.writeLong(version); @@ -542,13 +568,21 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(pipeline); out.writeBoolean(isRetry); out.writeLong(autoGeneratedTimestamp); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + out.writeOptionalWriteable(contentType); + } } @Override public String toString() { String sSource = "_na_"; try { - sSource = XContentHelper.convertToJson(source, false); + if (source.length() > MAX_SOURCE_LENGTH_IN_TOSTRING) { + sSource = "n/a, actual length: [" + new ByteSizeValue(source.length()).toString() + "], max length: " + + new ByteSizeValue(MAX_SOURCE_LENGTH_IN_TOSTRING).toString(); + } else { + sSource = XContentHelper.convertToJson(source, false); + } } catch (Exception e) { // ignore } @@ -575,4 +609,35 @@ public void onRetry() { public long getAutoGeneratedTimestamp() { return autoGeneratedTimestamp; } + + /** + * Override this method from ReplicationAction, this is where we are storing our state in the request object (which we really shouldn't + * do). Once the transport client goes away we can move away from making this available, but in the meantime this is dangerous to set or + * use because the IndexRequest object will always be wrapped in a bulk request envelope, which is where this *should* be set. + */ + @Override + public long primaryTerm() { + throw new UnsupportedOperationException("primary term should never be set on IndexRequest"); + } + + /** + * Override this method from ReplicationAction, this is where we are storing our state in the request object (which we really shouldn't + * do). Once the transport client goes away we can move away from making this available, but in the meantime this is dangerous to set or + * use because the IndexRequest object will always be wrapped in a bulk request envelope, which is where this *should* be set. 
+ */ + @Override + public void primaryTerm(long term) { + throw new UnsupportedOperationException("primary term should never be set on IndexRequest"); + } + + /** + * Override this method from ReplicationAction, this is where we are storing our state in the request object (which we really shouldn't + * do). Once the transport client goes away we can move away from making this available, but in the meantime this is dangerous to set or + * use because the IndexRequest object will always be wrapped in a bulk request envelope, which is where this *should* be set. + */ + @Override + public IndexRequest setShardId(ShardId shardId) { + throw new UnsupportedOperationException("shard id should never be set on IndexRequest"); + } + } diff --git a/core/src/main/java/org/elasticsearch/action/index/IndexRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/index/IndexRequestBuilder.java index f7df8bffced3d..88b094a33f521 100644 --- a/core/src/main/java/org/elasticsearch/action/index/IndexRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/index/IndexRequestBuilder.java @@ -83,8 +83,8 @@ public IndexRequestBuilder setParent(String parent) { /** * Sets the source. */ - public IndexRequestBuilder setSource(BytesReference source) { - request.source(source); + public IndexRequestBuilder setSource(BytesReference source, XContentType xContentType) { + request.source(source, xContentType); return this; } @@ -112,10 +112,10 @@ public IndexRequestBuilder setSource(Map source, XContentType content * Sets the document source to index. *
* Note, its preferable to either set it using {@link #setSource(org.elasticsearch.common.xcontent.XContentBuilder)} - * or using the {@link #setSource(byte[])}. + * or using the {@link #setSource(byte[], XContentType)}. */ - public IndexRequestBuilder setSource(String source) { - request.source(source); + public IndexRequestBuilder setSource(String source, XContentType xContentType) { + request.source(source, xContentType); return this; } @@ -130,8 +130,8 @@ public IndexRequestBuilder setSource(XContentBuilder sourceBuilder) { /** * Sets the document to index in bytes form. */ - public IndexRequestBuilder setSource(byte[] source) { - request.source(source); + public IndexRequestBuilder setSource(byte[] source, XContentType xContentType) { + request.source(source, xContentType); return this; } @@ -142,9 +142,10 @@ public IndexRequestBuilder setSource(byte[] source) { * @param source The source to index * @param offset The offset in the byte array * @param length The length of the data + * @param xContentType The type/format of the source */ - public IndexRequestBuilder setSource(byte[] source, int offset, int length) { - request.source(source, offset, length); + public IndexRequestBuilder setSource(byte[] source, int offset, int length, XContentType xContentType) { + request.source(source, offset, length, xContentType); return this; } @@ -162,10 +163,15 @@ public IndexRequestBuilder setSource(Object... source) { } /** - * The content type that will be used to generate a document from user provided objects (like Map). + * Constructs a simple document with a field name and value pairs. + *
+ * Note: the number of objects passed as varargs to this method must be an even + * number. Also the first argument in each pair (the field name) must have a + * valid String representation. + *
*/ - public IndexRequestBuilder setContentType(XContentType contentType) { - request.contentType(contentType); + public IndexRequestBuilder setSource(XContentType xContentType, Object... source) { + request.source(xContentType, source); return this; } diff --git a/core/src/main/java/org/elasticsearch/action/index/IndexResponse.java b/core/src/main/java/org/elasticsearch/action/index/IndexResponse.java index b092e7e8e74bb..f3b71d590ff88 100644 --- a/core/src/main/java/org/elasticsearch/action/index/IndexResponse.java +++ b/core/src/main/java/org/elasticsearch/action/index/IndexResponse.java @@ -22,11 +22,14 @@ import org.elasticsearch.action.DocWriteResponse; import org.elasticsearch.common.Strings; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * A response of an index operation, * @@ -35,11 +38,13 @@ */ public class IndexResponse extends DocWriteResponse { + private static final String CREATED = "created"; + public IndexResponse() { } - public IndexResponse(ShardId shardId, String type, String id, long seqNo, long version, boolean created) { - super(shardId, type, id, seqNo, version, created ? Result.CREATED : Result.UPDATED); + public IndexResponse(ShardId shardId, String type, String id, long seqNo, long primaryTerm, long version, boolean created) { + super(shardId, type, id, seqNo, primaryTerm, version, created ? Result.CREATED : Result.UPDATED); } @Override @@ -57,14 +62,65 @@ public String toString() { builder.append(",version=").append(getVersion()); builder.append(",result=").append(getResult().getLowercase()); builder.append(",seqNo=").append(getSeqNo()); - builder.append(",shards=").append(Strings.toString(getShardInfo(), true)); + builder.append(",primaryTerm=").append(getPrimaryTerm()); + builder.append(",shards=").append(Strings.toString(getShardInfo())); return builder.append("]").toString(); } @Override - public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - super.toXContent(builder, params); - builder.field("created", result == Result.CREATED); + public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { + super.innerToXContent(builder, params); + builder.field(CREATED, result == Result.CREATED); return builder; } + + public static IndexResponse fromXContent(XContentParser parser) throws IOException { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + + Builder context = new Builder(); + while (parser.nextToken() != XContentParser.Token.END_OBJECT) { + parseXContentFields(parser, context); + } + return context.build(); + } + + /** + * Parse the current token and update the parsing context appropriately. + */ + public static void parseXContentFields(XContentParser parser, Builder context) throws IOException { + XContentParser.Token token = parser.currentToken(); + String currentFieldName = parser.currentName(); + + if (CREATED.equals(currentFieldName)) { + if (token.isValue()) { + context.setCreated(parser.booleanValue()); + } + } else { + DocWriteResponse.parseInnerToXContent(parser, context); + } + } + + /** + * Builder class for {@link IndexResponse}. 
This builder is usually used during xcontent parsing to + * temporarily store the parsed values, then the {@link Builder#build()} method is called to + * instantiate the {@link IndexResponse}. + */ + public static class Builder extends DocWriteResponse.Builder { + + private boolean created = false; + + public void setCreated(boolean created) { + this.created = created; + } + + @Override + public IndexResponse build() { + IndexResponse indexResponse = new IndexResponse(shardId, type, id, seqNo, primaryTerm, version, created); + indexResponse.setForcedRefresh(forcedRefresh); + if (shardInfo != null) { + indexResponse.setShardInfo(shardInfo); + } + return indexResponse; + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java b/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java index 9ed9f7f7cd11d..88a210c718019 100644 --- a/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java +++ b/core/src/main/java/org/elasticsearch/action/index/TransportIndexAction.java @@ -19,39 +19,16 @@ package org.elasticsearch.action.index; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.elasticsearch.ExceptionsHelper; -import org.elasticsearch.ResourceAlreadyExistsException; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.admin.indices.create.CreateIndexRequest; -import org.elasticsearch.action.admin.indices.create.CreateIndexResponse; -import org.elasticsearch.action.admin.indices.create.TransportCreateIndexAction; -import org.elasticsearch.action.ingest.IngestActionForwarder; +import org.elasticsearch.action.bulk.TransportBulkAction; +import org.elasticsearch.action.bulk.TransportShardBulkAction; +import org.elasticsearch.action.bulk.TransportSingleItemBulkWriteAction; import org.elasticsearch.action.support.ActionFilters; -import org.elasticsearch.action.support.AutoCreateIndex; -import org.elasticsearch.action.support.replication.ReplicationOperation; -import org.elasticsearch.action.support.replication.TransportWriteAction; -import org.elasticsearch.cluster.ClusterState; -import org.elasticsearch.cluster.action.index.MappingUpdatedAction; import org.elasticsearch.cluster.action.shard.ShardStateAction; -import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.metadata.MappingMetaData; -import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.service.ClusterService; -import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.index.engine.Engine; -import org.elasticsearch.index.mapper.MapperParsingException; -import org.elasticsearch.index.mapper.Mapping; -import org.elasticsearch.index.mapper.SourceToParse; -import org.elasticsearch.index.shard.IndexShard; -import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.indices.IndicesService; -import org.elasticsearch.ingest.IngestService; -import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -64,205 +41,25 @@ * Defaults to true. *
  • allowIdGeneration: If the id is set not, should it be generated. Defaults to true. * + * + * Deprecated use TransportBulkAction with a single item instead */ -public class TransportIndexAction extends TransportWriteAction { - - private final AutoCreateIndex autoCreateIndex; - private final boolean allowIdGeneration; - private final TransportCreateIndexAction createIndexAction; - - private final ClusterService clusterService; - private final IngestService ingestService; - private final MappingUpdatedAction mappingUpdatedAction; - private final IngestActionForwarder ingestForwarder; +@Deprecated +public class TransportIndexAction extends TransportSingleItemBulkWriteAction { @Inject public TransportIndexAction(Settings settings, TransportService transportService, ClusterService clusterService, - IndicesService indicesService, IngestService ingestService, ThreadPool threadPool, - ShardStateAction shardStateAction, TransportCreateIndexAction createIndexAction, - MappingUpdatedAction mappingUpdatedAction, ActionFilters actionFilters, - IndexNameExpressionResolver indexNameExpressionResolver, AutoCreateIndex autoCreateIndex) { + IndicesService indicesService, + ThreadPool threadPool, ShardStateAction shardStateAction, + ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, + TransportBulkAction bulkAction, TransportShardBulkAction shardBulkAction) { super(settings, IndexAction.NAME, transportService, clusterService, indicesService, threadPool, shardStateAction, - actionFilters, indexNameExpressionResolver, IndexRequest::new, IndexRequest::new, ThreadPool.Names.INDEX); - this.mappingUpdatedAction = mappingUpdatedAction; - this.createIndexAction = createIndexAction; - this.autoCreateIndex = autoCreateIndex; - this.allowIdGeneration = settings.getAsBoolean("action.allow_id_generation", true); - this.clusterService = clusterService; - this.ingestService = ingestService; - this.ingestForwarder = new IngestActionForwarder(transportService); - clusterService.addStateApplier(this.ingestForwarder); - } - - @Override - protected void doExecute(Task task, final IndexRequest request, final ActionListener listener) { - if (Strings.hasText(request.getPipeline())) { - if (clusterService.localNode().isIngestNode()) { - processIngestIndexRequest(task, request, listener); - } else { - ingestForwarder.forwardIngestRequest(IndexAction.INSTANCE, request, listener); - } - return; - } - // if we don't have a master, we don't have metadata, that's fine, let it find a master using create index API - ClusterState state = clusterService.state(); - if (shouldAutoCreate(request, state)) { - CreateIndexRequest createIndexRequest = new CreateIndexRequest(); - createIndexRequest.index(request.index()); - createIndexRequest.cause("auto(index api)"); - createIndexRequest.masterNodeTimeout(request.timeout()); - createIndexAction.execute(task, createIndexRequest, new ActionListener() { - @Override - public void onResponse(CreateIndexResponse result) { - innerExecute(task, request, listener); - } - - @Override - public void onFailure(Exception e) { - if (ExceptionsHelper.unwrapCause(e) instanceof ResourceAlreadyExistsException) { - // we have the index, do it - try { - innerExecute(task, request, listener); - } catch (Exception inner) { - inner.addSuppressed(e); - listener.onFailure(inner); - } - } else { - listener.onFailure(e); - } - } - }); - } else { - innerExecute(task, request, listener); - } - } - - protected boolean shouldAutoCreate(IndexRequest request, ClusterState state) { - return 
autoCreateIndex.shouldAutoCreate(request.index(), state); - } - - @Override - protected void resolveRequest(MetaData metaData, IndexMetaData indexMetaData, IndexRequest request) { - super.resolveRequest(metaData, indexMetaData, request); - MappingMetaData mappingMd =indexMetaData.mappingOrDefault(request.type()); - request.resolveRouting(metaData); - request.process(mappingMd, allowIdGeneration, indexMetaData.getIndex().getName()); - ShardId shardId = clusterService.operationRouting().shardId(clusterService.state(), - indexMetaData.getIndex().getName(), request.id(), request.routing()); - request.setShardId(shardId); - } - - protected void innerExecute(Task task, final IndexRequest request, final ActionListener listener) { - super.doExecute(task, request, listener); + actionFilters, indexNameExpressionResolver, IndexRequest::new, IndexRequest::new, ThreadPool.Names.INDEX, + bulkAction, shardBulkAction); } @Override protected IndexResponse newResponseInstance() { return new IndexResponse(); } - - @Override - protected WritePrimaryResult shardOperationOnPrimary(IndexRequest request, IndexShard primary) throws Exception { - final Engine.IndexResult indexResult = executeIndexRequestOnPrimary(request, primary, mappingUpdatedAction); - final IndexResponse response; - final IndexRequest replicaRequest; - if (indexResult.hasFailure() == false) { - // update the version on request so it will happen on the replicas - final long version = indexResult.getVersion(); - request.version(version); - request.versionType(request.versionType().versionTypeForReplicationAndRecovery()); - request.setSeqNo(indexResult.getSeqNo()); - assert request.versionType().validateVersionForWrites(request.version()); - replicaRequest = request; - response = new IndexResponse(primary.shardId(), request.type(), request.id(), indexResult.getSeqNo(), - indexResult.getVersion(), indexResult.isCreated()); - } else { - response = null; - replicaRequest = null; - } - return new WritePrimaryResult(replicaRequest, response, indexResult.getTranslogLocation(), indexResult.getFailure(), primary); - } - - @Override - protected WriteReplicaResult shardOperationOnReplica(IndexRequest request, IndexShard replica) throws Exception { - final Engine.IndexResult indexResult = executeIndexRequestOnReplica(request, replica); - return new WriteReplicaResult(request, indexResult.getTranslogLocation(), indexResult.getFailure(), replica); - } - - /** - * Execute the given {@link IndexRequest} on a replica shard, throwing a - * {@link RetryOnReplicaException} if the operation needs to be re-tried. 
- */ - public static Engine.IndexResult executeIndexRequestOnReplica(IndexRequest request, IndexShard replica) { - final ShardId shardId = replica.shardId(); - SourceToParse sourceToParse = SourceToParse.source(SourceToParse.Origin.REPLICA, shardId.getIndexName(), request.type(), request.id(), request.source()) - .routing(request.routing()).parent(request.parent()); - - final Engine.Index operation; - try { - operation = replica.prepareIndexOnReplica(sourceToParse, request.getSeqNo(), request.version(), request.versionType(), request.getAutoGeneratedTimestamp(), request.isRetry()); - } catch (MapperParsingException e) { - return new Engine.IndexResult(e, request.version(), request.getSeqNo()); - } - Mapping update = operation.parsedDoc().dynamicMappingsUpdate(); - if (update != null) { - throw new RetryOnReplicaException(shardId, "Mappings are not available on the replica yet, triggered update: " + update); - } - return replica.index(operation); - } - - /** Utility method to prepare an index operation on primary shards */ - static Engine.Index prepareIndexOperationOnPrimary(IndexRequest request, IndexShard primary) { - SourceToParse sourceToParse = SourceToParse.source(SourceToParse.Origin.PRIMARY, request.index(), request.type(), request.id(), request.source()) - .routing(request.routing()).parent(request.parent()); - return primary.prepareIndexOnPrimary(sourceToParse, request.version(), request.versionType(), request.getAutoGeneratedTimestamp(), request.isRetry()); - } - - public static Engine.IndexResult executeIndexRequestOnPrimary(IndexRequest request, IndexShard primary, - MappingUpdatedAction mappingUpdatedAction) throws Exception { - Engine.Index operation; - try { - operation = prepareIndexOperationOnPrimary(request, primary); - } catch (MapperParsingException | IllegalArgumentException e) { - return new Engine.IndexResult(e, request.version(), request.getSeqNo()); - } - Mapping update = operation.parsedDoc().dynamicMappingsUpdate(); - final ShardId shardId = primary.shardId(); - if (update != null) { - // can throw timeout exception when updating mappings or ISE for attempting to update default mappings - // which are bubbled up - try { - mappingUpdatedAction.updateMappingOnMaster(shardId.getIndex(), request.type(), update); - } catch (IllegalArgumentException e) { - // throws IAE on conflicts merging dynamic mappings - return new Engine.IndexResult(e, request.version(), request.getSeqNo()); - } - try { - operation = prepareIndexOperationOnPrimary(request, primary); - } catch (MapperParsingException | IllegalArgumentException e) { - return new Engine.IndexResult(e, request.version(), request.getSeqNo()); - } - update = operation.parsedDoc().dynamicMappingsUpdate(); - if (update != null) { - throw new ReplicationOperation.RetryOnPrimaryException(shardId, - "Dynamic mappings are not available on the node that holds the primary yet"); - } - } - - return primary.index(operation); - } - - private void processIngestIndexRequest(Task task, IndexRequest indexRequest, ActionListener listener) { - ingestService.getPipelineExecutionService().executeIndexRequest(indexRequest, t -> { - logger.error((Supplier) () -> new ParameterizedMessage("failed to execute pipeline [{}]", indexRequest.getPipeline()), t); - listener.onFailure(t); - }, success -> { - // TransportIndexAction uses IndexRequest and same action name on the node that receives the request and the node that - // processes the primary action. 
This could lead to a pipeline being executed twice for the same - // index request, hence we set the pipeline to null once its execution completed. - indexRequest.setPipeline(null); - doExecute(task, indexRequest, listener); - }); - } - } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java index 74ce894b05321..45cb83634f84f 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/DeletePipelineTransportAction.java @@ -30,7 +30,7 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.ingest.PipelineStore; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineResponse.java b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineResponse.java index 3b66a294a50cc..30843bdff9b28 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineResponse.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineResponse.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.ingest.PipelineConfiguration; import org.elasticsearch.rest.RestStatus; @@ -31,7 +31,7 @@ import java.util.ArrayList; import java.util.List; -public class GetPipelineResponse extends ActionResponse implements StatusToXContent { +public class GetPipelineResponse extends ActionResponse implements StatusToXContentObject { private List pipelines; @@ -76,9 +76,11 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); for (PipelineConfiguration pipeline : pipelines) { builder.field(pipeline.getId(), pipeline.getConfigAsMap()); } + builder.endObject(); return builder; } } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java index 8bac5c7b80434..f64b36d47aedb 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/GetPipelineTransportAction.java @@ -30,7 +30,7 @@ import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.ingest.PipelineStore; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequest.java b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequest.java index 10416146ba853..394349ca01691 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequest.java +++ 
b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequest.java @@ -19,32 +19,40 @@ package org.elasticsearch.action.ingest; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.support.master.AcknowledgedRequest; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentType; import java.io.IOException; import java.util.Objects; -import static org.elasticsearch.action.ValidateActions.addValidationError; - public class PutPipelineRequest extends AcknowledgedRequest { private String id; private BytesReference source; + private XContentType xContentType; + /** + * Create a new pipeline request + * @deprecated use {@link #PutPipelineRequest(String, BytesReference, XContentType)} to avoid content type auto-detection + */ + @Deprecated public PutPipelineRequest(String id, BytesReference source) { - if (id == null) { - throw new IllegalArgumentException("id is missing"); - } - if (source == null) { - throw new IllegalArgumentException("source is missing"); - } + this(id, source, XContentFactory.xContentType(source)); + } - this.id = id; - this.source = source; + /** + * Create a new pipeline request with the id and source along with the content type of the source + */ + public PutPipelineRequest(String id, BytesReference source, XContentType xContentType) { + this.id = Objects.requireNonNull(id); + this.source = Objects.requireNonNull(source); + this.xContentType = Objects.requireNonNull(xContentType); } PutPipelineRequest() { @@ -63,11 +71,20 @@ public BytesReference getSource() { return source; } + public XContentType getXContentType() { + return xContentType; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); id = in.readString(); source = in.readBytesReference(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType = XContentType.readFrom(in); + } else { + xContentType = XContentFactory.xContentType(source); + } } @Override @@ -75,5 +92,8 @@ public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); out.writeString(id); out.writeBytesReference(source); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType.writeTo(out); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequestBuilder.java index bd927115fb5ff..c03b3b84f8b5b 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineRequestBuilder.java @@ -22,6 +22,7 @@ import org.elasticsearch.action.ActionRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.XContentType; public class PutPipelineRequestBuilder extends ActionRequestBuilder { @@ -29,8 +30,13 @@ public PutPipelineRequestBuilder(ElasticsearchClient client, PutPipelineAction a super(client, action, new PutPipelineRequest()); } + @Deprecated public PutPipelineRequestBuilder(ElasticsearchClient client, PutPipelineAction action, String id, BytesReference source) { super(client, action, new PutPipelineRequest(id, source)); } + public 
PutPipelineRequestBuilder(ElasticsearchClient client, PutPipelineAction action, String id, BytesReference source, + XContentType xContentType) { + super(client, action, new PutPipelineRequest(id, source, xContentType)); + } } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java index 82cd8d8eb7b32..7dde981804939 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/PutPipelineTransportAction.java @@ -36,7 +36,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.ingest.PipelineStore; import org.elasticsearch.ingest.IngestInfo; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulateDocumentBaseResult.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulateDocumentBaseResult.java index 82b39ac897242..c6252feea276c 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulateDocumentBaseResult.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulateDocumentBaseResult.java @@ -84,7 +84,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (failure == null) { ingestDocument.toXContent(builder, params); } else { - ElasticsearchException.renderException(builder, params, failure); + ElasticsearchException.generateFailureXContent(builder, params, failure, true); } builder.endObject(); return builder; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java index ef7b5e3d5bbed..30beb32681aea 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequest.java @@ -19,11 +19,14 @@ package org.elasticsearch.action.ingest; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionRequest; import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.common.bytes.BytesReference; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.XContentFactory; +import org.elasticsearch.common.xcontent.XContentType; import org.elasticsearch.ingest.ConfigurationUtils; import org.elasticsearch.ingest.IngestDocument; import org.elasticsearch.ingest.Pipeline; @@ -34,6 +37,7 @@ import java.util.Collections; import java.util.List; import java.util.Map; +import java.util.Objects; import static org.elasticsearch.ingest.IngestDocument.MetaData; @@ -42,12 +46,23 @@ public class SimulatePipelineRequest extends ActionRequest { private String id; private boolean verbose; private BytesReference source; + private XContentType xContentType; + /** + * Create a new request + * @deprecated use {@link #SimulatePipelineRequest(BytesReference, XContentType)} that does not attempt content autodetection + */ + @Deprecated public SimulatePipelineRequest(BytesReference source) { - if (source == null) { - throw new IllegalArgumentException("source is missing"); - } - this.source = source; + this(source, XContentFactory.xContentType(source)); + } + + 
/** + * Creates a new request with the given source and its content type + */ + public SimulatePipelineRequest(BytesReference source, XContentType xContentType) { + this.source = Objects.requireNonNull(source); + this.xContentType = Objects.requireNonNull(xContentType); } SimulatePipelineRequest() { @@ -78,12 +93,21 @@ public BytesReference getSource() { return source; } + public XContentType getXContentType() { + return xContentType; + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); id = in.readOptionalString(); verbose = in.readBoolean(); source = in.readBytesReference(); + if (in.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType = XContentType.readFrom(in); + } else { + xContentType = XContentFactory.xContentType(source); + } } @Override @@ -92,6 +116,9 @@ public void writeTo(StreamOutput out) throws IOException { out.writeOptionalString(id); out.writeBoolean(verbose); out.writeBytesReference(source); + if (out.getVersion().onOrAfter(Version.V_5_3_0)) { + xContentType.writeTo(out); + } } public static final class Fields { @@ -135,18 +162,18 @@ static Parsed parseWithPipelineId(String pipelineId, Map config, if (pipeline == null) { throw new IllegalArgumentException("pipeline [" + pipelineId + "] does not exist"); } - List ingestDocumentList = parseDocs(config); + List ingestDocumentList = parseDocs(config, pipelineStore.isNewIngestDateFormat()); return new Parsed(pipeline, ingestDocumentList, verbose); } static Parsed parse(Map config, boolean verbose, PipelineStore pipelineStore) throws Exception { Map pipelineConfig = ConfigurationUtils.readMap(null, null, config, Fields.PIPELINE); Pipeline pipeline = PIPELINE_FACTORY.create(SIMULATED_PIPELINE_ID, pipelineConfig, pipelineStore.getProcessorFactories()); - List ingestDocumentList = parseDocs(config); + List ingestDocumentList = parseDocs(config, pipelineStore.isNewIngestDateFormat()); return new Parsed(pipeline, ingestDocumentList, verbose); } - private static List parseDocs(Map config) { + private static List parseDocs(Map config, boolean newDateFormat) { List> docs = ConfigurationUtils.readList(null, null, config, Fields.DOCS); List ingestDocumentList = new ArrayList<>(); for (Map dataMap : docs) { @@ -156,7 +183,7 @@ private static List parseDocs(Map config) { ConfigurationUtils.readStringProperty(null, null, dataMap, MetaData.ID.getFieldName(), "_id"), ConfigurationUtils.readOptionalStringProperty(null, null, dataMap, MetaData.ROUTING.getFieldName()), ConfigurationUtils.readOptionalStringProperty(null, null, dataMap, MetaData.PARENT.getFieldName()), - document); + document, newDateFormat); ingestDocumentList.add(ingestDocument); } return ingestDocumentList; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequestBuilder.java index 4a13fa111e6a2..bb5d0e4e40003 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineRequestBuilder.java @@ -22,22 +22,46 @@ import org.elasticsearch.action.ActionRequestBuilder; import org.elasticsearch.client.ElasticsearchClient; import org.elasticsearch.common.bytes.BytesReference; +import org.elasticsearch.common.xcontent.XContentType; public class SimulatePipelineRequestBuilder extends ActionRequestBuilder { + /** + * Create a new builder for {@link SimulatePipelineRequest}s + */ public 
SimulatePipelineRequestBuilder(ElasticsearchClient client, SimulatePipelineAction action) { super(client, action, new SimulatePipelineRequest()); } + /** + * Create a new builder for {@link SimulatePipelineRequest}s + * @deprecated use {@link #SimulatePipelineRequestBuilder(ElasticsearchClient, SimulatePipelineAction, BytesReference, XContentType)} to + * avoid content type auto-detection on the source bytes + */ + @Deprecated public SimulatePipelineRequestBuilder(ElasticsearchClient client, SimulatePipelineAction action, BytesReference source) { super(client, action, new SimulatePipelineRequest(source)); } + /** + * Create a new builder for {@link SimulatePipelineRequest}s + */ + public SimulatePipelineRequestBuilder(ElasticsearchClient client, SimulatePipelineAction action, BytesReference source, + XContentType xContentType) { + super(client, action, new SimulatePipelineRequest(source, xContentType)); + } + + /** + * Set the id for the pipeline to simulate + */ public SimulatePipelineRequestBuilder setId(String id) { request.setId(id); return this; } + /** + * Enable or disable verbose mode + */ public SimulatePipelineRequestBuilder setVerbose(boolean verbose) { request.setVerbose(verbose); return this; diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineResponse.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineResponse.java index 83029a1aab502..e9ea1a7750738 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineResponse.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineResponse.java @@ -22,7 +22,7 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; @@ -30,7 +30,7 @@ import java.util.Collections; import java.util.List; -public class SimulatePipelineResponse extends ActionResponse implements ToXContent { +public class SimulatePipelineResponse extends ActionResponse implements ToXContentObject { private String pipelineId; private boolean verbose; private List results; @@ -88,11 +88,13 @@ public void readFrom(StreamInput in) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.DOCUMENTS); for (SimulateDocumentResult response : results) { response.toXContent(builder, params); } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java index 4f9a219c8ad9e..3f67007df690d 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulatePipelineTransportAction.java @@ -19,7 +19,6 @@ package org.elasticsearch.action.ingest; -import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.HandledTransportAction; @@ -28,7 +27,7 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.xcontent.XContentHelper; import 
org.elasticsearch.ingest.PipelineStore; -import org.elasticsearch.node.service.NodeService; +import org.elasticsearch.node.NodeService; import org.elasticsearch.threadpool.ThreadPool; import org.elasticsearch.transport.TransportService; @@ -48,7 +47,7 @@ public SimulatePipelineTransportAction(Settings settings, ThreadPool threadPool, @Override protected void doExecute(SimulatePipelineRequest request, ActionListener listener) { - final Map source = XContentHelper.convertToMap(request.getSource(), false).v2(); + final Map source = XContentHelper.convertToMap(request.getSource(), false, request.getXContentType()).v2(); final SimulatePipelineRequest.Parsed simulateRequest; try { diff --git a/core/src/main/java/org/elasticsearch/action/ingest/SimulateProcessorResult.java b/core/src/main/java/org/elasticsearch/action/ingest/SimulateProcessorResult.java index c978cc56d9ef1..3ebcb6cb6f373 100644 --- a/core/src/main/java/org/elasticsearch/action/ingest/SimulateProcessorResult.java +++ b/core/src/main/java/org/elasticsearch/action/ingest/SimulateProcessorResult.java @@ -19,7 +19,6 @@ package org.elasticsearch.action.ingest; import org.elasticsearch.ElasticsearchException; -import org.elasticsearch.ElasticsearchParseException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Writeable; @@ -99,10 +98,10 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (failure != null && ingestDocument != null) { builder.startObject("ignored_error"); - ElasticsearchException.renderException(builder, params, failure); + ElasticsearchException.generateFailureXContent(builder, params, failure, true); builder.endObject(); } else if (failure != null) { - ElasticsearchException.renderException(builder, params, failure); + ElasticsearchException.generateFailureXContent(builder, params, failure, true); } if (ingestDocument != null) { diff --git a/core/src/main/java/org/elasticsearch/action/main/MainResponse.java b/core/src/main/java/org/elasticsearch/action/main/MainResponse.java index c156dcfc98f50..39d4f31a1939a 100644 --- a/core/src/main/java/org/elasticsearch/action/main/MainResponse.java +++ b/core/src/main/java/org/elasticsearch/action/main/MainResponse.java @@ -23,14 +23,18 @@ import org.elasticsearch.Version; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.cluster.ClusterName; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; +import java.util.Objects; -public class MainResponse extends ActionResponse implements ToXContent { +public class MainResponse extends ActionResponse implements ToXContentObject { private String nodeName; private Version version; @@ -114,4 +118,46 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.endObject(); return builder; } + + private static final ObjectParser PARSER = new ObjectParser<>(MainResponse.class.getName(), true, + () -> new MainResponse()); + + static { + PARSER.declareString((response, value) -> response.nodeName = value, new ParseField("name")); + 
PARSER.declareString((response, value) -> response.clusterName = new ClusterName(value), new ParseField("cluster_name")); + PARSER.declareString((response, value) -> response.clusterUuid = value, new ParseField("cluster_uuid")); + PARSER.declareString((response, value) -> {}, new ParseField("tagline")); + PARSER.declareObject((response, value) -> { + response.build = new Build((String) value.get("build_hash"), (String) value.get("build_date"), + (boolean) value.get("build_snapshot")); + response.version = Version.fromString((String) value.get("number")); + response.available = true; + }, (parser, context) -> parser.map(), new ParseField("version")); + } + + public static MainResponse fromXContent(XContentParser parser) { + return PARSER.apply(parser, null); + } + + @Override + public boolean equals(Object o) { + if (this == o) { + return true; + } + if (o == null || getClass() != o.getClass()) { + return false; + } + MainResponse other = (MainResponse) o; + return Objects.equals(nodeName, other.nodeName) && + Objects.equals(version, other.version) && + Objects.equals(clusterUuid, other.clusterUuid) && + Objects.equals(build, other.build) && + Objects.equals(available, other.available) && + Objects.equals(clusterName, other.clusterName); + } + + @Override + public int hashCode() { + return Objects.hash(nodeName, version, clusterUuid, build, clusterName, available); + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/AbstractAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/AbstractAsyncAction.java deleted file mode 100644 index 96db19d547269..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/search/AbstractAsyncAction.java +++ /dev/null @@ -1,52 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.action.search; - -/** - * Base implementation for an async action. - */ -abstract class AbstractAsyncAction { - - private final long startTime; - - protected AbstractAsyncAction() { this(System.currentTimeMillis());} - - protected AbstractAsyncAction(long startTime) { - this.startTime = startTime; - } - - /** - * Return the time when the action started. - */ - protected final long startTime() { - return startTime; - } - - /** - * Builds how long it took to execute the search. 
- */ - protected final long buildTookInMillis() { - // protect ourselves against time going backwards - // negative values don't make sense and we want to be able to serialize that thing as a vLong - return Math.max(1, System.currentTimeMillis() - startTime); - } - - abstract void start(); -} diff --git a/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java index 2479ff86750b9..d5ee044782bcc 100644 --- a/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/AbstractSearchAsyncAction.java @@ -19,261 +19,166 @@ package org.elasticsearch.action.search; -import com.carrotsearch.hppc.IntArrayList; import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; import org.apache.logging.log4j.util.Supplier; -import org.apache.lucene.search.ScoreDoc; +import org.apache.lucene.util.SetOnce; +import org.elasticsearch.ElasticsearchException; +import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.NoShardAvailableActionException; +import org.elasticsearch.action.ShardOperationFailedException; import org.elasticsearch.action.support.TransportActions; -import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.GroupShardsIterator; -import org.elasticsearch.cluster.routing.ShardIterator; -import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.search.fetch.ShardFetchSearchRequest; import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.search.internal.InternalSearchResponse; import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.query.QuerySearchResultProvider; +import org.elasticsearch.transport.Transport; import java.util.List; import java.util.Map; import java.util.concurrent.Executor; +import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; -import java.util.function.Function; +import java.util.function.BiFunction; +import java.util.stream.Collectors; - -abstract class AbstractSearchAsyncAction extends AbstractAsyncAction { +abstract class AbstractSearchAsyncAction extends InitialSearchPhase + implements SearchPhaseContext { private static final float DEFAULT_INDEX_BOOST = 1.0f; - - protected final Logger logger; - protected final SearchTransportService searchTransportService; + private final Logger logger; + private final SearchTransportService searchTransportService; private final Executor executor; - protected final ActionListener listener; - private final GroupShardsIterator shardsIts; - protected final SearchRequest request; - /** Used by subclasses to resolve node ids to DiscoveryNodes. 
**/ - protected final Function nodeIdToDiscoveryNode; - protected final SearchTask task; - protected final int expectedSuccessfulOps; - private final int expectedTotalOps; - protected final AtomicInteger successfulOps = new AtomicInteger(); - private final AtomicInteger totalOps = new AtomicInteger(); - protected final AtomicArray firstResults; + private final ActionListener listener; + private final SearchRequest request; + /** + * Used by subclasses to resolve node ids to DiscoveryNodes. + **/ + private final BiFunction nodeIdToConnection; + private final SearchTask task; + private final SearchPhaseResults results; + private final long clusterStateVersion; private final Map aliasFilter; private final Map concreteIndexBoosts; - private final long clusterStateVersion; - private volatile AtomicArray shardFailures; + private final SetOnce> shardFailures = new SetOnce<>(); private final Object shardFailuresMutex = new Object(); - protected volatile ScoreDoc[] sortedShardDocs; + private final AtomicInteger successfulOps = new AtomicInteger(); + private final TransportSearchAction.SearchTimeProvider timeProvider; + - protected AbstractSearchAsyncAction(Logger logger, SearchTransportService searchTransportService, - Function nodeIdToDiscoveryNode, + protected AbstractSearchAsyncAction(String name, Logger logger, SearchTransportService searchTransportService, + BiFunction nodeIdToConnection, Map aliasFilter, Map concreteIndexBoosts, - Executor executor, SearchRequest request, ActionListener listener, - GroupShardsIterator shardsIts, long startTime, long clusterStateVersion, SearchTask task) { - super(startTime); + Executor executor, SearchRequest request, + ActionListener listener, GroupShardsIterator shardsIts, + TransportSearchAction.SearchTimeProvider timeProvider, long clusterStateVersion, + SearchTask task, SearchPhaseResults resultConsumer) { + super(name, request, shardsIts, logger); + this.timeProvider = timeProvider; this.logger = logger; this.searchTransportService = searchTransportService; this.executor = executor; this.request = request; this.task = task; this.listener = listener; - this.nodeIdToDiscoveryNode = nodeIdToDiscoveryNode; + this.nodeIdToConnection = nodeIdToConnection; this.clusterStateVersion = clusterStateVersion; - this.shardsIts = shardsIts; - expectedSuccessfulOps = shardsIts.size(); - // we need to add 1 for non active partition, since we count it in the total! - expectedTotalOps = shardsIts.totalSizeWith1ForEmpty(); - firstResults = new AtomicArray<>(shardsIts.size()); - this.aliasFilter = aliasFilter; this.concreteIndexBoosts = concreteIndexBoosts; + this.aliasFilter = aliasFilter; + this.results = resultConsumer; } - public void start() { - if (expectedSuccessfulOps == 0) { + /** + * Builds how long it took to execute the search. + */ + long buildTookInMillis() { + return TimeUnit.NANOSECONDS.toMillis( + timeProvider.getRelativeCurrentNanos() - timeProvider.getRelativeStartNanos()); + } + + /** + * This is the main entry point for a search. This method starts the search execution of the initial phase. 
+ */ + public final void start() { + if (getNumShards() == 0) { //no search shards to search on, bail with empty response //(it happens with search across _all with no indices around and consistent with broadcast operations) listener.onResponse(new SearchResponse(InternalSearchResponse.empty(), null, 0, 0, buildTookInMillis(), ShardSearchFailure.EMPTY_ARRAY)); return; } - int shardIndex = -1; - for (final ShardIterator shardIt : shardsIts) { - shardIndex++; - final ShardRouting shard = shardIt.nextOrNull(); - if (shard != null) { - performFirstPhase(shardIndex, shardIt, shard); - } else { - // really, no shards active in this group - onFirstPhaseResult(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); - } - } + executePhase(this); } - void performFirstPhase(final int shardIndex, final ShardIterator shardIt, final ShardRouting shard) { - if (shard == null) { - // no more active shards... (we should not really get here, but just for safety) - onFirstPhaseResult(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); - } else { - final DiscoveryNode node = nodeIdToDiscoveryNode.apply(shard.currentNodeId()); - if (node == null) { - onFirstPhaseResult(shardIndex, shard, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); - } else { - AliasFilter filter = this.aliasFilter.get(shard.index().getUUID()); - assert filter != null; - - float indexBoost = concreteIndexBoosts.getOrDefault(shard.index().getUUID(), DEFAULT_INDEX_BOOST); - ShardSearchTransportRequest transportRequest = new ShardSearchTransportRequest(request, shardIt.shardId(), shardsIts.size(), - filter, indexBoost, startTime()); - sendExecuteFirstPhase(node, transportRequest , new ActionListener() { - @Override - public void onResponse(FirstResult result) { - onFirstPhaseResult(shardIndex, shard.currentNodeId(), result, shardIt); - } - - @Override - public void onFailure(Exception t) { - onFirstPhaseResult(shardIndex, shard, node.getId(), shardIt, t); - } - }); + @Override + public final void executeNextPhase(SearchPhase currentPhase, SearchPhase nextPhase) { + /* This is the main search phase transition where we move to the next phase. At this point we check if there is + * at least one successful operation left and if so we move to the next phase. If not we immediately fail the + * search phase as "all shards failed"*/ + if (successfulOps.get() == 0) { // we have 0 successful results that means we shortcut stuff and return a failure + if (logger.isDebugEnabled()) { + final ShardOperationFailedException[] shardSearchFailures = ExceptionsHelper.groupBy(buildShardFailures()); + Throwable cause = shardSearchFailures.length == 0 ? null : + ElasticsearchException.guessRootCauses(shardSearchFailures[0].getCause())[0]; + logger.debug((Supplier) () -> new ParameterizedMessage("All shards failed for phase: [{}]", getName()), + cause); } - } - } - - private void onFirstPhaseResult(int shardIndex, String nodeId, FirstResult result, ShardIterator shardIt) { - result.shardTarget(new SearchShardTarget(nodeId, shardIt.shardId())); - processFirstPhaseResult(shardIndex, result); - // we need to increment successful ops first before we compare the exit condition otherwise if we - // are fast we could concurrently update totalOps but then preempt one of the threads which can - // cause the successor to read a wrong value from successfulOps if second phase is very fast ie. count etc. 
- successfulOps.incrementAndGet(); - // increment all the "future" shards to update the total ops since we some may work and some may not... - // and when that happens, we break on total ops, so we must maintain them - final int xTotalOps = totalOps.addAndGet(shardIt.remaining() + 1); - if (xTotalOps == expectedTotalOps) { - try { - innerMoveToSecondPhase(); - } catch (Exception e) { - if (logger.isDebugEnabled()) { - logger.debug( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}] while moving to second phase", - shardIt.shardId(), - request), - e); - } - raiseEarlyFailure(new ReduceSearchPhaseException(firstPhaseName(), "", e, buildShardFailures())); + onPhaseFailure(currentPhase, "all shards failed", null); + } else { + if (logger.isTraceEnabled()) { + final String resultsFrom = results.getSuccessfulResults() + .map(r -> r.getSearchShardTarget().toString()).collect(Collectors.joining(",")); + logger.trace("[{}] Moving to next phase: [{}], based on results from: {} (cluster state version: {})", + currentPhase.getName(), nextPhase.getName(), resultsFrom, clusterStateVersion); } - } else if (xTotalOps > expectedTotalOps) { - raiseEarlyFailure(new IllegalStateException("unexpected higher total ops [" + xTotalOps + "] compared " + - "to expected [" + expectedTotalOps + "]")); + executePhase(nextPhase); } } - private void onFirstPhaseResult(final int shardIndex, @Nullable ShardRouting shard, @Nullable String nodeId, - final ShardIterator shardIt, Exception e) { - // we always add the shard failure for a specific shard instance - // we do make sure to clean it on a successful response from a shard - SearchShardTarget shardTarget = new SearchShardTarget(nodeId, shardIt.shardId()); - addShardFailure(shardIndex, shardTarget, e); - - if (totalOps.incrementAndGet() == expectedTotalOps) { + private void executePhase(SearchPhase phase) { + try { + phase.run(); + } catch (Exception e) { if (logger.isDebugEnabled()) { - if (e != null && !TransportActions.isShardNotAvailableException(e)) { - logger.debug( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}]", - shard != null ? shard.shortSummary() : - shardIt.shardId(), - request), - e); - } else if (logger.isTraceEnabled()) { - logger.trace((Supplier) () -> new ParameterizedMessage("{}: Failed to execute [{}]", shard, request), e); - } - } - final ShardSearchFailure[] shardSearchFailures = buildShardFailures(); - if (successfulOps.get() == 0) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("All shards failed for phase: [{}]", firstPhaseName()), e); - } - - // no successful ops, raise an exception - raiseEarlyFailure(new SearchPhaseExecutionException(firstPhaseName(), "all shards failed", e, shardSearchFailures)); - } else { - try { - innerMoveToSecondPhase(); - } catch (Exception inner) { - inner.addSuppressed(e); - raiseEarlyFailure(new ReduceSearchPhaseException(firstPhaseName(), "", inner, shardSearchFailures)); - } - } - } else { - final ShardRouting nextShard = shardIt.nextOrNull(); - final boolean lastShard = nextShard == null; - // trace log this exception - logger.trace( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}] lastShard [{}]", - shard != null ? 
shard.shortSummary() : shardIt.shardId(), - request, - lastShard), - e); - if (!lastShard) { - try { - performFirstPhase(shardIndex, shardIt, nextShard); - } catch (Exception inner) { - inner.addSuppressed(e); - onFirstPhaseResult(shardIndex, shard, shard.currentNodeId(), shardIt, inner); - } - } else { - // no more shards active, add a failure - if (logger.isDebugEnabled() && !logger.isTraceEnabled()) { // do not double log this exception - if (e != null && !TransportActions.isShardNotAvailableException(e)) { - logger.debug( - (Supplier) () -> new ParameterizedMessage( - "{}: Failed to execute [{}] lastShard [{}]", - shard != null ? shard.shortSummary() : - shardIt.shardId(), - request, - lastShard), - e); - } - } + logger.debug( + (Supplier) () -> new ParameterizedMessage( + "Failed to execute [{}] while moving to [{}] phase", request, phase.getName()), + e); } + onPhaseFailure(phase, "", e); } } - protected final ShardSearchFailure[] buildShardFailures() { - AtomicArray shardFailures = this.shardFailures; + + private ShardSearchFailure[] buildShardFailures() { + AtomicArray shardFailures = this.shardFailures.get(); if (shardFailures == null) { return ShardSearchFailure.EMPTY_ARRAY; } - List> entries = shardFailures.asList(); + List entries = shardFailures.asList(); ShardSearchFailure[] failures = new ShardSearchFailure[entries.size()]; for (int i = 0; i < failures.length; i++) { - failures[i] = entries.get(i).value; + failures[i] = entries.get(i); } return failures; } - protected final void addShardFailure(final int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e) { + public final void onShardFailure(final int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e) { // we don't aggregate shard failures on non active shards (but do keep the header counts right) if (TransportActions.isShardNotAvailableException(e)) { return; } - + AtomicArray shardFailures = this.shardFailures.get(); // lazily create shard failures, so we can early build the empty shard failure list in most cases (no failures) - if (shardFailures == null) { + if (shardFailures == null) { // this is double checked locking but it's fine since SetOnce uses a volatile read internally synchronized (shardFailuresMutex) { - if (shardFailures == null) { - shardFailures = new AtomicArray<>(shardsIts.size()); + shardFailures = this.shardFailures.get(); // read again otherwise somebody else has created it? + if (shardFailures == null) { // still null so we are the first and create a new instance + shardFailures = new AtomicArray<>(getNumShards()); + this.shardFailures.set(shardFailures); } } } @@ -287,105 +192,123 @@ protected final void addShardFailure(final int shardIndex, @Nullable SearchShard shardFailures.set(shardIndex, new ShardSearchFailure(e, shardTarget)); } } + + if (results.hasResult(shardIndex)) { + assert failure == null : "shard failed before but shouldn't: " + failure; + successfulOps.decrementAndGet(); // if this shard was successful before (initial phase) we have to adjust the counter + } } - private void raiseEarlyFailure(Exception e) { - for (AtomicArray.Entry entry : firstResults.asList()) { + /** + * This method should be called if a search phase failed to ensure all relevant search contexts and resources are released. + * this method will also notify the listener and sends back a failure to the user. 
+ * + * @param exception the exception explaining or causing the phase failure + */ + private void raisePhaseFailure(SearchPhaseExecutionException exception) { + results.getSuccessfulResults().forEach((entry) -> { try { - DiscoveryNode node = nodeIdToDiscoveryNode.apply(entry.value.shardTarget().nodeId()); - sendReleaseSearchContext(entry.value.id(), node); + SearchShardTarget searchShardTarget = entry.getSearchShardTarget(); + Transport.Connection connection = getConnection(null, searchShardTarget.getNodeId()); + sendReleaseSearchContext(entry.getRequestId(), connection, searchShardTarget.getOriginalIndices()); } catch (Exception inner) { - inner.addSuppressed(e); + inner.addSuppressed(exception); logger.trace("failed to release context", inner); } - } - listener.onFailure(e); + }); + listener.onFailure(exception); } - /** - * Releases shard targets that are not used in the docsIdsToLoad. - */ - protected void releaseIrrelevantSearchContexts(AtomicArray queryResults, - AtomicArray docIdsToLoad) { - if (docIdsToLoad == null) { - return; + @Override + public final void onShardSuccess(Result result) { + successfulOps.incrementAndGet(); + results.consumeResult(result); + if (logger.isTraceEnabled()) { + logger.trace("got first-phase result from {}", result != null ? result.getSearchShardTarget() : null); } - // we only release search context that we did not fetch from if we are not scrolling - if (request.scroll() == null) { - for (AtomicArray.Entry entry : queryResults.asList()) { - QuerySearchResult queryResult = entry.value.queryResult(); - if (queryResult.hasHits() - && docIdsToLoad.get(entry.index) == null) { // but none of them made it to the global top docs - try { - DiscoveryNode node = nodeIdToDiscoveryNode.apply(entry.value.queryResult().shardTarget().nodeId()); - sendReleaseSearchContext(entry.value.queryResult().id(), node); - } catch (Exception e) { - logger.trace("failed to release context", e); - } - } - } + // clean a previous error on this shard group (note, this code will be serialized on the same shardIndex value level + // so its ok concurrency wise to miss potentially the shard failures being created because of another failure + // in the #addShardFailure, because by definition, it will happen on *another* shardIndex + AtomicArray shardFailures = this.shardFailures.get(); + if (shardFailures != null) { + shardFailures.set(result.getShardIndex(), null); } } - protected void sendReleaseSearchContext(long contextId, DiscoveryNode node) { - if (node != null) { - searchTransportService.sendFreeContext(node, contextId, request); - } + @Override + public final void onPhaseDone() { + executeNextPhase(this, getNextPhase(results, this)); } - protected ShardFetchSearchRequest createFetchRequest(QuerySearchResult queryResult, AtomicArray.Entry entry, - ScoreDoc[] lastEmittedDocPerShard) { - final ScoreDoc lastEmittedDoc = (lastEmittedDocPerShard != null) ? 
lastEmittedDocPerShard[entry.index] : null; - return new ShardFetchSearchRequest(request, queryResult.id(), entry.value, lastEmittedDoc); + @Override + public final int getNumShards() { + return results.getNumShards(); } - protected abstract void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener); + @Override + public final Logger getLogger() { + return logger; + } - protected final void processFirstPhaseResult(int shardIndex, FirstResult result) { - firstResults.set(shardIndex, result); + @Override + public final SearchTask getTask() { + return task; + } - if (logger.isTraceEnabled()) { - logger.trace("got first-phase result from {}", result != null ? result.shardTarget() : null); - } + @Override + public final SearchRequest getRequest() { + return request; + } - // clean a previous error on this shard group (note, this code will be serialized on the same shardIndex value level - // so its ok concurrency wise to miss potentially the shard failures being created because of another failure - // in the #addShardFailure, because by definition, it will happen on *another* shardIndex - AtomicArray shardFailures = this.shardFailures; - if (shardFailures != null) { - shardFailures.set(shardIndex, null); - } + @Override + public final SearchResponse buildSearchResponse(InternalSearchResponse internalSearchResponse, String scrollId) { + return new SearchResponse(internalSearchResponse, scrollId, getNumShards(), successfulOps.get(), + buildTookInMillis(), buildShardFailures()); } - final void innerMoveToSecondPhase() throws Exception { - if (logger.isTraceEnabled()) { - StringBuilder sb = new StringBuilder(); - boolean hadOne = false; - for (int i = 0; i < firstResults.length(); i++) { - FirstResult result = firstResults.get(i); - if (result == null) { - continue; // failure - } - if (hadOne) { - sb.append(","); - } else { - hadOne = true; - } - sb.append(result.shardTarget()); - } + @Override + public final void onPhaseFailure(SearchPhase phase, String msg, Throwable cause) { + raisePhaseFailure(new SearchPhaseExecutionException(phase.getName(), msg, cause, buildShardFailures())); + } - logger.trace("Moving to second phase, based on results from: {} (cluster state version: {})", sb, clusterStateVersion); - } - moveToSecondPhase(); + @Override + public final Transport.Connection getConnection(String clusterAlias, String nodeId) { + return nodeIdToConnection.apply(clusterAlias, nodeId); + } + + @Override + public final SearchTransportService getSearchTransport() { + return searchTransportService; + } + + @Override + public final void execute(Runnable command) { + executor.execute(command); } - protected abstract void moveToSecondPhase() throws Exception; + @Override + public final void onResponse(SearchResponse response) { + listener.onResponse(response); + } - protected abstract String firstPhaseName(); + @Override + public final void onFailure(Exception e) { + listener.onFailure(e); + } - protected Executor getExecutor() { - return executor; + public final ShardSearchTransportRequest buildShardSearchRequest(SearchShardIterator shardIt) { + AliasFilter filter = aliasFilter.get(shardIt.shardId().getIndex().getUUID()); + assert filter != null; + float indexBoost = concreteIndexBoosts.getOrDefault(shardIt.shardId().getIndex().getUUID(), DEFAULT_INDEX_BOOST); + return new ShardSearchTransportRequest(shardIt.getOriginalIndices(), request, shardIt.shardId(), getNumShards(), + filter, indexBoost, timeProvider.getAbsoluteStartMillis()); } + /** + * 
Returns the next phase based on the results of the initial search phase + * @param results the results of the initial search phase. Each non null element in the result array represent a successfully + * executed shard request + * @param context the search context for the next phase + */ + protected abstract SearchPhase getNextPhase(SearchPhaseResults results, SearchPhaseContext context); } diff --git a/core/src/main/java/org/elasticsearch/action/search/ClearScrollController.java b/core/src/main/java/org/elasticsearch/action/search/ClearScrollController.java new file mode 100644 index 0000000000000..d94fe1a2bbe6b --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/ClearScrollController.java @@ -0,0 +1,141 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.transport.Transport; +import org.elasticsearch.transport.TransportResponse; + +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.atomic.AtomicBoolean; +import java.util.concurrent.atomic.AtomicInteger; + +import static org.elasticsearch.action.search.TransportSearchHelper.parseScrollId; + +final class ClearScrollController implements Runnable { + private final DiscoveryNodes nodes; + private final SearchTransportService searchTransportService; + private final CountDown expectedOps; + private final ActionListener listener; + private final AtomicBoolean hasFailed = new AtomicBoolean(false); + private final AtomicInteger freedSearchContexts = new AtomicInteger(0); + private final Logger logger; + private final Runnable runner; + + ClearScrollController(ClearScrollRequest request, ActionListener listener, DiscoveryNodes nodes, Logger logger, + SearchTransportService searchTransportService) { + this.nodes = nodes; + this.logger = logger; + this.searchTransportService = searchTransportService; + this.listener = listener; + List scrollIds = request.getScrollIds(); + final int expectedOps; + if (scrollIds.size() == 1 && "_all".equals(scrollIds.get(0))) { + expectedOps = nodes.getSize(); + runner = this::cleanAllScrolls; + } else { + List parsedScrollIds = new ArrayList<>(); + for (String parsedScrollId : request.getScrollIds()) { + ScrollIdForNode[] context = parseScrollId(parsedScrollId).getContext(); + for (ScrollIdForNode id : context) { + parsedScrollIds.add(id); + } + } + if (parsedScrollIds.isEmpty()) { + expectedOps = 
0; + runner = () -> listener.onResponse(new ClearScrollResponse(true, 0)); + } else { + expectedOps = parsedScrollIds.size(); + runner = () -> cleanScrollIds(parsedScrollIds); + } + } + this.expectedOps = new CountDown(expectedOps); + + } + + @Override + public void run() { + runner.run(); + } + + void cleanAllScrolls() { + for (final DiscoveryNode node : nodes) { + try { + Transport.Connection connection = searchTransportService.getConnection(null, node); + searchTransportService.sendClearAllScrollContexts(connection, new ActionListener() { + @Override + public void onResponse(TransportResponse response) { + onFreedContext(true); + } + + @Override + public void onFailure(Exception e) { + onFailedFreedContext(e, node); + } + }); + } catch (Exception e) { + onFailedFreedContext(e, node); + } + } + } + + void cleanScrollIds(List parsedScrollIds) { + for (ScrollIdForNode target : parsedScrollIds) { + final DiscoveryNode node = nodes.get(target.getNode()); + if (node == null) { + onFreedContext(false); + } else { + try { + Transport.Connection connection = searchTransportService.getConnection(null, node); + searchTransportService.sendFreeContext(connection, target.getScrollId(), + ActionListener.wrap(freed -> onFreedContext(freed.isFreed()), + e -> onFailedFreedContext(e, node))); + } catch (Exception e) { + onFailedFreedContext(e, node); + } + } + } + } + + private void onFreedContext(boolean freed) { + if (freed) { + freedSearchContexts.incrementAndGet(); + } + if (expectedOps.countDown()) { + boolean succeeded = hasFailed.get() == false; + listener.onResponse(new ClearScrollResponse(succeeded, freedSearchContexts.get())); + } + } + + private void onFailedFreedContext(Throwable e, DiscoveryNode node) { + logger.warn((Supplier) () -> new ParameterizedMessage("Clear SC failed on node[{}]", node), e); + if (expectedOps.countDown()) { + listener.onResponse(new ClearScrollResponse(false, freedSearchContexts.get())); + } else { + hasFailed.set(true); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/ClearScrollRequest.java b/core/src/main/java/org/elasticsearch/action/search/ClearScrollRequest.java index 23c5c3747fbf4..4770818867c84 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ClearScrollRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/ClearScrollRequest.java @@ -23,6 +23,9 @@ import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import java.io.IOException; import java.util.ArrayList; @@ -31,7 +34,7 @@ import static org.elasticsearch.action.ValidateActions.addValidationError; -public class ClearScrollRequest extends ActionRequest { +public class ClearScrollRequest extends ActionRequest implements ToXContentObject { private List scrollIds; @@ -83,4 +86,47 @@ public void writeTo(StreamOutput out) throws IOException { } } + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.startArray("scroll_id"); + for (String scrollId : scrollIds) { + builder.value(scrollId); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + public void fromXContent(XContentParser parser) throws IOException { + scrollIds = null; + if (parser.nextToken() != 
XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException("Malformed content, must start with an object"); + } else { + XContentParser.Token token; + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if ("scroll_id".equals(currentFieldName)){ + if (token == XContentParser.Token.START_ARRAY) { + while ((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + if (token.isValue() == false) { + throw new IllegalArgumentException("scroll_id array element should only contain scroll_id"); + } + addScrollId(parser.text()); + } + } else { + if (token.isValue() == false) { + throw new IllegalArgumentException("scroll_id element should only contain scroll_id"); + } + addScrollId(parser.text()); + } + } else { + throw new IllegalArgumentException("Unknown parameter [" + currentFieldName + + "] in request body or parameter is of the wrong type[" + token + "] "); + } + } + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/ClearScrollResponse.java b/core/src/main/java/org/elasticsearch/action/search/ClearScrollResponse.java index ff8314acce5c7..d5e2c754a2000 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ClearScrollResponse.java +++ b/core/src/main/java/org/elasticsearch/action/search/ClearScrollResponse.java @@ -20,18 +20,33 @@ package org.elasticsearch.action.search; import org.elasticsearch.action.ActionResponse; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.ConstructingObjectParser; +import org.elasticsearch.common.xcontent.ObjectParser; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.RestStatus; import java.io.IOException; +import static org.elasticsearch.common.xcontent.ConstructingObjectParser.constructorArg; import static org.elasticsearch.rest.RestStatus.NOT_FOUND; import static org.elasticsearch.rest.RestStatus.OK; -public class ClearScrollResponse extends ActionResponse implements StatusToXContent { +public class ClearScrollResponse extends ActionResponse implements StatusToXContentObject { + + private static final ParseField SUCCEEDED = new ParseField("succeeded"); + private static final ParseField NUMFREED = new ParseField("num_freed"); + + private static final ConstructingObjectParser PARSER = new ConstructingObjectParser<>("clear_scroll", + true, a -> new ClearScrollResponse((boolean)a[0], (int)a[1])); + static { + PARSER.declareField(constructorArg(), (parser, context) -> parser.booleanValue(), SUCCEEDED, ObjectParser.ValueType.BOOLEAN); + PARSER.declareField(constructorArg(), (parser, context) -> parser.intValue(), NUMFREED, ObjectParser.ValueType.INT); + } private boolean succeeded; private int numFreed; @@ -66,11 +81,20 @@ public RestStatus status() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.field(Fields.SUCCEEDED, succeeded); - builder.field(Fields.NUMFREED, numFreed); + builder.startObject(); + builder.field(SUCCEEDED.getPreferredName(), succeeded); + builder.field(NUMFREED.getPreferredName(), numFreed); + 
builder.endObject(); return builder; } + /** + * Parse the clear scroll response body into a new {@link ClearScrollResponse} object + */ + public static ClearScrollResponse fromXContent(XContentParser parser) throws IOException { + return PARSER.apply(parser, null); + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); @@ -84,10 +108,4 @@ public void writeTo(StreamOutput out) throws IOException { out.writeBoolean(succeeded); out.writeVInt(numFreed); } - - static final class Fields { - static final String SUCCEEDED = "succeeded"; - static final String NUMFREED = "num_freed"; - } - } diff --git a/core/src/main/java/org/elasticsearch/action/search/CountedCollector.java b/core/src/main/java/org/elasticsearch/action/search/CountedCollector.java new file mode 100644 index 0000000000000..2dd255aa14c69 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/CountedCollector.java @@ -0,0 +1,79 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; + +import java.util.function.Consumer; + +/** + * This is a simple base class to simplify fan out to shards and collect their results. Each results passed to + * {@link #onResult(SearchPhaseResult)} will be set to the provided result array + * where the given index is used to set the result on the array. 
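The counting pattern described in the javadoc above (one countdown per expected operation, whether it succeeded or failed, with a finish callback that runs exactly once) is small but easy to get wrong, so here is a minimal, self-contained sketch of the same idea using only JDK types. The class and method names are illustrative and are not the actual Elasticsearch API.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Simplified stand-in for the collector: one slot per expected operation, and a
// finish callback that runs once every slot has reported either success or failure.
final class SimpleCountedCollector<R> {
    private final AtomicReferenceArray<R> results;
    private final AtomicInteger remaining;
    private final Runnable onFinish;

    SimpleCountedCollector(int expectedOps, Runnable onFinish) {
        this.results = new AtomicReferenceArray<>(expectedOps);
        this.remaining = new AtomicInteger(expectedOps);
        this.onFinish = onFinish;
    }

    void onResult(int index, R result) {
        results.set(index, result); // store the result in its slot
        countDown();
    }

    void onFailure(int index, Exception e) {
        countDown();                // a failure still completes the operation
    }

    private void countDown() {
        if (remaining.decrementAndGet() == 0) {
            onFinish.run();         // all expected operations have reported back
        }
    }
}
```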
+ */ +final class CountedCollector { + private final Consumer resultConsumer; + private final CountDown counter; + private final Runnable onFinish; + private final SearchPhaseContext context; + + CountedCollector(Consumer resultConsumer, int expectedOps, Runnable onFinish, SearchPhaseContext context) { + this.resultConsumer = resultConsumer; + this.counter = new CountDown(expectedOps); + this.onFinish = onFinish; + this.context = context; + } + + /** + * Forcefully counts down an operation and executes the provided runnable + * if all expected operations where executed + */ + void countDown() { + assert counter.isCountedDown() == false : "more operations executed than specified"; + if (counter.countDown()) { + onFinish.run(); + } + } + + /** + * Sets the result to the given array index and then runs {@link #countDown()} + */ + void onResult(R result) { + try { + resultConsumer.accept(result); + } finally { + countDown(); + } + } + + /** + * Escalates the failure via {@link SearchPhaseContext#onShardFailure(int, SearchShardTarget, Exception)} + * and then runs {@link #countDown()} + */ + void onFailure(final int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e) { + try { + context.onShardFailure(shardIndex, shardTarget, e); + } finally { + countDown(); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/DfsQueryPhase.java b/core/src/main/java/org/elasticsearch/action/search/DfsQueryPhase.java new file mode 100644 index 0000000000000..a72dcac4f241a --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/DfsQueryPhase.java @@ -0,0 +1,105 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.search.dfs.AggregatedDfs; +import org.elasticsearch.search.dfs.DfsSearchResult; +import org.elasticsearch.search.query.QuerySearchRequest; +import org.elasticsearch.search.query.QuerySearchResult; +import org.elasticsearch.transport.Transport; + +import java.io.IOException; +import java.util.List; +import java.util.function.Function; + +/** + * This search phase fans out to every shards to execute a distributed search with a pre-collected distributed frequencies for all + * search terms used in the actual search query. This phase is very similar to a the default query-then-fetch search phase but it doesn't + * retry on another shard if any of the shards are failing. Failures are treated as shard failures and are counted as a non-successful + * operation. 
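To make the DFS flow concrete: the phase first merges the per-shard term statistics into one global view, then queries every shard carrying that view so all shards score against the same index-wide statistics. Below is a toy sketch of that two-step data flow, with plain maps standing in for the real DfsSearchResult/AggregatedDfs types; it is illustrative only, not the actual implementation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class DfsThenQuerySketch {
    // Step 1: merge per-shard document frequencies into a single global view.
    static Map<String, Long> aggregate(List<Map<String, Long>> perShardDocFreq) {
        Map<String, Long> global = new HashMap<>();
        for (Map<String, Long> shard : perShardDocFreq) {
            shard.forEach((term, freq) -> global.merge(term, freq, Long::sum));
        }
        return global;
    }

    // Step 2: query each shard, handing it the aggregated statistics. In the real phase
    // this is a transport call; here we only show the data that travels with the request.
    static void queryWithGlobalStats(List<String> shardIds, Map<String, Long> globalDocFreq) {
        for (String shardId : shardIds) {
            System.out.println("query " + shardId + " with " + globalDocFreq.size() + " aggregated terms");
        }
    }
}
```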
+ * @see CountedCollector#onFailure(int, SearchShardTarget, Exception) + */ +final class DfsQueryPhase extends SearchPhase { + private final InitialSearchPhase.SearchPhaseResults queryResult; + private final SearchPhaseController searchPhaseController; + private final AtomicArray dfsSearchResults; + private final Function, SearchPhase> nextPhaseFactory; + private final SearchPhaseContext context; + private final SearchTransportService searchTransportService; + + DfsQueryPhase(AtomicArray dfsSearchResults, + SearchPhaseController searchPhaseController, + Function, SearchPhase> nextPhaseFactory, + SearchPhaseContext context) { + super("dfs_query"); + this.queryResult = searchPhaseController.newSearchPhaseResults(context.getRequest(), context.getNumShards()); + this.searchPhaseController = searchPhaseController; + this.dfsSearchResults = dfsSearchResults; + this.nextPhaseFactory = nextPhaseFactory; + this.context = context; + this.searchTransportService = context.getSearchTransport(); + } + + @Override + public void run() throws IOException { + // TODO we can potentially also consume the actual per shard results from the initial phase here in the aggregateDfs + // to free up memory early + final List resultList = dfsSearchResults.asList(); + final AggregatedDfs dfs = searchPhaseController.aggregateDfs(resultList); + final CountedCollector counter = new CountedCollector<>(queryResult::consumeResult, + resultList.size(), + () -> context.executeNextPhase(this, nextPhaseFactory.apply(queryResult)), context); + for (final DfsSearchResult dfsResult : resultList) { + final SearchShardTarget searchShardTarget = dfsResult.getSearchShardTarget(); + Transport.Connection connection = context.getConnection(searchShardTarget.getClusterAlias(), searchShardTarget.getNodeId()); + QuerySearchRequest querySearchRequest = new QuerySearchRequest(searchShardTarget.getOriginalIndices(), + dfsResult.getRequestId(), dfs); + final int shardIndex = dfsResult.getShardIndex(); + searchTransportService.sendExecuteQuery(connection, querySearchRequest, context.getTask(), + new SearchActionListener(searchShardTarget, shardIndex) { + + @Override + protected void innerOnResponse(QuerySearchResult response) { + counter.onResult(response); + } + + @Override + public void onFailure(Exception exception) { + try { + if (context.getLogger().isDebugEnabled()) { + context.getLogger().debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", + querySearchRequest.id()), exception); + } + counter.onFailure(shardIndex, searchShardTarget, exception); + } finally { + // the query might not have been executed at all (for example because thread pool rejected + // execution) and the search context that was created in dfs phase might not be released. + // release it again to be in the safe side + context.sendReleaseSearchContext(querySearchRequest.id(), connection, searchShardTarget.getOriginalIndices()); + } + } + }); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java new file mode 100644 index 0000000000000..bc673644a0683 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/ExpandSearchPhase.java @@ -0,0 +1,156 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. 
Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.search; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.index.query.BoolQueryBuilder; +import org.elasticsearch.index.query.InnerHitBuilder; +import org.elasticsearch.index.query.QueryBuilder; +import org.elasticsearch.index.query.QueryBuilders; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.builder.SearchSourceBuilder; +import org.elasticsearch.search.collapse.CollapseBuilder; +import org.elasticsearch.search.internal.InternalSearchResponse; + +import java.io.IOException; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.function.Function; + +/** + * This search phase is an optional phase that will be executed once all hits are fetched from the shards that executes + * field-collapsing on the inner hits. This phase only executes if field collapsing is requested in the search request and otherwise + * forwards to the next phase immediately. + */ +final class ExpandSearchPhase extends SearchPhase { + private final SearchPhaseContext context; + private final InternalSearchResponse searchResponse; + private final Function nextPhaseFactory; + + ExpandSearchPhase(SearchPhaseContext context, InternalSearchResponse searchResponse, + Function nextPhaseFactory) { + super("expand"); + this.context = context; + this.searchResponse = searchResponse; + this.nextPhaseFactory = nextPhaseFactory; + } + + /** + * Returns true iff the search request has inner hits and needs field collapsing + */ + private boolean isCollapseRequest() { + final SearchRequest searchRequest = context.getRequest(); + return searchRequest.source() != null && + searchRequest.source().collapse() != null && + searchRequest.source().collapse().getInnerHits().isEmpty() == false; + } + + @Override + public void run() throws IOException { + if (isCollapseRequest() && searchResponse.hits().getHits().length > 0) { + SearchRequest searchRequest = context.getRequest(); + CollapseBuilder collapseBuilder = searchRequest.source().collapse(); + final List innerHitBuilders = collapseBuilder.getInnerHits(); + MultiSearchRequest multiRequest = new MultiSearchRequest(); + if (collapseBuilder.getMaxConcurrentGroupRequests() > 0) { + multiRequest.maxConcurrentSearchRequests(collapseBuilder.getMaxConcurrentGroupRequests()); + } + for (SearchHit hit : searchResponse.hits().getHits()) { + BoolQueryBuilder groupQuery = new BoolQueryBuilder(); + Object collapseValue = hit.field(collapseBuilder.getField()).getValue(); + if (collapseValue != null) { + groupQuery.filter(QueryBuilders.matchQuery(collapseBuilder.getField(), collapseValue)); + } else { + groupQuery.mustNot(QueryBuilders.existsQuery(collapseBuilder.getField())); + } + QueryBuilder origQuery = searchRequest.source().query(); + if (origQuery != null) { + groupQuery.must(origQuery); + } + for (InnerHitBuilder innerHitBuilder 
: innerHitBuilders) { + SearchSourceBuilder sourceBuilder = buildExpandSearchSourceBuilder(innerHitBuilder) + .query(groupQuery); + SearchRequest groupRequest = new SearchRequest(searchRequest.indices()) + .types(searchRequest.types()) + .source(sourceBuilder); + multiRequest.add(groupRequest); + } + } + context.getSearchTransport().sendExecuteMultiSearch(multiRequest, context.getTask(), + ActionListener.wrap(response -> { + Iterator it = response.iterator(); + for (SearchHit hit : searchResponse.hits.getHits()) { + for (InnerHitBuilder innerHitBuilder : innerHitBuilders) { + MultiSearchResponse.Item item = it.next(); + if (item.isFailure()) { + context.onPhaseFailure(this, "failed to expand hits", item.getFailure()); + return; + } + SearchHits innerHits = item.getResponse().getHits(); + if (hit.getInnerHits() == null) { + hit.setInnerHits(new HashMap<>(innerHitBuilders.size())); + } + hit.getInnerHits().put(innerHitBuilder.getName(), innerHits); + } + } + context.executeNextPhase(this, nextPhaseFactory.apply(searchResponse)); + }, context::onFailure) + ); + } else { + context.executeNextPhase(this, nextPhaseFactory.apply(searchResponse)); + } + } + + private SearchSourceBuilder buildExpandSearchSourceBuilder(InnerHitBuilder options) { + SearchSourceBuilder groupSource = new SearchSourceBuilder(); + groupSource.from(options.getFrom()); + groupSource.size(options.getSize()); + if (options.getSorts() != null) { + options.getSorts().forEach(groupSource::sort); + } + if (options.getFetchSourceContext() != null) { + if (options.getFetchSourceContext().includes() == null && options.getFetchSourceContext().excludes() == null) { + groupSource.fetchSource(options.getFetchSourceContext().fetchSource()); + } else { + groupSource.fetchSource(options.getFetchSourceContext().includes(), + options.getFetchSourceContext().excludes()); + } + } + if (options.getDocValueFields() != null) { + options.getDocValueFields().forEach(groupSource::docValueField); + } + if (options.getStoredFieldsContext() != null && options.getStoredFieldsContext().fieldNames() != null) { + options.getStoredFieldsContext().fieldNames().forEach(groupSource::storedField); + } + if (options.getScriptFields() != null) { + for (SearchSourceBuilder.ScriptField field : options.getScriptFields()) { + groupSource.scriptField(field.fieldName(), field.script()); + } + } + if (options.getHighlightBuilder() != null) { + groupSource.highlighter(options.getHighlightBuilder()); + } + groupSource.explain(options.isExplain()); + groupSource.trackScores(options.isTrackScores()); + return groupSource; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/FetchSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/FetchSearchPhase.java new file mode 100644 index 0000000000000..c26fc63421d17 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/FetchSearchPhase.java @@ -0,0 +1,219 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import com.carrotsearch.hppc.IntArrayList; +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.apache.lucene.search.ScoreDoc; +import org.elasticsearch.action.ActionRunnable; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.search.fetch.FetchSearchResult; +import org.elasticsearch.search.fetch.ShardFetchSearchRequest; +import org.elasticsearch.search.internal.InternalSearchResponse; +import org.elasticsearch.search.query.QuerySearchResult; +import org.elasticsearch.transport.Transport; + +import java.io.IOException; +import java.util.List; +import java.util.function.BiFunction; + +/** + * This search phase merges the query results from the previous phase together and calculates the topN hits for this search. + * Then it reaches out to all relevant shards to fetch the topN hits. + */ +final class FetchSearchPhase extends SearchPhase { + private final AtomicArray fetchResults; + private final SearchPhaseController searchPhaseController; + private final AtomicArray queryResults; + private final BiFunction nextPhaseFactory; + private final SearchPhaseContext context; + private final Logger logger; + private final InitialSearchPhase.SearchPhaseResults resultConsumer; + + FetchSearchPhase(InitialSearchPhase.SearchPhaseResults resultConsumer, + SearchPhaseController searchPhaseController, + SearchPhaseContext context) { + this(resultConsumer, searchPhaseController, context, + (response, scrollId) -> new ExpandSearchPhase(context, response, // collapse only happens if the request has inner hits + (finalResponse) -> sendResponsePhase(finalResponse, scrollId, context))); + } + + FetchSearchPhase(InitialSearchPhase.SearchPhaseResults resultConsumer, + SearchPhaseController searchPhaseController, + SearchPhaseContext context, BiFunction nextPhaseFactory) { + super("fetch"); + if (context.getNumShards() != resultConsumer.getNumShards()) { + throw new IllegalStateException("number of shards must match the length of the query results but doesn't:" + + context.getNumShards() + "!=" + resultConsumer.getNumShards()); + } + this.fetchResults = new AtomicArray<>(resultConsumer.getNumShards()); + this.searchPhaseController = searchPhaseController; + this.queryResults = resultConsumer.results; + this.nextPhaseFactory = nextPhaseFactory; + this.context = context; + this.logger = context.getLogger(); + this.resultConsumer = resultConsumer; + } + + @Override + public void run() throws IOException { + context.execute(new ActionRunnable(context) { + @Override + public void doRun() throws IOException { + // we do the heavy lifting in this inner run method where we reduce aggs etc. 
that's why we fork this phase + // off immediately instead of forking when we send back the response to the user since there we only need + // to merge together the fetched results which is a linear operation. + innerRun(); + } + + @Override + public void onFailure(Exception e) { + context.onPhaseFailure(FetchSearchPhase.this, "", e); + } + }); + } + + private void innerRun() throws IOException { + final int numShards = context.getNumShards(); + final boolean isScrollSearch = context.getRequest().scroll() != null; + List phaseResults = queryResults.asList(); + String scrollId = isScrollSearch ? TransportSearchHelper.buildScrollId(queryResults) : null; + final SearchPhaseController.ReducedQueryPhase reducedQueryPhase = resultConsumer.reduce(); + final boolean queryAndFetchOptimization = queryResults.length() == 1; + final Runnable finishPhase = () + -> moveToNextPhase(searchPhaseController, scrollId, reducedQueryPhase, queryAndFetchOptimization ? + queryResults : fetchResults); + if (queryAndFetchOptimization) { + assert phaseResults.isEmpty() || phaseResults.get(0).fetchResult() != null; + // query AND fetch optimization + finishPhase.run(); + } else { + final IntArrayList[] docIdsToLoad = searchPhaseController.fillDocIdsToLoad(numShards, reducedQueryPhase.scoreDocs); + if (reducedQueryPhase.scoreDocs.length == 0) { // no docs to fetch -- sidestep everything and return + phaseResults.stream() + .map(SearchPhaseResult::queryResult) + .forEach(this::releaseIrrelevantSearchContext); // we have to release contexts here to free up resources + finishPhase.run(); + } else { + final ScoreDoc[] lastEmittedDocPerShard = isScrollSearch ? + searchPhaseController.getLastEmittedDocPerShard(reducedQueryPhase, numShards) + : null; + final CountedCollector counter = new CountedCollector<>(r -> fetchResults.set(r.getShardIndex(), r), + docIdsToLoad.length, // we count down every shard in the result no matter if we got any results or not + finishPhase, context); + for (int i = 0; i < docIdsToLoad.length; i++) { + IntArrayList entry = docIdsToLoad[i]; + SearchPhaseResult queryResult = queryResults.get(i); + if (entry == null) { // no results for this shard ID + if (queryResult != null) { + // if we got some hits from this shard we have to release the context there + // we do this as we go since it will free up resources and passing on the request on the + // transport layer is cheap. + releaseIrrelevantSearchContext(queryResult.queryResult()); + } + // in any case we count down this result since we don't talk to this shard anymore + counter.countDown(); + } else { + SearchShardTarget searchShardTarget = queryResult.getSearchShardTarget(); + Transport.Connection connection = context.getConnection(searchShardTarget.getClusterAlias(), + searchShardTarget.getNodeId()); + ShardFetchSearchRequest fetchSearchRequest = createFetchRequest(queryResult.queryResult().getRequestId(), i, entry, + lastEmittedDocPerShard, searchShardTarget.getOriginalIndices()); + executeFetch(i, searchShardTarget, counter, fetchSearchRequest, queryResult.queryResult(), + connection); + } + } + } + } + } + + protected ShardFetchSearchRequest createFetchRequest(long queryId, int index, IntArrayList entry, + ScoreDoc[] lastEmittedDocPerShard, OriginalIndices originalIndices) { + final ScoreDoc lastEmittedDoc = (lastEmittedDocPerShard != null) ? 
lastEmittedDocPerShard[index] : null; + return new ShardFetchSearchRequest(originalIndices, queryId, entry, lastEmittedDoc); + } + + private void executeFetch(final int shardIndex, final SearchShardTarget shardTarget, + final CountedCollector counter, + final ShardFetchSearchRequest fetchSearchRequest, final QuerySearchResult querySearchResult, + final Transport.Connection connection) { + context.getSearchTransport().sendExecuteFetch(connection, fetchSearchRequest, context.getTask(), + new SearchActionListener(shardTarget, shardIndex) { + @Override + public void innerOnResponse(FetchSearchResult result) { + counter.onResult(result); + } + + @Override + public void onFailure(Exception e) { + try { + if (logger.isDebugEnabled()) { + logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute fetch phase", + fetchSearchRequest.id()), e); + } + counter.onFailure(shardIndex, shardTarget, e); + } finally { + // the search context might not be cleared on the node where the fetch was executed for example + // because the action was rejected by the thread pool. in this case we need to send a dedicated + // request to clear the search context. + releaseIrrelevantSearchContext(querySearchResult); + } + } + }); + } + + /** + * Releases shard targets that are not used in the docsIdsToLoad. + */ + private void releaseIrrelevantSearchContext(QuerySearchResult queryResult) { + // we only release search context that we did not fetch from if we are not scrolling + // and if it has at lease one hit that didn't make it to the global topDocs + if (context.getRequest().scroll() == null && queryResult.hasSearchContext()) { + try { + SearchShardTarget searchShardTarget = queryResult.getSearchShardTarget(); + Transport.Connection connection = context.getConnection(searchShardTarget.getClusterAlias(), searchShardTarget.getNodeId()); + context.sendReleaseSearchContext(queryResult.getRequestId(), connection, searchShardTarget.getOriginalIndices()); + } catch (Exception e) { + context.getLogger().trace("failed to release context", e); + } + } + } + + private void moveToNextPhase(SearchPhaseController searchPhaseController, + String scrollId, SearchPhaseController.ReducedQueryPhase reducedQueryPhase, + AtomicArray fetchResultsArr) { + final InternalSearchResponse internalResponse = searchPhaseController.merge(context.getRequest().scroll() != null, + reducedQueryPhase, fetchResultsArr.asList(), fetchResultsArr::get); + context.executeNextPhase(this, nextPhaseFactory.apply(internalResponse, scrollId)); + } + + private static SearchPhase sendResponsePhase(InternalSearchResponse response, String scrollId, SearchPhaseContext context) { + return new SearchPhase("response") { + @Override + public void run() throws IOException { + context.onResponse(context.buildSearchResponse(response, scrollId)); + } + }; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java new file mode 100644 index 0000000000000..de58b1906427f --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/InitialSearchPhase.java @@ -0,0 +1,267 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.action.NoShardAvailableActionException; +import org.elasticsearch.action.support.TransportActions; +import org.elasticsearch.cluster.routing.GroupShardsIterator; +import org.elasticsearch.cluster.routing.ShardIterator; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.transport.ConnectTransportException; + +import java.io.IOException; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.stream.Stream; + +/** + * This is an abstract base class that encapsulates the logic to fan out to all shards in provided {@link GroupShardsIterator} + * and collect the results. If a shard request returns a failure this class handles the advance to the next replica of the shard until + * the shards replica iterator is exhausted. Each shard is referenced by position in the {@link GroupShardsIterator} which is later + * referred to as the shardIndex. + * The fan out and collect algorithm is traditionally used as the initial phase which can either be a query execution or collection + * distributed frequencies + */ +abstract class InitialSearchPhase extends SearchPhase { + private final SearchRequest request; + private final GroupShardsIterator shardsIts; + private final Logger logger; + private final int expectedTotalOps; + private final AtomicInteger totalOps = new AtomicInteger(); + + InitialSearchPhase(String name, SearchRequest request, GroupShardsIterator shardsIts, Logger logger) { + super(name); + this.request = request; + this.shardsIts = shardsIts; + this.logger = logger; + // we need to add 1 for non active partition, since we count it in the total. This means for each shard in the iterator we sum up + // it's number of active shards but use 1 as the default if no replica of a shard is active at this point. + // on a per shards level we use shardIt.remaining() to increment the totalOps pointer but add 1 for the current shard result + // we process hence we add one for the non active partition here. 
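As a worked example of the accounting described in the comment above (illustrative only; the helper below mimics what the comment says, not the actual GroupShardsIterator implementation): with three shard groups that have two, zero and one active copies, the expected total is max(1,2) + max(1,0) + max(1,1) = 4 operations.

```java
import java.util.List;

final class ExpectedOpsSketch {
    // Each inner list holds the active copies of one shard; an empty group still
    // contributes one expected operation, mirroring the "1 for empty" rule above.
    static int totalSizeWith1ForEmpty(List<List<String>> shardGroups) {
        int total = 0;
        for (List<String> copies : shardGroups) {
            total += Math.max(1, copies.size());
        }
        return total;
    }

    public static void main(String[] args) {
        List<List<String>> groups = List.of(List.of("a", "b"), List.of(), List.of("c"));
        System.out.println(totalSizeWith1ForEmpty(groups)); // prints 4
    }
}
```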
+ this.expectedTotalOps = shardsIts.totalSizeWith1ForEmpty(); + } + + private void onShardFailure(final int shardIndex, @Nullable ShardRouting shard, @Nullable String nodeId, + final SearchShardIterator shardIt, Exception e) { + // we always add the shard failure for a specific shard instance + // we do make sure to clean it on a successful response from a shard + SearchShardTarget shardTarget = new SearchShardTarget(nodeId, shardIt.shardId(), shardIt.getClusterAlias(), + shardIt.getOriginalIndices()); + onShardFailure(shardIndex, shardTarget, e); + + if (totalOps.incrementAndGet() == expectedTotalOps) { + if (logger.isDebugEnabled()) { + if (e != null && !TransportActions.isShardNotAvailableException(e)) { + logger.debug( + (Supplier) () -> new ParameterizedMessage( + "{}: Failed to execute [{}]", + shard != null ? shard.shortSummary() : + shardIt.shardId(), + request), + e); + } else if (logger.isTraceEnabled()) { + logger.trace((Supplier) () -> new ParameterizedMessage("{}: Failed to execute [{}]", shard, request), e); + } + } + onPhaseDone(); + } else { + final ShardRouting nextShard = shardIt.nextOrNull(); + final boolean lastShard = nextShard == null; + // trace log this exception + logger.trace( + (Supplier) () -> new ParameterizedMessage( + "{}: Failed to execute [{}] lastShard [{}]", + shard != null ? shard.shortSummary() : shardIt.shardId(), + request, + lastShard), + e); + if (!lastShard) { + try { + performPhaseOnShard(shardIndex, shardIt, nextShard); + } catch (Exception inner) { + inner.addSuppressed(e); + onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, inner); + } + } else { + // no more shards active, add a failure + if (logger.isDebugEnabled() && !logger.isTraceEnabled()) { // do not double log this exception + if (e != null && !TransportActions.isShardNotAvailableException(e)) { + logger.debug( + (Supplier) () -> new ParameterizedMessage( + "{}: Failed to execute [{}] lastShard [{}]", + shard != null ? shard.shortSummary() : + shardIt.shardId(), + request, + lastShard), + e); + } + } + } + } + } + + @Override + public final void run() throws IOException { + int shardIndex = -1; + for (final SearchShardIterator shardIt : shardsIts) { + shardIndex++; + final ShardRouting shard = shardIt.nextOrNull(); + if (shard != null) { + performPhaseOnShard(shardIndex, shardIt, shard); + } else { + // really, no shards active in this group + onShardFailure(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); + } + } + } + + private void performPhaseOnShard(final int shardIndex, final SearchShardIterator shardIt, final ShardRouting shard) { + if (shard == null) { + // TODO upgrade this to an assert... + // no more active shards... (we should not really get here, but just for safety) + onShardFailure(shardIndex, null, null, shardIt, new NoShardAvailableActionException(shardIt.shardId())); + } else { + try { + executePhaseOnShard(shardIt, shard, new SearchActionListener(new SearchShardTarget(shard.currentNodeId(), + shardIt.shardId(), shardIt.getClusterAlias(), shardIt.getOriginalIndices()), shardIndex) { + @Override + public void innerOnResponse(FirstResult result) { + onShardResult(result, shardIt); + } + + @Override + public void onFailure(Exception t) { + onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, t); + } + }); + } catch (ConnectTransportException | IllegalArgumentException ex) { + // we are getting the connection early here so we might run into nodes that are not connected. 
in that case we move on to + // the next shard. previously when using discovery nodes here we had a special case for null when a node was not connected + // at all which is not not needed anymore. + onShardFailure(shardIndex, shard, shard.currentNodeId(), shardIt, ex); + } + } + } + + private void onShardResult(FirstResult result, ShardIterator shardIt) { + assert result.getShardIndex() != -1 : "shard index is not set"; + assert result.getSearchShardTarget() != null : "search shard target must not be null"; + onShardSuccess(result); + // we need to increment successful ops first before we compare the exit condition otherwise if we + // are fast we could concurrently update totalOps but then preempt one of the threads which can + // cause the successor to read a wrong value from successfulOps if second phase is very fast ie. count etc. + // increment all the "future" shards to update the total ops since we some may work and some may not... + // and when that happens, we break on total ops, so we must maintain them + final int xTotalOps = totalOps.addAndGet(shardIt.remaining() + 1); + if (xTotalOps == expectedTotalOps) { + onPhaseDone(); + } else if (xTotalOps > expectedTotalOps) { + throw new AssertionError("unexpected higher total ops [" + xTotalOps + "] compared to expected [" + + expectedTotalOps + "]"); + } + } + + + /** + * Executed once all shard results have been received and processed + * @see #onShardFailure(int, SearchShardTarget, Exception) + * @see #onShardSuccess(SearchPhaseResult) + */ + abstract void onPhaseDone(); // as a tribute to @kimchy aka. finishHim() + + /** + * Executed once for every failed shard level request. This method is invoked before the next replica is tried for the given + * shard target. + * @param shardIndex the internal index for this shard. Each shard has an index / ordinal assigned that is used to reference + * it's results + * @param shardTarget the shard target for this failure + * @param ex the failure reason + */ + abstract void onShardFailure(int shardIndex, SearchShardTarget shardTarget, Exception ex); + + /** + * Executed once for every successful shard level request. + * @param result the result returned form the shard + * + */ + abstract void onShardSuccess(FirstResult result); + + /** + * Sends the request to the actual shard. + * @param shardIt the shards iterator + * @param shard the shard routing to send the request for + * @param listener the listener to notify on response + */ + protected abstract void executePhaseOnShard(SearchShardIterator shardIt, ShardRouting shard, + SearchActionListener listener); + + /** + * This class acts as a basic result collection that can be extended to do on-the-fly reduction or result processing + */ + static class SearchPhaseResults { + final AtomicArray results; + + SearchPhaseResults(int size) { + results = new AtomicArray<>(size); + } + + /** + * Returns the number of expected results this class should collect + */ + final int getNumShards() { + return results.length(); + } + + /** + * A stream of all non-null (successful) shard results + */ + final Stream getSuccessfulResults() { + return results.asList().stream(); + } + + /** + * Consumes a single shard result + * @param result the shards result + */ + void consumeResult(Result result) { + assert results.get(result.getShardIndex()) == null : "shardIndex: " + result.getShardIndex() + " is already set"; + results.set(result.getShardIndex(), result); + } + + /** + * Returns true iff a result if present for the given shard ID. 
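Since the results holder above is explicitly meant to be extended for on-the-fly reduction, here is a hedged sketch of what such an extension can look like, using hypothetical names and plain JDK types rather than the actual SearchPhaseController API.

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Base collector: one slot per shard, and each shard index may be set at most once.
class SlotResults<R> {
    final AtomicReferenceArray<R> slots;

    SlotResults(int numShards) {
        slots = new AtomicReferenceArray<>(numShards);
    }

    void consumeResult(int shardIndex, R result) {
        if (!slots.compareAndSet(shardIndex, null, result)) {
            throw new IllegalStateException("shardIndex " + shardIndex + " is already set");
        }
    }
}

// Extension that reduces incrementally instead of only storing: it keeps a running
// hit count so the final reduce step has almost nothing left to do.
final class CountingResults extends SlotResults<Long> {
    private final AtomicLong totalHits = new AtomicLong();

    CountingResults(int numShards) {
        super(numShards);
    }

    @Override
    void consumeResult(int shardIndex, Long shardHits) {
        super.consumeResult(shardIndex, shardHits);
        totalHits.addAndGet(shardHits); // on-the-fly reduction of the per-shard counts
    }

    long reduce() {
        return totalHits.get();
    }
}
```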
+ */ + final boolean hasResult(int shardIndex) { + return results.get(shardIndex) != null; + } + + /** + * Reduces the collected results + */ + SearchPhaseController.ReducedQueryPhase reduce() { + throw new UnsupportedOperationException("reduce is not supported"); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java b/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java index 317b775a40369..4d42ad334a9f0 100644 --- a/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java +++ b/core/src/main/java/org/elasticsearch/action/search/MultiSearchResponse.java @@ -23,12 +23,12 @@ import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; -import org.elasticsearch.common.xcontent.XContentFactory; import java.io.IOException; import java.util.Arrays; @@ -37,7 +37,7 @@ /** * A multi search response. */ -public class MultiSearchResponse extends ActionResponse implements Iterable, ToXContent { +public class MultiSearchResponse extends ActionResponse implements Iterable, ToXContentObject { /** * A search response item, holding the actual search response, or an error message if it failed. @@ -151,39 +151,31 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.RESPONSES); for (Item item : items) { builder.startObject(); if (item.isFailure()) { - ElasticsearchException.renderException(builder, params, item.getFailure()); + ElasticsearchException.generateFailureXContent(builder, params, item.getFailure(), true); builder.field(Fields.STATUS, ExceptionsHelper.status(item.getFailure()).getStatus()); } else { - item.getResponse().toXContent(builder, params); + item.getResponse().innerToXContent(builder, params); builder.field(Fields.STATUS, item.getResponse().status().getStatus()); } builder.endObject(); } builder.endArray(); + builder.endObject(); return builder; } static final class Fields { static final String RESPONSES = "responses"; static final String STATUS = "status"; - static final String ERROR = "error"; - static final String ROOT_CAUSE = "root_cause"; } @Override public String toString() { - try { - XContentBuilder builder = XContentFactory.jsonBuilder().prettyPrint(); - builder.startObject(); - toXContent(builder, EMPTY_PARAMS); - builder.endObject(); - return builder.string(); - } catch (IOException e) { - return "{ \"error\" : \"" + e.getMessage() + "\"}"; - } + return Strings.toString(this); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/ParsedScrollId.java b/core/src/main/java/org/elasticsearch/action/search/ParsedScrollId.java index f2ea5356106f5..b588827867fbb 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ParsedScrollId.java +++ b/core/src/main/java/org/elasticsearch/action/search/ParsedScrollId.java @@ -31,7 +31,7 @@ class ParsedScrollId { private final ScrollIdForNode[] context; - public ParsedScrollId(String source, String type, ScrollIdForNode[] context) { + 
ParsedScrollId(String source, String type, ScrollIdForNode[] context) { this.source = source; this.type = type; this.context = context; diff --git a/core/src/main/java/org/elasticsearch/action/search/ScrollIdForNode.java b/core/src/main/java/org/elasticsearch/action/search/ScrollIdForNode.java index 488132fdda23d..76d4ac1141388 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ScrollIdForNode.java +++ b/core/src/main/java/org/elasticsearch/action/search/ScrollIdForNode.java @@ -23,7 +23,7 @@ class ScrollIdForNode { private final String node; private final long scrollId; - public ScrollIdForNode(String node, long scrollId) { + ScrollIdForNode(String node, long scrollId) { this.node = node; this.scrollId = scrollId; } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchActionListener.java b/core/src/main/java/org/elasticsearch/action/search/SearchActionListener.java new file mode 100644 index 0000000000000..67de87b1bb173 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchActionListener.java @@ -0,0 +1,52 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; + +/** + * An base action listener that ensures shard target and shard index is set on all responses + * received by this listener. + */ +abstract class SearchActionListener implements ActionListener { + private final int requestIndex; + private final SearchShardTarget searchShardTarget; + + protected SearchActionListener(SearchShardTarget searchShardTarget, + int shardIndex) { + assert shardIndex >= 0 : "shard index must be positive"; + this.searchShardTarget = searchShardTarget; + this.requestIndex = shardIndex; + } + + @Override + public final void onResponse(T response) { + response.setShardIndex(requestIndex); + setSearchShardTarget(response); + innerOnResponse(response); + } + + protected void setSearchShardTarget(T response) { // some impls need to override this + response.setSearchShardTarget(searchShardTarget); + } + + protected abstract void innerOnResponse(T response); +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryAndFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryAndFetchAsyncAction.java deleted file mode 100644 index 9db3a21c48549..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryAndFetchAsyncAction.java +++ /dev/null @@ -1,148 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. 
See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. - */ - -package org.elasticsearch.action.search; - -import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRunnable; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.routing.GroupShardsIterator; -import org.elasticsearch.common.util.concurrent.AtomicArray; -import org.elasticsearch.search.dfs.AggregatedDfs; -import org.elasticsearch.search.dfs.DfsSearchResult; -import org.elasticsearch.search.fetch.QueryFetchSearchResult; -import org.elasticsearch.search.internal.AliasFilter; -import org.elasticsearch.search.internal.InternalSearchResponse; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.search.query.QuerySearchRequest; - -import java.io.IOException; -import java.util.Map; -import java.util.concurrent.Executor; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.function.Function; - -class SearchDfsQueryAndFetchAsyncAction extends AbstractSearchAsyncAction { - - private final AtomicArray queryFetchResults; - private final SearchPhaseController searchPhaseController; - SearchDfsQueryAndFetchAsyncAction(Logger logger, SearchTransportService searchTransportService, - Function nodeIdToDiscoveryNode, - Map aliasFilter, Map concreteIndexBoosts, - SearchPhaseController searchPhaseController, Executor executor, SearchRequest request, - ActionListener listener, GroupShardsIterator shardsIts, - long startTime, long clusterStateVersion, SearchTask task) { - super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, concreteIndexBoosts, executor, - request, listener, shardsIts, startTime, clusterStateVersion, task); - this.searchPhaseController = searchPhaseController; - queryFetchResults = new AtomicArray<>(firstResults.length()); - } - - @Override - protected String firstPhaseName() { - return "dfs"; - } - - @Override - protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener) { - searchTransportService.sendExecuteDfs(node, request, task, listener); - } - - @Override - protected void moveToSecondPhase() { - final AggregatedDfs dfs = searchPhaseController.aggregateDfs(firstResults); - final AtomicInteger counter = new AtomicInteger(firstResults.asList().size()); - - for (final AtomicArray.Entry entry : firstResults.asList()) { - DfsSearchResult dfsResult = entry.value; - DiscoveryNode node = nodeIdToDiscoveryNode.apply(dfsResult.shardTarget().nodeId()); - QuerySearchRequest querySearchRequest = new QuerySearchRequest(request, dfsResult.id(), dfs); - executeSecondPhase(entry.index, dfsResult, counter, node, querySearchRequest); - } - } - - void 
executeSecondPhase(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter, - final DiscoveryNode node, final QuerySearchRequest querySearchRequest) { - searchTransportService.sendExecuteFetch(node, querySearchRequest, task, new ActionListener() { - @Override - public void onResponse(QueryFetchSearchResult result) { - result.shardTarget(dfsResult.shardTarget()); - queryFetchResults.set(shardIndex, result); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - @Override - public void onFailure(Exception t) { - try { - onSecondPhaseFailure(t, querySearchRequest, shardIndex, dfsResult, counter); - } finally { - // the query might not have been executed at all (for example because thread pool rejected execution) - // and the search context that was created in dfs phase might not be released. - // release it again to be in the safe side - sendReleaseSearchContext(querySearchRequest.id(), node); - } - } - }); - } - - void onSecondPhaseFailure(Exception e, QuerySearchRequest querySearchRequest, int shardIndex, DfsSearchResult dfsResult, - AtomicInteger counter) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", querySearchRequest.id()), e); - } - this.addShardFailure(shardIndex, dfsResult.shardTarget(), e); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - private void finishHim() { - getExecutor().execute(new ActionRunnable(listener) { - @Override - public void doRun() throws IOException { - sortedShardDocs = searchPhaseController.sortDocs(true, queryFetchResults); - final InternalSearchResponse internalResponse = searchPhaseController.merge(true, sortedShardDocs, queryFetchResults, - queryFetchResults); - String scrollId = null; - if (request.scroll() != null) { - scrollId = TransportSearchHelper.buildScrollId(request.searchType(), firstResults); - } - listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); - } - - @Override - public void onFailure(Exception e) { - ReduceSearchPhaseException failure = new ReduceSearchPhaseException("query_fetch", "", e, buildShardFailures()); - if (logger.isDebugEnabled()) { - logger.debug("failed to reduce search", failure); - } - super.onFailure(e); - } - }); - - } -} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java index 3fe24cc991139..a87b58c4e67b1 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchDfsQueryThenFetchAsyncAction.java @@ -19,209 +19,43 @@ package org.elasticsearch.action.search; -import com.carrotsearch.hppc.IntArrayList; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.apache.lucene.search.ScoreDoc; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRunnable; -import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.GroupShardsIterator; -import org.elasticsearch.common.util.concurrent.AtomicArray; -import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.search.dfs.AggregatedDfs; +import 
org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.search.dfs.DfsSearchResult; -import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.ShardFetchSearchRequest; import org.elasticsearch.search.internal.AliasFilter; -import org.elasticsearch.search.internal.InternalSearchResponse; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.search.query.QuerySearchRequest; -import org.elasticsearch.search.query.QuerySearchResult; +import org.elasticsearch.transport.Transport; -import java.io.IOException; import java.util.Map; import java.util.concurrent.Executor; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.function.Function; +import java.util.function.BiFunction; -class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction { +final class SearchDfsQueryThenFetchAsyncAction extends AbstractSearchAsyncAction { - final AtomicArray queryResults; - final AtomicArray fetchResults; - final AtomicArray docIdsToLoad; private final SearchPhaseController searchPhaseController; - SearchDfsQueryThenFetchAsyncAction(Logger logger, SearchTransportService searchTransportService, - Function nodeIdToDiscoveryNode, - Map aliasFilter, Map concreteIndexBoosts, - SearchPhaseController searchPhaseController, Executor executor, SearchRequest request, - ActionListener listener, GroupShardsIterator shardsIts, long startTime, - long clusterStateVersion, SearchTask task) { - super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, concreteIndexBoosts, executor, - request, listener, shardsIts, startTime, clusterStateVersion, task); + SearchDfsQueryThenFetchAsyncAction(final Logger logger, final SearchTransportService searchTransportService, + final BiFunction nodeIdToConnection, final Map aliasFilter, + final Map concreteIndexBoosts, final SearchPhaseController searchPhaseController, final Executor executor, + final SearchRequest request, final ActionListener listener, + final GroupShardsIterator shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider, + final long clusterStateVersion, final SearchTask task) { + super("dfs", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener, + shardsIts, timeProvider, clusterStateVersion, task, new SearchPhaseResults<>(shardsIts.size())); this.searchPhaseController = searchPhaseController; - queryResults = new AtomicArray<>(firstResults.length()); - fetchResults = new AtomicArray<>(firstResults.length()); - docIdsToLoad = new AtomicArray<>(firstResults.length()); } @Override - protected String firstPhaseName() { - return "dfs"; + protected void executePhaseOnShard(final SearchShardIterator shardIt, final ShardRouting shard, + final SearchActionListener listener) { + getSearchTransport().sendExecuteDfs(getConnection(shardIt.getClusterAlias(), shard.currentNodeId()), + buildShardSearchRequest(shardIt) , getTask(), listener); } @Override - protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener) { - searchTransportService.sendExecuteDfs(node, request, task, listener); - } - - @Override - protected void moveToSecondPhase() { - final AggregatedDfs dfs = searchPhaseController.aggregateDfs(firstResults); - final AtomicInteger counter = new AtomicInteger(firstResults.asList().size()); - for (final AtomicArray.Entry entry : firstResults.asList()) { - DfsSearchResult dfsResult = entry.value; - 
DiscoveryNode node = nodeIdToDiscoveryNode.apply(dfsResult.shardTarget().nodeId()); - QuerySearchRequest querySearchRequest = new QuerySearchRequest(request, dfsResult.id(), dfs); - executeQuery(entry.index, dfsResult, counter, querySearchRequest, node); - } - } - - void executeQuery(final int shardIndex, final DfsSearchResult dfsResult, final AtomicInteger counter, - final QuerySearchRequest querySearchRequest, final DiscoveryNode node) { - searchTransportService.sendExecuteQuery(node, querySearchRequest, task, new ActionListener() { - @Override - public void onResponse(QuerySearchResult result) { - result.shardTarget(dfsResult.shardTarget()); - queryResults.set(shardIndex, result); - if (counter.decrementAndGet() == 0) { - executeFetchPhase(); - } - } - - @Override - public void onFailure(Exception t) { - try { - onQueryFailure(t, querySearchRequest, shardIndex, dfsResult, counter); - } finally { - // the query might not have been executed at all (for example because thread pool rejected - // execution) and the search context that was created in dfs phase might not be released. - // release it again to be in the safe side - sendReleaseSearchContext(querySearchRequest.id(), node); - } - } - }); - } - - void onQueryFailure(Exception e, QuerySearchRequest querySearchRequest, int shardIndex, DfsSearchResult dfsResult, - AtomicInteger counter) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", querySearchRequest.id()), e); - } - this.addShardFailure(shardIndex, dfsResult.shardTarget(), e); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - if (successfulOps.get() == 0) { - listener.onFailure(new SearchPhaseExecutionException("query", "all shards failed", buildShardFailures())); - } else { - executeFetchPhase(); - } - } - } - - void executeFetchPhase() { - try { - innerExecuteFetchPhase(); - } catch (Exception e) { - listener.onFailure(new ReduceSearchPhaseException("query", "", e, buildShardFailures())); - } - } - - void innerExecuteFetchPhase() throws Exception { - final boolean isScrollRequest = request.scroll() != null; - sortedShardDocs = searchPhaseController.sortDocs(isScrollRequest, queryResults); - searchPhaseController.fillDocIdsToLoad(docIdsToLoad, sortedShardDocs); - - if (docIdsToLoad.asList().isEmpty()) { - finishHim(); - return; - } - - final ScoreDoc[] lastEmittedDocPerShard = (request.scroll() != null) ? 
- searchPhaseController.getLastEmittedDocPerShard(queryResults.asList(), sortedShardDocs, firstResults.length()) : null; - final AtomicInteger counter = new AtomicInteger(docIdsToLoad.asList().size()); - for (final AtomicArray.Entry entry : docIdsToLoad.asList()) { - QuerySearchResult queryResult = queryResults.get(entry.index); - DiscoveryNode node = nodeIdToDiscoveryNode.apply(queryResult.shardTarget().nodeId()); - ShardFetchSearchRequest fetchSearchRequest = createFetchRequest(queryResult, entry, lastEmittedDocPerShard); - executeFetch(entry.index, queryResult.shardTarget(), counter, fetchSearchRequest, node); - } - } - - void executeFetch(final int shardIndex, final SearchShardTarget shardTarget, final AtomicInteger counter, - final ShardFetchSearchRequest fetchSearchRequest, DiscoveryNode node) { - searchTransportService.sendExecuteFetch(node, fetchSearchRequest, task, new ActionListener() { - @Override - public void onResponse(FetchSearchResult result) { - result.shardTarget(shardTarget); - fetchResults.set(shardIndex, result); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - @Override - public void onFailure(Exception t) { - // the search context might not be cleared on the node where the fetch was executed for example - // because the action was rejected by the thread pool. in this case we need to send a dedicated - // request to clear the search context. by setting docIdsToLoad to null, the context will be cleared - // in TransportSearchTypeAction.releaseIrrelevantSearchContexts() after the search request is done. - docIdsToLoad.set(shardIndex, null); - onFetchFailure(t, fetchSearchRequest, shardIndex, shardTarget, counter); - } - }); - } - - void onFetchFailure(Exception e, ShardFetchSearchRequest fetchSearchRequest, int shardIndex, - SearchShardTarget shardTarget, AtomicInteger counter) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute fetch phase", fetchSearchRequest.id()), e); - } - this.addShardFailure(shardIndex, shardTarget, e); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - private void finishHim() { - getExecutor().execute(new ActionRunnable(listener) { - @Override - public void doRun() throws IOException { - final boolean isScrollRequest = request.scroll() != null; - final InternalSearchResponse internalResponse = searchPhaseController.merge(isScrollRequest, sortedShardDocs, queryResults, - fetchResults); - String scrollId = isScrollRequest ? 
TransportSearchHelper.buildScrollId(request.searchType(), firstResults) : null; - listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); - releaseIrrelevantSearchContexts(queryResults, docIdsToLoad); - } - - @Override - public void onFailure(Exception e) { - try { - ReduceSearchPhaseException failure = new ReduceSearchPhaseException("merge", "", e, buildShardFailures()); - if (logger.isDebugEnabled()) { - logger.debug("failed to reduce search", failure); - } - super.onFailure(failure); - } finally { - releaseIrrelevantSearchContexts(queryResults, docIdsToLoad); - } - } - }); + protected SearchPhase getNextPhase(final SearchPhaseResults results, final SearchPhaseContext context) { + return new DfsQueryPhase(results.results, searchPhaseController, (queryResults) -> + new FetchSearchPhase(queryResults, searchPhaseController, context), context); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchPhase.java b/core/src/main/java/org/elasticsearch/action/search/SearchPhase.java new file mode 100644 index 0000000000000..7bb9c2ba28a89 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchPhase.java @@ -0,0 +1,42 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.elasticsearch.common.CheckedRunnable; + +import java.io.IOException; +import java.util.Objects; + +/** + * Base class for all individual search phases like collecting distributed frequencies, fetching documents, querying shards. + */ +abstract class SearchPhase implements CheckedRunnable { + private final String name; + + protected SearchPhase(String name) { + this.name = Objects.requireNonNull(name, "name must not be null"); + } + + /** + * Returns the phases name. + */ + public String getName() { + return name; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchPhaseContext.java b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseContext.java new file mode 100644 index 0000000000000..9829ff6a98337 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseContext.java @@ -0,0 +1,117 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.Logger; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.search.internal.InternalSearchResponse; +import org.elasticsearch.search.internal.ShardSearchTransportRequest; +import org.elasticsearch.transport.Transport; + +import java.util.concurrent.Executor; + +/** + * This class provides contextual state and access to resources across multiple search phases. + */ +interface SearchPhaseContext extends ActionListener, Executor { + // TODO maybe we can make this concrete later - for now we just implement this in the base class for all initial phases + + /** + * Returns the total number of shards of the current search across all indices + */ + int getNumShards(); + + /** + * Returns a logger for this context to prevent each individual phase from creating its own logger. + */ + Logger getLogger(); + + /** + * Returns the currently executing search task + */ + SearchTask getTask(); + + /** + * Returns the currently executing search request + */ + SearchRequest getRequest(); + + /** + * Builds the final search response that should be sent back to the user. + * @param internalSearchResponse the internal search response + * @param scrollId an optional scroll ID if this search is a scroll search + */ + SearchResponse buildSearchResponse(InternalSearchResponse internalSearchResponse, String scrollId); + + /** + * This method will communicate a fatal phase failure back to the user. In contrast to a shard failure, + * this method will immediately fail the search request and return the failure to the issuer of the request. + * @param phase the phase that failed + * @param msg an optional message + * @param cause the cause of the phase failure + */ + void onPhaseFailure(SearchPhase phase, String msg, Throwable cause); + + /** + * This method will record a shard failure for the given shard index. In contrast to a phase failure + * ({@link #onPhaseFailure(SearchPhase, String, Throwable)}) this method will immediately return to the user but will record + * a shard failure for the given shard index. This should be called if a shard failure happens after we successfully retrieved + * a result from that shard in a previous phase. + */ + void onShardFailure(int shardIndex, @Nullable SearchShardTarget shardTarget, Exception e); + + /** + * Returns a connection to the node if connected, otherwise a {@link org.elasticsearch.transport.ConnectTransportException} will be + * thrown. + */ + Transport.Connection getConnection(String clusterAlias, String nodeId); + + /** + * Returns the {@link SearchTransportService} to send shard requests to other nodes + */ + SearchTransportService getSearchTransport(); + + /** + * Releases a search context with the given context ID on the node the given connection is connected to.
+ * @see org.elasticsearch.search.query.QuerySearchResult#getRequestId() + * @see org.elasticsearch.search.fetch.FetchSearchResult#getRequestId() + * + */ + default void sendReleaseSearchContext(long contextId, Transport.Connection connection, OriginalIndices originalIndices) { + if (connection != null) { + getSearchTransport().sendFreeContext(connection, contextId, originalIndices); + } + } + + /** + * Builds an request for the initial search phase. + */ + ShardSearchTransportRequest buildShardSearchRequest(SearchShardIterator shardIt); + + /** + * Processes the phase transition from on phase to another. This method handles all errors that happen during the initial run execution + * of the next phase. If there are no successful operations in the context when this method is executed the search is aborted and + * a response is returned to the user indicating that all shards have failed. + */ + void executeNextPhase(SearchPhase currentPhase, SearchPhase nextPhase); + +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchPhaseController.java b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseController.java index 92270c6fe3680..879607d059e80 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchPhaseController.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseController.java @@ -30,53 +30,45 @@ import org.apache.lucene.search.TermStatistics; import org.apache.lucene.search.TopDocs; import org.apache.lucene.search.TopFieldDocs; +import org.apache.lucene.search.grouping.CollapseTopFieldDocs; import org.elasticsearch.common.collect.HppcMaps; import org.elasticsearch.common.component.AbstractComponent; -import org.elasticsearch.common.lucene.Lucene; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.util.BigArrays; -import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.script.ScriptService; +import org.elasticsearch.search.DocValueFormat; +import org.elasticsearch.search.SearchHit; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.aggregations.InternalAggregation; import org.elasticsearch.search.aggregations.InternalAggregation.ReduceContext; import org.elasticsearch.search.aggregations.InternalAggregations; import org.elasticsearch.search.aggregations.pipeline.SiblingPipelineAggregator; +import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.dfs.AggregatedDfs; import org.elasticsearch.search.dfs.DfsSearchResult; import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.FetchSearchResultProvider; -import org.elasticsearch.search.internal.InternalSearchHit; -import org.elasticsearch.search.internal.InternalSearchHits; import org.elasticsearch.search.internal.InternalSearchResponse; import org.elasticsearch.search.profile.ProfileShardResult; import org.elasticsearch.search.profile.SearchProfileShardResults; import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.query.QuerySearchResultProvider; import org.elasticsearch.search.suggest.Suggest; import org.elasticsearch.search.suggest.Suggest.Suggestion; import org.elasticsearch.search.suggest.Suggest.Suggestion.Entry; import org.elasticsearch.search.suggest.completion.CompletionSuggestion; -import java.io.IOException; import java.util.ArrayList; import java.util.Arrays; +import java.util.Collection; import java.util.Collections; 
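Illustrative aside, not part of the patch: the `SearchPhase` / `SearchPhaseContext` pair introduced above boils down to named, runnable phases that are chained through a context which owns the error handling for each transition. The self-contained sketch below uses invented names (`Phase`, `PhaseChainDemo`, `executeNextPhase`) to show that pattern; it is a simplification under those assumptions, not the Elasticsearch implementation.

```java
import java.util.Objects;

// Minimal stand-ins for the patch's SearchPhase / SearchPhaseContext idea;
// names and behaviour are invented for illustration only.
abstract class Phase {
    private final String name;

    protected Phase(String name) {
        this.name = Objects.requireNonNull(name, "name must not be null");
    }

    String getName() {
        return name;
    }

    abstract void run() throws Exception;
}

final class PhaseChainDemo {

    // Mirrors the spirit of SearchPhaseContext#executeNextPhase: the transition
    // handles failures of the next phase so individual phases stay simple.
    static void executeNextPhase(Phase currentPhase, Phase nextPhase) {
        try {
            nextPhase.run();
        } catch (Exception e) {
            System.err.println("Failed to execute phase [" + nextPhase.getName()
                + "] after [" + currentPhase.getName() + "]: " + e);
        }
    }

    public static void main(String[] args) throws Exception {
        Phase fetch = new Phase("fetch") {
            @Override
            void run() {
                System.out.println("running fetch phase");
            }
        };
        Phase query = new Phase("query") {
            @Override
            void run() {
                System.out.println("running query phase");
                executeNextPhase(this, fetch); // hand over once this phase is done
            }
        };
        query.run();
    }
}
```

Running the demo prints the query phase output followed by the fetch phase output; an exception thrown by `fetch` is reported as a phase-transition failure instead of propagating out of `query`, which is the point of funnelling transitions through one method.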
-import java.util.Comparator; import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.function.IntFunction; import java.util.stream.Collectors; import java.util.stream.StreamSupport; -public class SearchPhaseController extends AbstractComponent { - - private static final Comparator> QUERY_RESULT_ORDERING = (o1, o2) -> { - int i = o1.value.shardTarget().index().compareTo(o2.value.shardTarget().index()); - if (i == 0) { - i = o1.value.shardTarget().shardId().id() - o2.value.shardTarget().shardId().id(); - } - return i; - }; +public final class SearchPhaseController extends AbstractComponent { private static final ScoreDoc[] EMPTY_DOCS = new ScoreDoc[0]; @@ -89,13 +81,13 @@ public SearchPhaseController(Settings settings, BigArrays bigArrays, ScriptServi this.scriptService = scriptService; } - public AggregatedDfs aggregateDfs(AtomicArray results) { + public AggregatedDfs aggregateDfs(Collection results) { ObjectObjectHashMap termStatistics = HppcMaps.newNoNullKeysMap(); ObjectObjectHashMap fieldStatistics = HppcMaps.newNoNullKeysMap(); long aggMaxDoc = 0; - for (AtomicArray.Entry lEntry : results.asList()) { - final Term[] terms = lEntry.value.terms(); - final TermStatistics[] stats = lEntry.value.termStatistics(); + for (DfsSearchResult lEntry : results) { + final Term[] terms = lEntry.terms(); + final TermStatistics[] stats = lEntry.termStatistics(); assert terms.length == stats.length; for (int i = 0; i < terms.length; i++) { assert terms[i] != null; @@ -113,9 +105,9 @@ public AggregatedDfs aggregateDfs(AtomicArray results) { } - assert !lEntry.value.fieldStatistics().containsKey(null); - final Object[] keys = lEntry.value.fieldStatistics().keys; - final Object[] values = lEntry.value.fieldStatistics().values; + assert !lEntry.fieldStatistics().containsKey(null); + final Object[] keys = lEntry.fieldStatistics().keys; + final Object[] values = lEntry.fieldStatistics().values; for (int i = 0; i < keys.length; i++) { if (keys[i] != null) { String key = (String) keys[i]; @@ -135,7 +127,7 @@ public AggregatedDfs aggregateDfs(AtomicArray results) { } } } - aggMaxDoc += lEntry.value.maxDoc(); + aggMaxDoc += lEntry.maxDoc(); } return new AggregatedDfs(termStatistics, fieldStatistics, aggMaxDoc); } @@ -149,173 +141,139 @@ private static long optionalSum(long left, long right) { * named completion suggestion across all shards. If more than one named completion suggestion is specified in the * request, the suggest docs for a named suggestion are ordered by the suggestion name. * + * Note: The order of the sorted score docs depends on the shard index in the result array if the merge process needs to disambiguate + * the result. In oder to obtain stable results the shard index (index of the result in the result array) must be the same. + * * @param ignoreFrom Whether to ignore the from and sort all hits in each shard result. * Enabled only for scroll search, because that only retrieves hits of length 'size' in the query phase. 
- * @param resultsArr Shard result holder + * @param results the search phase results to obtain the sort docs from + * @param bufferedTopDocs the pre-consumed buffered top docs + * @param topDocsStats the top docs stats to fill + * @param from the offset into the search results top docs + * @param size the number of hits to return from the merged top docs */ - public ScoreDoc[] sortDocs(boolean ignoreFrom, AtomicArray resultsArr) throws IOException { - List> results = resultsArr.asList(); + public SortedTopDocs sortDocs(boolean ignoreFrom, Collection results, + final Collection bufferedTopDocs, final TopDocsStats topDocsStats, int from, int size) { if (results.isEmpty()) { - return EMPTY_DOCS; + return SortedTopDocs.EMPTY; } - - boolean canOptimize = false; - QuerySearchResult result = null; - int shardIndex = -1; - if (results.size() == 1) { - canOptimize = true; - result = results.get(0).value.queryResult(); - shardIndex = results.get(0).index; - } else { - // lets see if we only got hits from a single shard, if so, we can optimize... - for (AtomicArray.Entry entry : results) { - if (entry.value.queryResult().hasHits()) { - if (result != null) { // we already have one, can't really optimize - canOptimize = false; - break; - } - canOptimize = true; - result = entry.value.queryResult(); - shardIndex = entry.index; + final Collection topDocs = bufferedTopDocs == null ? new ArrayList<>() : bufferedTopDocs; + final Map>> groupedCompletionSuggestions = new HashMap<>(); + for (SearchPhaseResult sortedResult : results) { // TODO we can move this loop into the reduce call to only loop over this once + /* We loop over all results once, group together the completion suggestions if there are any and collect relevant + * top docs results. Each top docs gets it's shard index set on all top docs to simplify top docs merging down the road + * this allowed to remove a single shared optimization code here since now we don't materialized a dense array of + * top docs anymore but instead only pass relevant results / top docs to the merge method*/ + QuerySearchResult queryResult = sortedResult.queryResult(); + if (queryResult.hasConsumedTopDocs() == false) { // already consumed? 
+ final TopDocs td = queryResult.consumeTopDocs(); + assert td != null; + topDocsStats.add(td); + if (td.scoreDocs.length > 0) { // make sure we set the shard index before we add it - the consumer didn't do that yet + setShardIndex(td, queryResult.getShardIndex()); + topDocs.add(td); } } - } - if (canOptimize) { - int offset = result.from(); - if (ignoreFrom) { - offset = 0; - } - ScoreDoc[] scoreDocs = result.topDocs().scoreDocs; - ScoreDoc[] docs; - int numSuggestDocs = 0; - final Suggest suggest = result.queryResult().suggest(); - final List completionSuggestions; - if (suggest != null) { - completionSuggestions = suggest.filter(CompletionSuggestion.class); - for (CompletionSuggestion suggestion : completionSuggestions) { - numSuggestDocs += suggestion.getOptions().size(); + if (queryResult.hasSuggestHits()) { + Suggest shardSuggest = queryResult.suggest(); + for (CompletionSuggestion suggestion : shardSuggest.filter(CompletionSuggestion.class)) { + suggestion.setShardIndex(sortedResult.getShardIndex()); + List> suggestions = + groupedCompletionSuggestions.computeIfAbsent(suggestion.getName(), s -> new ArrayList<>()); + suggestions.add(suggestion); } - } else { - completionSuggestions = Collections.emptyList(); } - int docsOffset = 0; - if (scoreDocs.length == 0 || scoreDocs.length < offset) { - docs = new ScoreDoc[numSuggestDocs]; - } else { - int resultDocsSize = result.size(); - if ((scoreDocs.length - offset) < resultDocsSize) { - resultDocsSize = scoreDocs.length - offset; + } + final boolean hasHits = (groupedCompletionSuggestions.isEmpty() && topDocs.isEmpty()) == false; + if (hasHits) { + final TopDocs mergedTopDocs = mergeTopDocs(topDocs, size, ignoreFrom ? 0 : from); + final ScoreDoc[] mergedScoreDocs = mergedTopDocs == null ? EMPTY_DOCS : mergedTopDocs.scoreDocs; + ScoreDoc[] scoreDocs = mergedScoreDocs; + if (groupedCompletionSuggestions.isEmpty() == false) { + int numSuggestDocs = 0; + List>> completionSuggestions = + new ArrayList<>(groupedCompletionSuggestions.size()); + for (List> groupedSuggestions : groupedCompletionSuggestions.values()) { + final CompletionSuggestion completionSuggestion = CompletionSuggestion.reduceTo(groupedSuggestions); + assert completionSuggestion != null; + numSuggestDocs += completionSuggestion.getOptions().size(); + completionSuggestions.add(completionSuggestion); } - docs = new ScoreDoc[resultDocsSize + numSuggestDocs]; - for (int i = 0; i < resultDocsSize; i++) { - ScoreDoc scoreDoc = scoreDocs[offset + i]; - scoreDoc.shardIndex = shardIndex; - docs[i] = scoreDoc; - docsOffset++; + scoreDocs = new ScoreDoc[mergedScoreDocs.length + numSuggestDocs]; + System.arraycopy(mergedScoreDocs, 0, scoreDocs, 0, mergedScoreDocs.length); + int offset = mergedScoreDocs.length; + Suggest suggestions = new Suggest(completionSuggestions); + for (CompletionSuggestion completionSuggestion : suggestions.filter(CompletionSuggestion.class)) { + for (CompletionSuggestion.Entry.Option option : completionSuggestion.getOptions()) { + scoreDocs[offset++] = option.getDoc(); + } } } - for (CompletionSuggestion suggestion: completionSuggestions) { - for (CompletionSuggestion.Entry.Option option : suggestion.getOptions()) { - ScoreDoc doc = option.getDoc(); - doc.shardIndex = shardIndex; - docs[docsOffset++] = doc; - } + final boolean isSortedByField; + final SortField[] sortFields; + if (mergedTopDocs != null && mergedTopDocs instanceof TopFieldDocs) { + TopFieldDocs fieldDocs = (TopFieldDocs) mergedTopDocs; + isSortedByField = (fieldDocs instanceof CollapseTopFieldDocs 
&& + fieldDocs.fields.length == 1 && fieldDocs.fields[0].getType() == SortField.Type.SCORE) == false; + sortFields = fieldDocs.fields; + } else { + isSortedByField = false; + sortFields = null; } - return docs; + return new SortedTopDocs(scoreDocs, isSortedByField, sortFields); + } else { + // no relevant docs + return SortedTopDocs.EMPTY; } + } - @SuppressWarnings("unchecked") - AtomicArray.Entry[] sortedResults = results.toArray(new AtomicArray.Entry[results.size()]); - Arrays.sort(sortedResults, QUERY_RESULT_ORDERING); - QuerySearchResultProvider firstResult = sortedResults[0].value; - - int topN = topN(results); - int from = firstResult.queryResult().from(); - if (ignoreFrom) { - from = 0; + TopDocs mergeTopDocs(Collection results, int topN, int from) { + if (results.isEmpty()) { + return null; } - + assert results.isEmpty() == false; + final boolean setShardIndex = false; + final TopDocs topDocs = results.stream().findFirst().get(); final TopDocs mergedTopDocs; - if (firstResult.queryResult().topDocs() instanceof TopFieldDocs) { - TopFieldDocs firstTopDocs = (TopFieldDocs) firstResult.queryResult().topDocs(); + final int numShards = results.size(); + if (numShards == 1 && from == 0) { // only one shard and no pagination we can just return the topDocs as we got them. + return topDocs; + } else if (topDocs instanceof CollapseTopFieldDocs) { + CollapseTopFieldDocs firstTopDocs = (CollapseTopFieldDocs) topDocs; final Sort sort = new Sort(firstTopDocs.fields); - - final TopFieldDocs[] shardTopDocs = new TopFieldDocs[resultsArr.length()]; - for (AtomicArray.Entry sortedResult : sortedResults) { - TopDocs topDocs = sortedResult.value.queryResult().topDocs(); - // the 'index' field is the position in the resultsArr atomic array - shardTopDocs[sortedResult.index] = (TopFieldDocs) topDocs; - } - // TopDocs#merge can't deal with null shard TopDocs - for (int i = 0; i < shardTopDocs.length; ++i) { - if (shardTopDocs[i] == null) { - shardTopDocs[i] = new TopFieldDocs(0, new FieldDoc[0], sort.getSort(), Float.NaN); - } - } - mergedTopDocs = TopDocs.merge(sort, from, topN, shardTopDocs); + final CollapseTopFieldDocs[] shardTopDocs = results.toArray(new CollapseTopFieldDocs[numShards]); + mergedTopDocs = CollapseTopFieldDocs.merge(sort, from, topN, shardTopDocs, setShardIndex); + } else if (topDocs instanceof TopFieldDocs) { + TopFieldDocs firstTopDocs = (TopFieldDocs) topDocs; + final Sort sort = new Sort(firstTopDocs.fields); + final TopFieldDocs[] shardTopDocs = results.toArray(new TopFieldDocs[numShards]); + mergedTopDocs = TopDocs.merge(sort, from, topN, shardTopDocs, setShardIndex); } else { - final TopDocs[] shardTopDocs = new TopDocs[resultsArr.length()]; - for (AtomicArray.Entry sortedResult : sortedResults) { - TopDocs topDocs = sortedResult.value.queryResult().topDocs(); - // the 'index' field is the position in the resultsArr atomic array - shardTopDocs[sortedResult.index] = topDocs; - } - // TopDocs#merge can't deal with null shard TopDocs - for (int i = 0; i < shardTopDocs.length; ++i) { - if (shardTopDocs[i] == null) { - shardTopDocs[i] = Lucene.EMPTY_TOP_DOCS; - } - } - mergedTopDocs = TopDocs.merge(from, topN, shardTopDocs); + final TopDocs[] shardTopDocs = results.toArray(new TopDocs[numShards]); + mergedTopDocs = TopDocs.merge(from, topN, shardTopDocs, setShardIndex); } + return mergedTopDocs; + } - ScoreDoc[] scoreDocs = mergedTopDocs.scoreDocs; - final Map>> groupedCompletionSuggestions = new HashMap<>(); - // group suggestions and assign shard index - for (AtomicArray.Entry 
sortedResult : sortedResults) { - Suggest shardSuggest = sortedResult.value.queryResult().suggest(); - if (shardSuggest != null) { - for (CompletionSuggestion suggestion : shardSuggest.filter(CompletionSuggestion.class)) { - suggestion.setShardIndex(sortedResult.index); - List> suggestions = - groupedCompletionSuggestions.computeIfAbsent(suggestion.getName(), s -> new ArrayList<>()); - suggestions.add(suggestion); - } + private static void setShardIndex(TopDocs topDocs, int shardIndex) { + for (ScoreDoc doc : topDocs.scoreDocs) { + if (doc.shardIndex != -1) { + // once there is a single shard index initialized all others will be initialized too + // there are many asserts down in lucene land that this is actually true. we can shortcut it here. + return; } + doc.shardIndex = shardIndex; } - if (groupedCompletionSuggestions.isEmpty() == false) { - int numSuggestDocs = 0; - List>> completionSuggestions = - new ArrayList<>(groupedCompletionSuggestions.size()); - for (List> groupedSuggestions : groupedCompletionSuggestions.values()) { - final CompletionSuggestion completionSuggestion = CompletionSuggestion.reduceTo(groupedSuggestions); - assert completionSuggestion != null; - numSuggestDocs += completionSuggestion.getOptions().size(); - completionSuggestions.add(completionSuggestion); - } - scoreDocs = new ScoreDoc[mergedTopDocs.scoreDocs.length + numSuggestDocs]; - System.arraycopy(mergedTopDocs.scoreDocs, 0, scoreDocs, 0, mergedTopDocs.scoreDocs.length); - int offset = mergedTopDocs.scoreDocs.length; - Suggest suggestions = new Suggest(completionSuggestions); - for (CompletionSuggestion completionSuggestion : suggestions.filter(CompletionSuggestion.class)) { - for (CompletionSuggestion.Entry.Option option : completionSuggestion.getOptions()) { - scoreDocs[offset++] = option.getDoc(); - } - } - } - return scoreDocs; } - public ScoreDoc[] getLastEmittedDocPerShard(List> queryResults, - ScoreDoc[] sortedScoreDocs, int numShards) { - ScoreDoc[] lastEmittedDocPerShard = new ScoreDoc[numShards]; - if (queryResults.isEmpty() == false) { - long fetchHits = 0; - for (AtomicArray.Entry queryResult : queryResults) { - fetchHits += queryResult.value.queryResult().topDocs().scoreDocs.length; - } + public ScoreDoc[] getLastEmittedDocPerShard(ReducedQueryPhase reducedQueryPhase, int numShards) { + final ScoreDoc[] lastEmittedDocPerShard = new ScoreDoc[numShards]; + if (reducedQueryPhase.isEmptyResult == false) { + final ScoreDoc[] sortedScoreDocs = reducedQueryPhase.scoreDocs; // from is always zero as when we use scroll, we ignore from - long size = Math.min(fetchHits, topN(queryResults)); + long size = Math.min(reducedQueryPhase.fetchHits, reducedQueryPhase.size); + // with collapsing we can have more hits than sorted docs + size = Math.min(sortedScoreDocs.length, size); for (int sortedDocsIndex = 0; sortedDocsIndex < size; sortedDocsIndex++) { ScoreDoc scoreDoc = sortedScoreDocs[sortedDocsIndex]; lastEmittedDocPerShard[scoreDoc.shardIndex] = scoreDoc; @@ -328,15 +286,16 @@ public ScoreDoc[] getLastEmittedDocPerShard(List docIdsToLoad, ScoreDoc[] shardDocs) { + public IntArrayList[] fillDocIdsToLoad(int numShards, ScoreDoc[] shardDocs) { + IntArrayList[] docIdsToLoad = new IntArrayList[numShards]; for (ScoreDoc shardDoc : shardDocs) { - IntArrayList shardDocIdsToLoad = docIdsToLoad.get(shardDoc.shardIndex); + IntArrayList shardDocIdsToLoad = docIdsToLoad[shardDoc.shardIndex]; if (shardDocIdsToLoad == null) { - shardDocIdsToLoad = new IntArrayList(); // can't be shared!, uses unsafe on it later on - 
docIdsToLoad.set(shardDoc.shardIndex, shardDocIdsToLoad); + shardDocIdsToLoad = docIdsToLoad[shardDoc.shardIndex] = new IntArrayList(); } shardDocIdsToLoad.add(shardDoc.doc); } + return docIdsToLoad; } /** @@ -346,39 +305,170 @@ public void fillDocIdsToLoad(AtomicArray docIdsToLoad, ScoreDoc[] * Expects sortedDocs to have top search docs across all shards, optionally followed by top suggest docs for each named * completion suggestion ordered by suggestion name */ - public InternalSearchResponse merge(boolean ignoreFrom, ScoreDoc[] sortedDocs, - AtomicArray queryResultsArr, - AtomicArray fetchResultsArr) { - - List> queryResults = queryResultsArr.asList(); - List> fetchResults = fetchResultsArr.asList(); - - if (queryResults.isEmpty()) { + public InternalSearchResponse merge(boolean ignoreFrom, ReducedQueryPhase reducedQueryPhase, + Collection fetchResults, IntFunction resultsLookup) { + if (reducedQueryPhase.isEmptyResult) { return InternalSearchResponse.empty(); } + ScoreDoc[] sortedDocs = reducedQueryPhase.scoreDocs; + SearchHits hits = getHits(reducedQueryPhase, ignoreFrom, fetchResults, resultsLookup); + if (reducedQueryPhase.suggest != null) { + if (!fetchResults.isEmpty()) { + int currentOffset = hits.getHits().length; + for (CompletionSuggestion suggestion : reducedQueryPhase.suggest.filter(CompletionSuggestion.class)) { + final List suggestionOptions = suggestion.getOptions(); + for (int scoreDocIndex = currentOffset; scoreDocIndex < currentOffset + suggestionOptions.size(); scoreDocIndex++) { + ScoreDoc shardDoc = sortedDocs[scoreDocIndex]; + SearchPhaseResult searchResultProvider = resultsLookup.apply(shardDoc.shardIndex); + if (searchResultProvider == null) { + // this can happen if we are hitting a shard failure during the fetch phase + // in this case we referenced the shard result via teh ScoreDoc but never got a + // result from fetch. + // TODO it would be nice to assert this in the future + continue; + } + FetchSearchResult fetchResult = searchResultProvider.fetchResult(); + final int index = fetchResult.counterGetAndIncrement(); + assert index < fetchResult.hits().internalHits().length : "not enough hits fetched. 
index [" + index + "] length: " + + fetchResult.hits().internalHits().length; + SearchHit hit = fetchResult.hits().internalHits()[index]; + CompletionSuggestion.Entry.Option suggestOption = + suggestionOptions.get(scoreDocIndex - currentOffset); + hit.score(shardDoc.score); + hit.shard(fetchResult.getSearchShardTarget()); + suggestOption.setHit(hit); + } + currentOffset += suggestionOptions.size(); + } + assert currentOffset == sortedDocs.length : "expected no more score doc slices"; + } + } + return reducedQueryPhase.buildResponse(hits); + } - QuerySearchResult firstResult = queryResults.get(0).value.queryResult(); - - boolean sorted = false; + private SearchHits getHits(ReducedQueryPhase reducedQueryPhase, boolean ignoreFrom, + Collection fetchResults, IntFunction resultsLookup) { + final boolean sorted = reducedQueryPhase.isSortedByField; + ScoreDoc[] sortedDocs = reducedQueryPhase.scoreDocs; int sortScoreIndex = -1; - if (firstResult.topDocs() instanceof TopFieldDocs) { - sorted = true; - TopFieldDocs fieldDocs = (TopFieldDocs) firstResult.queryResult().topDocs(); - for (int i = 0; i < fieldDocs.fields.length; i++) { - if (fieldDocs.fields[i].getType() == SortField.Type.SCORE) { + if (sorted) { + for (int i = 0; i < reducedQueryPhase.sortField.length; i++) { + if (reducedQueryPhase.sortField[i].getType() == SortField.Type.SCORE) { sortScoreIndex = i; } } } + // clean the fetch counter + for (SearchPhaseResult entry : fetchResults) { + entry.fetchResult().initCounter(); + } + int from = ignoreFrom ? 0 : reducedQueryPhase.from; + int numSearchHits = (int) Math.min(reducedQueryPhase.fetchHits - from, reducedQueryPhase.size); + // with collapsing we can have more fetch hits than sorted docs + numSearchHits = Math.min(sortedDocs.length, numSearchHits); + // merge hits + List hits = new ArrayList<>(); + if (!fetchResults.isEmpty()) { + for (int i = 0; i < numSearchHits; i++) { + ScoreDoc shardDoc = sortedDocs[i]; + SearchPhaseResult fetchResultProvider = resultsLookup.apply(shardDoc.shardIndex); + if (fetchResultProvider == null) { + // this can happen if we are hitting a shard failure during the fetch phase + // in this case we referenced the shard result via teh ScoreDoc but never got a + // result from fetch. + // TODO it would be nice to assert this in the future + continue; + } + FetchSearchResult fetchResult = fetchResultProvider.fetchResult(); + final int index = fetchResult.counterGetAndIncrement(); + assert index < fetchResult.hits().internalHits().length : "not enough hits fetched. index [" + index + "] length: " + + fetchResult.hits().internalHits().length; + SearchHit searchHit = fetchResult.hits().internalHits()[index]; + searchHit.score(shardDoc.score); + searchHit.shard(fetchResult.getSearchShardTarget()); + if (sorted) { + FieldDoc fieldDoc = (FieldDoc) shardDoc; + searchHit.sortValues(fieldDoc.fields, reducedQueryPhase.sortValueFormats); + if (sortScoreIndex != -1) { + searchHit.score(((Number) fieldDoc.fields[sortScoreIndex]).floatValue()); + } + } + hits.add(searchHit); + } + } + return new SearchHits(hits.toArray(new SearchHit[hits.size()]), reducedQueryPhase.totalHits, + reducedQueryPhase.maxScore); + } - // count the total (we use the query result provider here, since we might not get any hits (we scrolled past them)) - long totalHits = 0; - long fetchHits = 0; - float maxScore = Float.NEGATIVE_INFINITY; + /** + * Reduces the given query results and consumes all aggregations and profile results. 
+ * @param queryResults a list of non-null query shard results + */ + public ReducedQueryPhase reducedQueryPhase(Collection queryResults, boolean isScrollRequest) { + return reducedQueryPhase(queryResults, isScrollRequest, true); + } + + /** + * Reduces the given query results and consumes all aggregations and profile results. + * @param queryResults a list of non-null query shard results + */ + public ReducedQueryPhase reducedQueryPhase(Collection queryResults, boolean isScrollRequest, boolean trackTotalHits) { + return reducedQueryPhase(queryResults, null, new ArrayList<>(), new TopDocsStats(trackTotalHits), 0, isScrollRequest); + } + + + /** + * Reduces the given query results and consumes all aggregations and profile results. + * @param queryResults a list of non-null query shard results + * @param bufferedAggs a list of pre-collected / buffered aggregations. if this list is non-null all aggregations have been consumed + * from all non-null query results. + * @param bufferedTopDocs a list of pre-collected / buffered top docs. if this list is non-null all top docs have been consumed + * from all non-null query results. + * @param numReducePhases the number of non-final reduce phases applied to the query results. + * @see QuerySearchResult#consumeAggs() + * @see QuerySearchResult#consumeProfileResult() + */ + private ReducedQueryPhase reducedQueryPhase(Collection queryResults, + List bufferedAggs, List bufferedTopDocs, + TopDocsStats topDocsStats, int numReducePhases, boolean isScrollRequest) { + assert numReducePhases >= 0 : "num reduce phases must be >= 0 but was: " + numReducePhases; + numReducePhases++; // increment for this phase boolean timedOut = false; Boolean terminatedEarly = null; - for (AtomicArray.Entry entry : queryResults) { - QuerySearchResult result = entry.value.queryResult(); + if (queryResults.isEmpty()) { // early terminate we have nothing to reduce + return new ReducedQueryPhase(topDocsStats.totalHits, topDocsStats.fetchHits, topDocsStats.maxScore, + timedOut, terminatedEarly, null, null, null, EMPTY_DOCS, null, null, numReducePhases, false, 0, 0, true); + } + final QuerySearchResult firstResult = queryResults.stream().findFirst().get().queryResult(); + final boolean hasSuggest = firstResult.suggest() != null; + final boolean hasProfileResults = firstResult.hasProfileResults(); + final boolean consumeAggs; + final List aggregationsList; + if (bufferedAggs != null) { + consumeAggs = false; + // we already have results from intermediate reduces and just need to perform the final reduce + assert firstResult.hasAggs() : "firstResult has no aggs but we got non null buffered aggs?"; + aggregationsList = bufferedAggs; + } else if (firstResult.hasAggs()) { + // the number of shards was less than the buffer size so we reduce agg results directly + aggregationsList = new ArrayList<>(queryResults.size()); + consumeAggs = true; + } else { + // no aggregations + aggregationsList = Collections.emptyList(); + consumeAggs = false; + } + + // count the total (we use the query result provider here, since we might not get any hits (we scrolled past them)) + final Map> groupedSuggestions = hasSuggest ? new HashMap<>() : Collections.emptyMap(); + final Map profileResults = hasProfileResults ? 
new HashMap<>(queryResults.size()) + : Collections.emptyMap(); + int from = 0; + int size = 0; + for (SearchPhaseResult entry : queryResults) { + QuerySearchResult result = entry.queryResult(); + from = result.from(); + size = result.size(); if (result.searchTimedOut()) { timedOut = true; } @@ -389,141 +479,300 @@ public InternalSearchResponse merge(boolean ignoreFrom, ScoreDoc[] sortedDocs, terminatedEarly = true; } } - totalHits += result.topDocs().totalHits; - fetchHits += result.topDocs().scoreDocs.length; - if (!Float.isNaN(result.topDocs().getMaxScore())) { - maxScore = Math.max(maxScore, result.topDocs().getMaxScore()); + if (hasSuggest) { + assert result.suggest() != null; + for (Suggestion> suggestion : result.suggest()) { + List suggestionList = groupedSuggestions.computeIfAbsent(suggestion.getName(), s -> new ArrayList<>()); + suggestionList.add(suggestion); + } + } + if (consumeAggs) { + aggregationsList.add((InternalAggregations) result.consumeAggs()); + } + if (hasProfileResults) { + String key = result.getSearchShardTarget().toString(); + profileResults.put(key, result.consumeProfileResult()); } } - if (Float.isInfinite(maxScore)) { - maxScore = Float.NaN; - } + final Suggest suggest = groupedSuggestions.isEmpty() ? null : new Suggest(Suggest.reduce(groupedSuggestions)); + ReduceContext reduceContext = new ReduceContext(bigArrays, scriptService, true); + final InternalAggregations aggregations = aggregationsList.isEmpty() ? null : reduceAggs(aggregationsList, + firstResult.pipelineAggregators(), reduceContext); + final SearchProfileShardResults shardResults = profileResults.isEmpty() ? null : new SearchProfileShardResults(profileResults); + final SortedTopDocs scoreDocs = this.sortDocs(isScrollRequest, queryResults, bufferedTopDocs, topDocsStats, from, size); + return new ReducedQueryPhase(topDocsStats.totalHits, topDocsStats.fetchHits, topDocsStats.maxScore, + timedOut, terminatedEarly, suggest, aggregations, shardResults, scoreDocs.scoreDocs, scoreDocs.sortFields, + firstResult != null ? firstResult.sortValueFormats() : null, + numReducePhases, scoreDocs.isSortedByField, size, from, firstResult == null); + } - // clean the fetch counter - for (AtomicArray.Entry entry : fetchResults) { - entry.value.fetchResult().initCounter(); - } - int from = ignoreFrom ? 0 : firstResult.queryResult().from(); - int numSearchHits = (int) Math.min(fetchHits - from, topN(queryResults)); - // merge hits - List hits = new ArrayList<>(); - if (!fetchResults.isEmpty()) { - for (int i = 0; i < numSearchHits; i++) { - ScoreDoc shardDoc = sortedDocs[i]; - FetchSearchResultProvider fetchResultProvider = fetchResultsArr.get(shardDoc.shardIndex); - if (fetchResultProvider == null) { - continue; - } - FetchSearchResult fetchResult = fetchResultProvider.fetchResult(); - int index = fetchResult.counterGetAndIncrement(); - if (index < fetchResult.hits().internalHits().length) { - InternalSearchHit searchHit = fetchResult.hits().internalHits()[index]; - searchHit.score(shardDoc.score); - searchHit.shard(fetchResult.shardTarget()); - if (sorted) { - FieldDoc fieldDoc = (FieldDoc) shardDoc; - searchHit.sortValues(fieldDoc.fields, firstResult.sortValueFormats()); - if (sortScoreIndex != -1) { - searchHit.score(((Number) fieldDoc.fields[sortScoreIndex]).floatValue()); - } - } - hits.add(searchHit); - } + + /** + * Performs an intermediate reduce phase on the aggregations. For instance with this reduce phase never prune information + * that relevant for the final reduce step. 
For final reduce see {@link #reduceAggs(List, List, ReduceContext)} + */ + private InternalAggregations reduceAggsIncrementally(List aggregationsList) { + ReduceContext reduceContext = new ReduceContext(bigArrays, scriptService, false); + return aggregationsList.isEmpty() ? null : reduceAggs(aggregationsList, + null, reduceContext); + } + + private InternalAggregations reduceAggs(List aggregationsList, + List pipelineAggregators, ReduceContext reduceContext) { + InternalAggregations aggregations = InternalAggregations.reduce(aggregationsList, reduceContext); + if (pipelineAggregators != null) { + List newAggs = StreamSupport.stream(aggregations.spliterator(), false) + .map((p) -> (InternalAggregation) p) + .collect(Collectors.toList()); + for (SiblingPipelineAggregator pipelineAggregator : pipelineAggregators) { + InternalAggregation newAgg = pipelineAggregator.doReduce(new InternalAggregations(newAggs), reduceContext); + newAggs.add(newAgg); } + return new InternalAggregations(newAggs); } + return aggregations; + } - // merge suggest results - Suggest suggest = null; - if (firstResult.suggest() != null) { - final Map> groupedSuggestions = new HashMap<>(); - for (AtomicArray.Entry queryResult : queryResults) { - Suggest shardSuggest = queryResult.value.queryResult().suggest(); - if (shardSuggest != null) { - for (Suggestion> suggestion : shardSuggest) { - List suggestionList = groupedSuggestions.computeIfAbsent(suggestion.getName(), s -> new ArrayList<>()); - suggestionList.add(suggestion); - } - } + public static final class ReducedQueryPhase { + // the sum of all hits across all reduces shards + final long totalHits; + // the number of returned hits (doc IDs) across all reduces shards + final long fetchHits; + // the max score across all reduces hits or {@link Float#NaN} if no hits returned + final float maxScore; + // true if at least one reduced result timed out + final boolean timedOut; + // non null and true if at least one reduced result was terminated early + final Boolean terminatedEarly; + // the reduced suggest results + final Suggest suggest; + // the reduced internal aggregations + final InternalAggregations aggregations; + // the reduced profile results + final SearchProfileShardResults shardResults; + // the number of reduces phases + final int numReducePhases; + // the searches merged top docs + final ScoreDoc[] scoreDocs; + // the top docs sort fields used to sort the score docs, null if the results are not sorted + final SortField[] sortField; + // true iff the result score docs is sorted by a field (not score), this implies that sortField is set. + final boolean isSortedByField; + // the size of the top hits to return + final int size; + // true iff the query phase had no results. 
Otherwise false + final boolean isEmptyResult; + // the offset into the merged top hits + final int from; + // sort value formats used to sort / format the result + final DocValueFormat[] sortValueFormats; + + ReducedQueryPhase(long totalHits, long fetchHits, float maxScore, boolean timedOut, Boolean terminatedEarly, Suggest suggest, + InternalAggregations aggregations, SearchProfileShardResults shardResults, ScoreDoc[] scoreDocs, + SortField[] sortFields, DocValueFormat[] sortValueFormats, int numReducePhases, boolean isSortedByField, int size, + int from, boolean isEmptyResult) { + if (numReducePhases <= 0) { + throw new IllegalArgumentException("at least one reduce phase must have been applied but was: " + numReducePhases); } - if (groupedSuggestions.isEmpty() == false) { - suggest = new Suggest(Suggest.reduce(groupedSuggestions)); - if (!fetchResults.isEmpty()) { - int currentOffset = numSearchHits; - for (CompletionSuggestion suggestion : suggest.filter(CompletionSuggestion.class)) { - final List suggestionOptions = suggestion.getOptions(); - for (int scoreDocIndex = currentOffset; scoreDocIndex < currentOffset + suggestionOptions.size(); scoreDocIndex++) { - ScoreDoc shardDoc = sortedDocs[scoreDocIndex]; - FetchSearchResultProvider fetchSearchResultProvider = fetchResultsArr.get(shardDoc.shardIndex); - if (fetchSearchResultProvider == null) { - continue; - } - FetchSearchResult fetchResult = fetchSearchResultProvider.fetchResult(); - int fetchResultIndex = fetchResult.counterGetAndIncrement(); - if (fetchResultIndex < fetchResult.hits().internalHits().length) { - InternalSearchHit hit = fetchResult.hits().internalHits()[fetchResultIndex]; - CompletionSuggestion.Entry.Option suggestOption = - suggestionOptions.get(scoreDocIndex - currentOffset); - hit.score(shardDoc.score); - hit.shard(fetchResult.shardTarget()); - suggestOption.setHit(hit); - } - } - currentOffset += suggestionOptions.size(); - } - assert currentOffset == sortedDocs.length : "expected no more score doc slices"; - } + this.totalHits = totalHits; + this.fetchHits = fetchHits; + if (Float.isInfinite(maxScore)) { + this.maxScore = Float.NaN; + } else { + this.maxScore = maxScore; } + this.timedOut = timedOut; + this.terminatedEarly = terminatedEarly; + this.suggest = suggest; + this.aggregations = aggregations; + this.shardResults = shardResults; + this.numReducePhases = numReducePhases; + this.scoreDocs = scoreDocs; + this.sortField = sortFields; + this.isSortedByField = isSortedByField; + this.size = size; + this.from = from; + this.isEmptyResult = isEmptyResult; + this.sortValueFormats = sortValueFormats; + } + + /** + * Creates a new search response from the given merged hits. + * @see #merge(boolean, ReducedQueryPhase, Collection, IntFunction) + */ + public InternalSearchResponse buildResponse(SearchHits hits) { + return new InternalSearchResponse(hits, aggregations, suggest, shardResults, timedOut, terminatedEarly, numReducePhases); } + } - // merge Aggregation - InternalAggregations aggregations = null; - if (firstResult.aggregations() != null && firstResult.aggregations().asList() != null) { - List aggregationsList = new ArrayList<>(queryResults.size()); - for (AtomicArray.Entry entry : queryResults) { - aggregationsList.add((InternalAggregations) entry.value.queryResult().aggregations()); + /** + * A {@link org.elasticsearch.action.search.InitialSearchPhase.SearchPhaseResults} implementation + * that incrementally reduces aggregation results as shard results are consumed. 
+ * This implementation can be configured to batch up a certain amount of results and only reduce them + * iff the buffer is exhausted. + */ + static final class QueryPhaseResultConsumer extends InitialSearchPhase.SearchPhaseResults { + private final InternalAggregations[] aggsBuffer; + private final TopDocs[] topDocsBuffer; + private final boolean hasAggs; + private final boolean hasTopDocs; + private final int bufferSize; + private int index; + private final SearchPhaseController controller; + private int numReducePhases = 0; + private final TopDocsStats topDocsStats = new TopDocsStats(); + + /** + * Creates a new {@link QueryPhaseResultConsumer} + * @param controller a controller instance to reduce the query response objects + * @param expectedResultSize the expected number of query results. Corresponds to the number of shards queried + * @param bufferSize the size of the reduce buffer. if the buffer size is smaller than the number of expected results + * the buffer is used to incrementally reduce aggregation results before all shards responded. + */ + private QueryPhaseResultConsumer(SearchPhaseController controller, int expectedResultSize, int bufferSize, + boolean hasTopDocs, boolean hasAggs) { + super(expectedResultSize); + if (expectedResultSize != 1 && bufferSize < 2) { + throw new IllegalArgumentException("buffer size must be >= 2 if there is more than one expected result"); } - ReduceContext reduceContext = new ReduceContext(bigArrays, scriptService); - aggregations = InternalAggregations.reduce(aggregationsList, reduceContext); - List pipelineAggregators = firstResult.pipelineAggregators(); - if (pipelineAggregators != null) { - List newAggs = StreamSupport.stream(aggregations.spliterator(), false) - .map((p) -> (InternalAggregation) p) - .collect(Collectors.toList()); - for (SiblingPipelineAggregator pipelineAggregator : pipelineAggregators) { - InternalAggregation newAgg = pipelineAggregator.doReduce(new InternalAggregations(newAggs), reduceContext); - newAggs.add(newAgg); - } - aggregations = new InternalAggregations(newAggs); + if (expectedResultSize <= bufferSize) { + throw new IllegalArgumentException("buffer size must be less than the expected result size"); } + if (hasAggs == false && hasTopDocs == false) { + throw new IllegalArgumentException("either aggs or top docs must be present"); + } + this.controller = controller; + // no need to buffer anything if we have less expected results. in this case we don't consume any results ahead of time. + this.aggsBuffer = new InternalAggregations[hasAggs ? bufferSize : 0]; + this.topDocsBuffer = new TopDocs[hasTopDocs ? 
bufferSize : 0]; + this.hasTopDocs = hasTopDocs; + this.hasAggs = hasAggs; + this.bufferSize = bufferSize; + } - //Collect profile results - SearchProfileShardResults shardResults = null; - if (firstResult.profileResults() != null) { - Map profileResults = new HashMap<>(queryResults.size()); - for (AtomicArray.Entry entry : queryResults) { - String key = entry.value.queryResult().shardTarget().toString(); - profileResults.put(key, entry.value.queryResult().profileResults()); + @Override + public void consumeResult(SearchPhaseResult result) { + super.consumeResult(result); + QuerySearchResult queryResult = result.queryResult(); + consumeInternal(queryResult); + } + + private synchronized void consumeInternal(QuerySearchResult querySearchResult) { + if (index == bufferSize) { + if (hasAggs) { + InternalAggregations reducedAggs = controller.reduceAggsIncrementally(Arrays.asList(aggsBuffer)); + Arrays.fill(aggsBuffer, null); + aggsBuffer[0] = reducedAggs; + } + if (hasTopDocs) { + TopDocs reducedTopDocs = controller.mergeTopDocs(Arrays.asList(topDocsBuffer), + querySearchResult.from() + querySearchResult.size() // we have to merge here in the same way we collect on a shard + , 0); + Arrays.fill(topDocsBuffer, null); + topDocsBuffer[0] = reducedTopDocs; + } + numReducePhases++; + index = 1; + } + final int i = index++; + if (hasAggs) { + aggsBuffer[i] = (InternalAggregations) querySearchResult.consumeAggs(); + } + if (hasTopDocs) { + final TopDocs topDocs = querySearchResult.consumeTopDocs(); // can't be null + topDocsStats.add(topDocs); + SearchPhaseController.setShardIndex(topDocs, querySearchResult.getShardIndex()); + topDocsBuffer[i] = topDocs; } - shardResults = new SearchProfileShardResults(profileResults); } - InternalSearchHits searchHits = new InternalSearchHits(hits.toArray(new InternalSearchHit[hits.size()]), totalHits, maxScore); + private synchronized List getRemainingAggs() { + return hasAggs ? Arrays.asList(aggsBuffer).subList(0, index) : null; + } + + private synchronized List getRemainingTopDocs() { + return hasTopDocs ? Arrays.asList(topDocsBuffer).subList(0, index) : null; + } + - return new InternalSearchResponse(searchHits, aggregations, suggest, shardResults, timedOut, terminatedEarly); + @Override + public ReducedQueryPhase reduce() { + return controller.reducedQueryPhase(results.asList(), getRemainingAggs(), getRemainingTopDocs(), topDocsStats, + numReducePhases, false); + } + + /** + * Returns the number of buffered results + */ + int getNumBuffered() { + return index; + } + + int getNumReducePhases() { return numReducePhases; } } /** - * returns the number of top results to be considered across all shards + * Returns a new SearchPhaseResults instance. This might return an instance that reduces search responses incrementally. */ - private static int topN(List> queryResults) { - QuerySearchResultProvider firstResult = queryResults.get(0).value; - int topN = firstResult.queryResult().size(); - if (firstResult.includeFetch()) { - // if we did both query and fetch on the same go, we have fetched all the docs from each shards already, use them... 
- // this is also important since we shortcut and fetch only docs from "from" and up to "size" - topN *= queryResults.size(); - } - return topN; + InitialSearchPhase.SearchPhaseResults newSearchPhaseResults(SearchRequest request, int numShards) { + SearchSourceBuilder source = request.source(); + boolean isScrollRequest = request.scroll() != null; + final boolean hasAggs = source != null && source.aggregations() != null; + final boolean hasTopDocs = source == null || source.size() != 0; + final boolean trackTotalHits = source == null || source.trackTotalHits(); + + if (isScrollRequest == false && (hasAggs || hasTopDocs)) { + // no incremental reduce if scroll is used - we only hit a single shard or sometimes more... + if (request.getBatchedReduceSize() < numShards) { + // only use this if there are aggs and if there are more shards than we should reduce at once + return new QueryPhaseResultConsumer(this, numShards, request.getBatchedReduceSize(), hasTopDocs, hasAggs); + } + } + return new InitialSearchPhase.SearchPhaseResults(numShards) { + @Override + public ReducedQueryPhase reduce() { + return reducedQueryPhase(results.asList(), isScrollRequest, trackTotalHits); + } + }; + } + + static final class TopDocsStats { + final boolean trackTotalHits; + long totalHits; + long fetchHits; + float maxScore = Float.NEGATIVE_INFINITY; + + TopDocsStats() { + this(true); + } + + TopDocsStats(boolean trackTotalHits) { + this.trackTotalHits = trackTotalHits; + this.totalHits = trackTotalHits ? 0 : -1; + } + + void add(TopDocs topDocs) { + if (trackTotalHits) { + totalHits += topDocs.totalHits; + } + fetchHits += topDocs.scoreDocs.length; + if (!Float.isNaN(topDocs.getMaxScore())) { + maxScore = Math.max(maxScore, topDocs.getMaxScore()); + } + } + } + + static final class SortedTopDocs { + static final SortedTopDocs EMPTY = new SortedTopDocs(EMPTY_DOCS, false, null); + final ScoreDoc[] scoreDocs; + final boolean isSortedByField; + final SortField[] sortFields; + + SortedTopDocs(ScoreDoc[] scoreDocs, boolean isSortedByField, SortField[] sortFields) { + this.scoreDocs = scoreDocs; + this.isSortedByField = isSortedByField; + this.sortFields = sortFields; + } } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchPhaseExecutionException.java b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseExecutionException.java index 515d3204fb6a8..c6e0b21dffd5d 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchPhaseExecutionException.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchPhaseExecutionException.java @@ -103,6 +103,7 @@ public ShardSearchFailure[] shardFailures() { return shardFailures; } + @Override public Throwable getCause() { Throwable cause = super.getCause(); if (cause == null) { @@ -131,28 +132,34 @@ private static String buildMessage(String phaseName, String msg, ShardSearchFail } @Override - protected void innerToXContent(XContentBuilder builder, Params params) throws IOException { + protected void metadataToXContent(XContentBuilder builder, Params params) throws IOException { builder.field("phase", phaseName); final boolean group = params.paramAsBoolean("group_shard_failures", true); // we group by default builder.field("grouped", group); // notify that it's grouped builder.field("failed_shards"); builder.startArray(); - ShardOperationFailedException[] failures = params.paramAsBoolean("group_shard_failures", true) ? 
ExceptionsHelper.groupBy(shardFailures) : shardFailures; + ShardOperationFailedException[] failures = params.paramAsBoolean("group_shard_failures", true) ? + ExceptionsHelper.groupBy(shardFailures) : shardFailures; for (ShardOperationFailedException failure : failures) { builder.startObject(); failure.toXContent(builder, params); builder.endObject(); } builder.endArray(); - super.innerToXContent(builder, params); } @Override - protected void causeToXContent(XContentBuilder builder, Params params) throws IOException { - if (super.getCause() != null) { - // if the cause is null we inject a guessed root cause that will then be rendered twice so wi disable it manually - super.causeToXContent(builder, params); + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + Throwable ex = ExceptionsHelper.unwrapCause(this); + if (ex != this) { + generateThrowableXContent(builder, params, this); + } else { + // We don't have a cause when all shards failed, but we do have shards failures so we can "guess" a cause + // (see {@link #getCause()}). Here, we use super.getCause() because we don't want the guessed exception to + // be rendered twice (one in the "cause" field, one in "failed_shards") + innerToXContent(builder, params, this, getExceptionName(), getMessage(), getHeaders(), getMetadata(), super.getCause()); } + return builder; } @Override diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchQueryAndFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchQueryAndFetchAsyncAction.java deleted file mode 100644 index f597ede64bc32..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/search/SearchQueryAndFetchAsyncAction.java +++ /dev/null @@ -1,89 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.action.search; - -import org.apache.logging.log4j.Logger; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRunnable; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.routing.GroupShardsIterator; -import org.elasticsearch.search.fetch.QueryFetchSearchResult; -import org.elasticsearch.search.internal.AliasFilter; -import org.elasticsearch.search.internal.InternalSearchResponse; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; - -import java.io.IOException; -import java.util.Map; -import java.util.concurrent.Executor; -import java.util.function.Function; - -class SearchQueryAndFetchAsyncAction extends AbstractSearchAsyncAction { - - private final SearchPhaseController searchPhaseController; - - SearchQueryAndFetchAsyncAction(Logger logger, SearchTransportService searchTransportService, - Function nodeIdToDiscoveryNode, - Map aliasFilter, Map concreteIndexBoosts, - SearchPhaseController searchPhaseController, Executor executor, - SearchRequest request, ActionListener listener, - GroupShardsIterator shardsIts, long startTime, long clusterStateVersion, - SearchTask task) { - super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, concreteIndexBoosts, executor, - request, listener, shardsIts, startTime, clusterStateVersion, task); - this.searchPhaseController = searchPhaseController; - - } - - @Override - protected String firstPhaseName() { - return "query_fetch"; - } - - @Override - protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener) { - searchTransportService.sendExecuteFetch(node, request, task, listener); - } - - @Override - protected void moveToSecondPhase() throws Exception { - getExecutor().execute(new ActionRunnable(listener) { - @Override - public void doRun() throws IOException { - final boolean isScrollRequest = request.scroll() != null; - sortedShardDocs = searchPhaseController.sortDocs(isScrollRequest, firstResults); - final InternalSearchResponse internalResponse = searchPhaseController.merge(isScrollRequest, sortedShardDocs, firstResults, - firstResults); - String scrollId = isScrollRequest ? 
TransportSearchHelper.buildScrollId(request.searchType(), firstResults) : null; - listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); - } - - @Override - public void onFailure(Exception e) { - ReduceSearchPhaseException failure = new ReduceSearchPhaseException("merge", "", e, buildShardFailures()); - if (logger.isDebugEnabled()) { - logger.debug("failed to reduce search", failure); - } - super.onFailure(failure); - } - }); - } -} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java index 7b30006329193..de8109aadd8fe 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchQueryThenFetchAsyncAction.java @@ -19,144 +19,41 @@ package org.elasticsearch.action.search; -import com.carrotsearch.hppc.IntArrayList; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; -import org.apache.lucene.search.ScoreDoc; import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ActionRunnable; -import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.GroupShardsIterator; -import org.elasticsearch.common.util.concurrent.AtomicArray; -import org.elasticsearch.search.SearchShardTarget; -import org.elasticsearch.search.fetch.FetchSearchResult; -import org.elasticsearch.search.fetch.ShardFetchSearchRequest; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.internal.AliasFilter; -import org.elasticsearch.search.internal.InternalSearchResponse; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; -import org.elasticsearch.search.query.QuerySearchResultProvider; +import org.elasticsearch.transport.Transport; -import java.io.IOException; import java.util.Map; import java.util.concurrent.Executor; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.function.Function; +import java.util.function.BiFunction; -class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction { +final class SearchQueryThenFetchAsyncAction extends AbstractSearchAsyncAction { - final AtomicArray fetchResults; - final AtomicArray docIdsToLoad; private final SearchPhaseController searchPhaseController; - SearchQueryThenFetchAsyncAction(Logger logger, SearchTransportService searchTransportService, - Function nodeIdToDiscoveryNode, - Map aliasFilter, Map concreteIndexBoosts, - SearchPhaseController searchPhaseController, Executor executor, - SearchRequest request, ActionListener listener, - GroupShardsIterator shardsIts, long startTime, long clusterStateVersion, - SearchTask task) { - super(logger, searchTransportService, nodeIdToDiscoveryNode, aliasFilter, concreteIndexBoosts, executor, request, listener, - shardsIts, startTime, clusterStateVersion, task); + SearchQueryThenFetchAsyncAction(final Logger logger, final SearchTransportService searchTransportService, + final BiFunction nodeIdToConnection, final Map aliasFilter, + final Map concreteIndexBoosts, final SearchPhaseController searchPhaseController, final Executor executor, + final SearchRequest request, final ActionListener listener, + final GroupShardsIterator 
shardsIts, final TransportSearchAction.SearchTimeProvider timeProvider, + long clusterStateVersion, SearchTask task) { + super("query", logger, searchTransportService, nodeIdToConnection, aliasFilter, concreteIndexBoosts, executor, request, listener, + shardsIts, timeProvider, clusterStateVersion, task, searchPhaseController.newSearchPhaseResults(request, shardsIts.size())); this.searchPhaseController = searchPhaseController; - fetchResults = new AtomicArray<>(firstResults.length()); - docIdsToLoad = new AtomicArray<>(firstResults.length()); } - @Override - protected String firstPhaseName() { - return "query"; - } - - @Override - protected void sendExecuteFirstPhase(DiscoveryNode node, ShardSearchTransportRequest request, - ActionListener listener) { - searchTransportService.sendExecuteQuery(node, request, task, listener); + protected void executePhaseOnShard(final SearchShardIterator shardIt, final ShardRouting shard, + final SearchActionListener listener) { + getSearchTransport().sendExecuteQuery(getConnection(shardIt.getClusterAlias(), shard.currentNodeId()), + buildShardSearchRequest(shardIt), getTask(), listener); } @Override - protected void moveToSecondPhase() throws Exception { - final boolean isScrollRequest = request.scroll() != null; - sortedShardDocs = searchPhaseController.sortDocs(isScrollRequest, firstResults); - searchPhaseController.fillDocIdsToLoad(docIdsToLoad, sortedShardDocs); - - if (docIdsToLoad.asList().isEmpty()) { - finishHim(); - return; - } - - final ScoreDoc[] lastEmittedDocPerShard = isScrollRequest ? - searchPhaseController.getLastEmittedDocPerShard(firstResults.asList(), sortedShardDocs, firstResults.length()) : null; - final AtomicInteger counter = new AtomicInteger(docIdsToLoad.asList().size()); - for (AtomicArray.Entry entry : docIdsToLoad.asList()) { - QuerySearchResultProvider queryResult = firstResults.get(entry.index); - DiscoveryNode node = nodeIdToDiscoveryNode.apply(queryResult.shardTarget().nodeId()); - ShardFetchSearchRequest fetchSearchRequest = createFetchRequest(queryResult.queryResult(), entry, lastEmittedDocPerShard); - executeFetch(entry.index, queryResult.shardTarget(), counter, fetchSearchRequest, node); - } - } - - void executeFetch(final int shardIndex, final SearchShardTarget shardTarget, final AtomicInteger counter, - final ShardFetchSearchRequest fetchSearchRequest, DiscoveryNode node) { - searchTransportService.sendExecuteFetch(node, fetchSearchRequest, task, new ActionListener() { - @Override - public void onResponse(FetchSearchResult result) { - result.shardTarget(shardTarget); - fetchResults.set(shardIndex, result); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - @Override - public void onFailure(Exception t) { - // the search context might not be cleared on the node where the fetch was executed for example - // because the action was rejected by the thread pool. in this case we need to send a dedicated - // request to clear the search context. by setting docIdsToLoad to null, the context will be cleared - // in TransportSearchTypeAction.releaseIrrelevantSearchContexts() after the search request is done. 
- docIdsToLoad.set(shardIndex, null); - onFetchFailure(t, fetchSearchRequest, shardIndex, shardTarget, counter); - } - }); - } - - void onFetchFailure(Exception e, ShardFetchSearchRequest fetchSearchRequest, int shardIndex, SearchShardTarget shardTarget, - AtomicInteger counter) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute fetch phase", fetchSearchRequest.id()), e); - } - this.addShardFailure(shardIndex, shardTarget, e); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - private void finishHim() { - getExecutor().execute(new ActionRunnable(listener) { - @Override - public void doRun() throws IOException { - final boolean isScrollRequest = request.scroll() != null; - final InternalSearchResponse internalResponse = searchPhaseController.merge(isScrollRequest, sortedShardDocs, firstResults, - fetchResults); - String scrollId = isScrollRequest ? TransportSearchHelper.buildScrollId(request.searchType(), firstResults) : null; - listener.onResponse(new SearchResponse(internalResponse, scrollId, expectedSuccessfulOps, - successfulOps.get(), buildTookInMillis(), buildShardFailures())); - releaseIrrelevantSearchContexts(firstResults, docIdsToLoad); - } - - @Override - public void onFailure(Exception e) { - try { - ReduceSearchPhaseException failure = new ReduceSearchPhaseException("fetch", "", e, buildShardFailures()); - if (logger.isDebugEnabled()) { - logger.debug("failed to reduce search", failure); - } - super.onFailure(failure); - } finally { - releaseIrrelevantSearchContexts(firstResults, docIdsToLoad); - } - } - }); + protected SearchPhase getNextPhase(final SearchPhaseResults results, final SearchPhaseContext context) { + return new FetchSearchPhase(results, searchPhaseController, context); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java b/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java index 9c69f1a763f38..01a3e94620a46 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchRequest.java @@ -39,6 +39,8 @@ import java.util.Collections; import java.util.Objects; +import static org.elasticsearch.action.ValidateActions.addValidationError; + /** * A request to execute search against one or more indices (or all). Best created using * {@link org.elasticsearch.client.Requests#searchRequest(String...)}. @@ -70,6 +72,8 @@ public final class SearchRequest extends ActionRequest implements IndicesRequest private Scroll scroll; + private int batchedReduceSize = 512; + private String[] types = Strings.EMPTY_ARRAY; public static final IndicesOptions DEFAULT_INDICES_OPTIONS = IndicesOptions.strictExpandOpenAndForbidClosed(); @@ -100,7 +104,12 @@ public SearchRequest(String[] indices, SearchSourceBuilder source) { @Override public ActionRequestValidationException validate() { - return null; + ActionRequestValidationException validationException = null; + if (source != null && source.trackTotalHits() == false && scroll() != null) { + validationException = + addValidationError("disabling [track_total_hits] is not allowed in a scroll context", validationException); + } + return validationException; } /** @@ -274,6 +283,25 @@ public Boolean requestCache() { return this.requestCache; } + /** + * Sets the number of shard results that should be reduced at once on the coordinating node. 
This value should be used as a protection + * mechanism to reduce the memory overhead per search request if the potential number of shards in the request can be large. + */ + public void setBatchedReduceSize(int batchedReduceSize) { + if (batchedReduceSize <= 1) { + throw new IllegalArgumentException("batchedReduceSize must be >= 2"); + } + this.batchedReduceSize = batchedReduceSize; + } + + /** + * Returns the number of shard results that should be reduced at once on the coordinating node. This value should be used as a + * protection mechanism to reduce the memory overhead per search request if the potential number of shards in the request can be large. + */ + public int getBatchedReduceSize() { + return batchedReduceSize; + } + /** * @return true if the request only has suggest */ @@ -320,6 +348,7 @@ public void readFrom(StreamInput in) throws IOException { types = in.readStringArray(); indicesOptions = IndicesOptions.readIndicesOptions(in); requestCache = in.readOptionalBoolean(); + batchedReduceSize = in.readVInt(); } @Override @@ -337,6 +366,7 @@ public void writeTo(StreamOutput out) throws IOException { out.writeStringArray(types); indicesOptions.writeIndicesOptions(out); out.writeOptionalBoolean(requestCache); + out.writeVInt(batchedReduceSize); } @Override diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java b/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java index 3c320447fe833..0333092b91755 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchRequestBuilder.java @@ -26,13 +26,14 @@ import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.index.query.QueryBuilder; import org.elasticsearch.script.Script; +import org.elasticsearch.search.collapse.CollapseBuilder; import org.elasticsearch.search.Scroll; import org.elasticsearch.search.aggregations.AggregationBuilder; import org.elasticsearch.search.aggregations.PipelineAggregationBuilder; -import org.elasticsearch.search.slice.SliceBuilder; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder; import org.elasticsearch.search.rescore.RescoreBuilder; +import org.elasticsearch.search.slice.SliceBuilder; import org.elasticsearch.search.sort.SortBuilder; import org.elasticsearch.search.sort.SortOrder; import org.elasticsearch.search.suggest.SuggestBuilder; @@ -362,14 +363,21 @@ public SearchRequestBuilder slice(SliceBuilder builder) { } /** - * Applies when sorting, and controls if scores will be tracked as well. Defaults to - * false. + * Applies when sorting, and controls if scores will be tracked as well. Defaults to false. */ public SearchRequestBuilder setTrackScores(boolean trackScores) { sourceBuilder().trackScores(trackScores); return this; } + /** + * Indicates if the total hit count for the query should be tracked. Defaults to true + */ + public SearchRequestBuilder setTrackTotalHits(boolean trackTotalHits) { + sourceBuilder().trackTotalHits(trackTotalHits); + return this; + } + /** * Adds stored fields to load and return (note, it must be stored) as part of the search request. * To disable the stored fields entirely (source and metadata fields) use {@code storedField("_none_")}. 
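A minimal sketch of how the request-level additions above fit together; it is not part of the diff, and the index pattern is made up:

```java
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.search.builder.SearchSourceBuilder;

class BatchedReduceSketch {
    static SearchRequest build() {
        SearchSourceBuilder source = new SearchSourceBuilder()
            .size(10)
            .trackTotalHits(false);                // new: total hit count tracking can be switched off

        SearchRequest request = new SearchRequest("logs-*"); // hypothetical index pattern
        request.source(source);

        // Reduce at most 256 shard results at a time on the coordinating node;
        // values <= 1 are rejected with "batchedReduceSize must be >= 2".
        request.setBatchedReduceSize(256);

        // Note: adding request.scroll(...) here would now trip validate():
        // "disabling [track_total_hits] is not allowed in a scroll context".
        return request;
    }
}
```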
@@ -503,6 +511,11 @@ public SearchRequestBuilder setProfile(boolean profile) { return this; } + public SearchRequestBuilder setCollapse(CollapseBuilder collapse) { + sourceBuilder().collapse(collapse); + return this; + } + @Override public String toString() { if (request.source() != null) { @@ -517,4 +530,13 @@ private SearchSourceBuilder sourceBuilder() { } return request.source(); } + + /** + * Sets the number of shard results that should be reduced at once on the coordinating node. This value should be used as a protection + * mechanism to reduce the memory overhead per search request if the potential number of shards in the request can be large. + */ + public SearchRequestBuilder setBatchedReduceSize(int batchedReduceSize) { + this.request.setBatchedReduceSize(batchedReduceSize); + return this; + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchResponse.java b/core/src/main/java/org/elasticsearch/action/search/SearchResponse.java index 3135d2c8f53b0..3aa5e3a2adbc6 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchResponse.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchResponse.java @@ -21,32 +21,44 @@ import org.elasticsearch.action.ActionResponse; import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.ParseField; import org.elasticsearch.common.Strings; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; -import org.elasticsearch.common.xcontent.StatusToXContent; +import org.elasticsearch.common.xcontent.StatusToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.rest.action.RestActions; import org.elasticsearch.search.SearchHits; import org.elasticsearch.search.aggregations.Aggregations; import org.elasticsearch.search.internal.InternalSearchResponse; import org.elasticsearch.search.profile.ProfileShardResult; +import org.elasticsearch.search.profile.SearchProfileShardResults; import org.elasticsearch.search.suggest.Suggest; import java.io.IOException; +import java.util.ArrayList; +import java.util.List; import java.util.Map; import static org.elasticsearch.action.search.ShardSearchFailure.readShardSearchFailure; -import static org.elasticsearch.search.internal.InternalSearchResponse.readInternalSearchResponse; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * A response of a search request. 
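The same knobs surface on `SearchRequestBuilder`, together with the new `setCollapse` hook. A hedged sketch, assuming a `Client` instance, a hypothetical `user_id` field, and that `CollapseBuilder(String)` takes the collapse field name:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.search.collapse.CollapseBuilder;

class CollapseSketch {
    static SearchResponse collapseByUser(Client client) {
        return client.prepareSearch("tweets")                // hypothetical index
            .setCollapse(new CollapseBuilder("user_id"))     // field collapsing on user_id
            .setTrackTotalHits(true)                         // explicit, same as the default
            .setBatchedReduceSize(64)                        // incremental reduce on the coordinating node
            .setSize(20)
            .get();
    }
}
```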
*/ -public class SearchResponse extends ActionResponse implements StatusToXContent { +public class SearchResponse extends ActionResponse implements StatusToXContentObject { + + private static final ParseField SCROLL_ID = new ParseField("_scroll_id"); + private static final ParseField TOOK = new ParseField("took"); + private static final ParseField TIMED_OUT = new ParseField("timed_out"); + private static final ParseField TERMINATED_EARLY = new ParseField("terminated_early"); + private static final ParseField NUM_REDUCE_PHASES = new ParseField("num_reduce_phases"); - private InternalSearchResponse internalResponse; + private SearchResponseSections internalResponse; private String scrollId; @@ -61,7 +73,8 @@ public class SearchResponse extends ActionResponse implements StatusToXContent { public SearchResponse() { } - public SearchResponse(InternalSearchResponse internalResponse, String scrollId, int totalShards, int successfulShards, long tookInMillis, ShardSearchFailure[] shardFailures) { + public SearchResponse(SearchResponseSections internalResponse, String scrollId, int totalShards, int successfulShards, + long tookInMillis, ShardSearchFailure[] shardFailures) { this.internalResponse = internalResponse; this.scrollId = scrollId; this.totalShards = totalShards; @@ -107,17 +120,17 @@ public Boolean isTerminatedEarly() { } /** - * How long the search took. + * Returns the number of reduce phases applied to obtain this search response */ - public TimeValue getTook() { - return new TimeValue(tookInMillis); + public int getNumReducePhases() { + return internalResponse.getNumReducePhases(); } /** - * How long the search took in milliseconds. + * How long the search took. */ - public long getTookInMillis() { - return tookInMillis; + public TimeValue getTook() { + return new TimeValue(tookInMillis); } /** @@ -168,36 +181,120 @@ public void scrollId(String scrollId) { * * @return The profile results or an empty map */ - @Nullable public Map getProfileResults() { + @Nullable + public Map getProfileResults() { return internalResponse.profile(); } - static final class Fields { - static final String _SCROLL_ID = "_scroll_id"; - static final String TOOK = "took"; - static final String TIMED_OUT = "timed_out"; - static final String TERMINATED_EARLY = "terminated_early"; - } - @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + innerToXContent(builder, params); + builder.endObject(); + return builder; + } + + public XContentBuilder innerToXContent(XContentBuilder builder, Params params) throws IOException { if (scrollId != null) { - builder.field(Fields._SCROLL_ID, scrollId); + builder.field(SCROLL_ID.getPreferredName(), scrollId); } - builder.field(Fields.TOOK, tookInMillis); - builder.field(Fields.TIMED_OUT, isTimedOut()); + builder.field(TOOK.getPreferredName(), tookInMillis); + builder.field(TIMED_OUT.getPreferredName(), isTimedOut()); if (isTerminatedEarly() != null) { - builder.field(Fields.TERMINATED_EARLY, isTerminatedEarly()); + builder.field(TERMINATED_EARLY.getPreferredName(), isTerminatedEarly()); + } + if (getNumReducePhases() != 1) { + builder.field(NUM_REDUCE_PHASES.getPreferredName(), getNumReducePhases()); } - RestActions.buildBroadcastShardsHeader(builder, params, getTotalShards(), getSuccessfulShards(), getFailedShards(), getShardFailures()); + RestActions.buildBroadcastShardsHeader(builder, params, getTotalShards(), getSuccessfulShards(), getFailedShards(), + getShardFailures()); 
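On the consuming side, the response now reports how many partial reduce phases ran and exposes the took time only as a `TimeValue`. A sketch under the same assumed `Client` as above:

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;

class ReducePhasesSketch {
    static void report(Client client) {
        SearchResponse resp = client.prepareSearch("logs-*") // hypothetical index
            .setBatchedReduceSize(2)                         // a small batch size makes several partial reduces likely
            .setSize(0)
            .get();

        // num_reduce_phases is only rendered in the JSON when it differs from 1
        if (resp.getNumReducePhases() != 1) {
            System.out.println("partial reduce phases: " + resp.getNumReducePhases());
        }
        System.out.println("took: " + resp.getTook());       // getTookInMillis() is gone; getTook() returns a TimeValue
    }
}
```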
internalResponse.toXContent(builder, params); return builder; } + public static SearchResponse fromXContent(XContentParser parser) throws IOException { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser::getTokenLocation); + XContentParser.Token token; + String currentFieldName = null; + SearchHits hits = null; + Aggregations aggs = null; + Suggest suggest = null; + SearchProfileShardResults profile = null; + boolean timedOut = false; + Boolean terminatedEarly = null; + int numReducePhases = 1; + long tookInMillis = -1; + int successfulShards = -1; + int totalShards = -1; + String scrollId = null; + List failures = new ArrayList<>(); + while((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (SCROLL_ID.match(currentFieldName)) { + scrollId = parser.text(); + } else if (TOOK.match(currentFieldName)) { + tookInMillis = parser.longValue(); + } else if (TIMED_OUT.match(currentFieldName)) { + timedOut = parser.booleanValue(); + } else if (TERMINATED_EARLY.match(currentFieldName)) { + terminatedEarly = parser.booleanValue(); + } else if (NUM_REDUCE_PHASES.match(currentFieldName)) { + numReducePhases = parser.intValue(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (SearchHits.Fields.HITS.equals(currentFieldName)) { + hits = SearchHits.fromXContent(parser); + } else if (Aggregations.AGGREGATIONS_FIELD.equals(currentFieldName)) { + aggs = Aggregations.fromXContent(parser); + } else if (Suggest.NAME.equals(currentFieldName)) { + suggest = Suggest.fromXContent(parser); + } else if (SearchProfileShardResults.PROFILE_FIELD.equals(currentFieldName)) { + profile = SearchProfileShardResults.fromXContent(parser); + } else if (RestActions._SHARDS_FIELD.match(currentFieldName)) { + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (RestActions.FAILED_FIELD.match(currentFieldName)) { + parser.intValue(); // we don't need it but need to consume it + } else if (RestActions.SUCCESSFUL_FIELD.match(currentFieldName)) { + successfulShards = parser.intValue(); + } else if (RestActions.TOTAL_FIELD.match(currentFieldName)) { + totalShards = parser.intValue(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_ARRAY) { + if (RestActions.FAILURES_FIELD.match(currentFieldName)) { + while((token = parser.nextToken()) != XContentParser.Token.END_ARRAY) { + failures.add(ShardSearchFailure.fromXContent(parser)); + } + } else { + parser.skipChildren(); + } + } else { + parser.skipChildren(); + } + } + } else { + parser.skipChildren(); + } + } + } + SearchResponseSections searchResponseSections = new SearchResponseSections(hits, aggs, suggest, timedOut, terminatedEarly, + profile, numReducePhases); + return new SearchResponse(searchResponseSections, scrollId, totalShards, successfulShards, tookInMillis, + failures.toArray(new ShardSearchFailure[failures.size()])); + } + @Override public void readFrom(StreamInput in) throws IOException { super.readFrom(in); - internalResponse = readInternalSearchResponse(in); + internalResponse = new InternalSearchResponse(in); totalShards = in.readVInt(); successfulShards = in.readVInt(); int size = in.readVInt(); @@ -231,6 +328,6 @@ public void writeTo(StreamOutput out) 
throws IOException { @Override public String toString() { - return Strings.toString(this, true); + return Strings.toString(this); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchResponseSections.java b/core/src/main/java/org/elasticsearch/action/search/SearchResponseSections.java new file mode 100644 index 0000000000000..1757acbfd6d93 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchResponseSections.java @@ -0,0 +1,122 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +package org.elasticsearch.action.search; + +import org.elasticsearch.common.io.stream.StreamOutput; +import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.search.SearchHits; +import org.elasticsearch.search.aggregations.Aggregations; +import org.elasticsearch.search.profile.ProfileShardResult; +import org.elasticsearch.search.profile.SearchProfileShardResults; +import org.elasticsearch.search.suggest.Suggest; + +import java.io.IOException; +import java.util.Collections; +import java.util.Map; + +/** + * Base class that holds the various sections which a search response is + * composed of (hits, aggs, suggestions etc.) and allows to retrieve them. + * + * The reason why this class exists is that the high level REST client uses its own classes + * to parse aggregations into, which are not serializable. This is the common part that can be + * shared between core and client. 
+ */ +public class SearchResponseSections implements ToXContent { + + protected final SearchHits hits; + protected final Aggregations aggregations; + protected final Suggest suggest; + protected final SearchProfileShardResults profileResults; + protected final boolean timedOut; + protected final Boolean terminatedEarly; + protected final int numReducePhases; + + public SearchResponseSections(SearchHits hits, Aggregations aggregations, Suggest suggest, boolean timedOut, Boolean terminatedEarly, + SearchProfileShardResults profileResults, int numReducePhases) { + this.hits = hits; + this.aggregations = aggregations; + this.suggest = suggest; + this.profileResults = profileResults; + this.timedOut = timedOut; + this.terminatedEarly = terminatedEarly; + this.numReducePhases = numReducePhases; + } + + public final boolean timedOut() { + return this.timedOut; + } + + public final Boolean terminatedEarly() { + return this.terminatedEarly; + } + + public final SearchHits hits() { + return hits; + } + + public final Aggregations aggregations() { + return aggregations; + } + + public final Suggest suggest() { + return suggest; + } + + /** + * Returns the number of reduce phases applied to obtain this search response + */ + public final int getNumReducePhases() { + return numReducePhases; + } + + /** + * Returns the profile results for this search response (including all shards). + * An empty map is returned if profiling was not enabled + * + * @return Profile results + */ + public final Map profile() { + if (profileResults == null) { + return Collections.emptyMap(); + } + return profileResults.getShardResults(); + } + + @Override + public final XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + hits.toXContent(builder, params); + if (aggregations != null) { + aggregations.toXContent(builder, params); + } + if (suggest != null) { + suggest.toXContent(builder, params); + } + if (profileResults != null) { + profileResults.toXContent(builder, params); + } + return builder; + } + + protected void writeTo(StreamOutput out) throws IOException { + throw new UnsupportedOperationException(); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchScrollAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchScrollAsyncAction.java new file mode 100644 index 0000000000000..5be511f558568 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchScrollAsyncAction.java @@ -0,0 +1,226 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.elasticsearch.action.search; + +import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.logging.log4j.util.Supplier; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; +import org.elasticsearch.common.Nullable; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.search.SearchPhaseResult; +import org.elasticsearch.search.SearchShardTarget; +import org.elasticsearch.search.internal.InternalScrollSearchRequest; +import org.elasticsearch.search.internal.InternalSearchResponse; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; + +import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest; + +/** + * Abstract base class for scroll execution modes. This class encapsulates the basic logic to + * fan out to nodes and execute the query part of the scroll request. Subclasses can for instance + * run separate fetch phases etc. + */ +abstract class SearchScrollAsyncAction implements Runnable { + /* + * Some random TODO: + * Today we still have a dedicated executing mode for scrolls while we could simplify this by implementing + * scroll like functionality (mainly syntactic sugar) as an ordinary search with search_after. We could even go further and + * make the scroll entirely stateless and encode the state per shard in the scroll ID. + * + * Today we also hold a context per shard but maybe + * we want the context per coordinating node such that we route the scroll to the same coordinator all the time and hold the context + * here? This would have the advantage that if we loose that node the entire scroll is deal not just one shard. + * + * Additionally there is the possibility to associate the scroll with a seq. id. such that we can talk to any replica as long as + * the shards engine hasn't advanced that seq. id yet. Such a resume is possible and best effort, it could be even a safety net since + * if you rely on indices being read-only things can change in-between without notification or it's hard to detect if there where any + * changes while scrolling. These are all options to improve the current situation which we can look into down the road + */ + protected final Logger logger; + protected final ActionListener listener; + protected final ParsedScrollId scrollId; + protected final DiscoveryNodes nodes; + protected final SearchPhaseController searchPhaseController; + protected final SearchScrollRequest request; + private final long startTime; + private final List shardFailures = new ArrayList<>(); + private final AtomicInteger successfulOps; + + protected SearchScrollAsyncAction(ParsedScrollId scrollId, Logger logger, DiscoveryNodes nodes, + ActionListener listener, SearchPhaseController searchPhaseController, + SearchScrollRequest request) { + this.startTime = System.currentTimeMillis(); + this.scrollId = scrollId; + this.successfulOps = new AtomicInteger(scrollId.getContext().length); + this.logger = logger; + this.listener = listener; + this.nodes = nodes; + this.searchPhaseController = searchPhaseController; + this.request = request; + } + + /** + * Builds how long it took to execute the search. 
+ */ + private long buildTookInMillis() { + // protect ourselves against time going backwards + // negative values don't make sense and we want to be able to serialize that thing as a vLong + return Math.max(1, System.currentTimeMillis() - startTime); + } + + public final void run() { + final ScrollIdForNode[] context = scrollId.getContext(); + if (context.length == 0) { + listener.onFailure(new SearchPhaseExecutionException("query", "no nodes to search on", ShardSearchFailure.EMPTY_ARRAY)); + return; + } + final CountDown counter = new CountDown(scrollId.getContext().length); + for (int i = 0; i < context.length; i++) { + ScrollIdForNode target = context[i]; + DiscoveryNode node = nodes.get(target.getNode()); + final int shardIndex = i; + if (node != null) { // it might happen that a node is going down in-between scrolls... + InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(target.getScrollId(), request); + // we can't create a SearchShardTarget here since we don't know the index and shard ID we are talking to + // we only know the node and the search context ID. Yet, the response will contain the SearchShardTarget + // from the target node instead...that's why we pass null here + SearchActionListener searchActionListener = new SearchActionListener(null, shardIndex) { + + @Override + protected void setSearchShardTarget(T response) { + // don't do this - it's part of the response... + assert response.getSearchShardTarget() != null : "search shard target must not be null"; + } + + @Override + protected void innerOnResponse(T result) { + assert shardIndex == result.getShardIndex() : "shard index mismatch: " + shardIndex + " but got: " + + result.getShardIndex(); + onFirstPhaseResult(shardIndex, result); + if (counter.countDown()) { + SearchPhase phase = moveToNextPhase(); + try { + phase.run(); + } catch (Exception e) { + // we need to fail the entire request here - the entire phase just blew up + // don't call onShardFailure or onFailure here since otherwise we'd countDown the counter + // again which would result in an exception + listener.onFailure(new SearchPhaseExecutionException(phase.getName(), "Phase failed", e, + ShardSearchFailure.EMPTY_ARRAY)); + } + } + } + + @Override + public void onFailure(Exception t) { + onShardFailure("query", shardIndex, counter, target.getScrollId(), t, null, + SearchScrollAsyncAction.this::moveToNextPhase); + } + }; + executeInitialPhase(node, internalRequest, searchActionListener); + } else { // the node is not available we treat this as a shard failure here + onShardFailure("query", shardIndex, counter, target.getScrollId(), + new IllegalStateException("node [" + target.getNode() + "] is not available"), null, + SearchScrollAsyncAction.this::moveToNextPhase); + } + } + } + + synchronized ShardSearchFailure[] buildShardFailures() { // pkg private for testing + if (shardFailures.isEmpty()) { + return ShardSearchFailure.EMPTY_ARRAY; + } + return shardFailures.toArray(new ShardSearchFailure[shardFailures.size()]); + } + + // we do our best to return the shard failures, but its ok if its not fully concurrently safe + // we simply try and return as much as possible + private synchronized void addShardFailure(ShardSearchFailure failure) { + shardFailures.add(failure); + } + + protected abstract void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest, + SearchActionListener searchActionListener); + + protected abstract SearchPhase moveToNextPhase(); + + protected abstract void onFirstPhaseResult(int 
shardId, T result); + + protected SearchPhase sendResponsePhase(SearchPhaseController.ReducedQueryPhase queryPhase, + final AtomicArray fetchResults) { + return new SearchPhase("fetch") { + @Override + public void run() throws IOException { + sendResponse(queryPhase, fetchResults); + } + }; + } + + protected final void sendResponse(SearchPhaseController.ReducedQueryPhase queryPhase, + final AtomicArray fetchResults) { + try { + final InternalSearchResponse internalResponse = searchPhaseController.merge(true, queryPhase, fetchResults.asList(), + fetchResults::get); + // the scroll ID never changes we always return the same ID. This ID contains all the shards and their context ids + // such that we can talk to them abgain in the next roundtrip. + String scrollId = null; + if (request.scroll() != null) { + scrollId = request.scrollId(); + } + listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(), + buildTookInMillis(), buildShardFailures())); + } catch (Exception e) { + listener.onFailure(new ReduceSearchPhaseException("fetch", "inner finish failed", e, buildShardFailures())); + } + } + + protected void onShardFailure(String phaseName, final int shardIndex, final CountDown counter, final long searchId, Exception failure, + @Nullable SearchShardTarget searchShardTarget, + Supplier nextPhaseSupplier) { + if (logger.isDebugEnabled()) { + logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute {} phase", searchId, phaseName), failure); + } + addShardFailure(new ShardSearchFailure(failure, searchShardTarget)); + int successfulOperations = successfulOps.decrementAndGet(); + assert successfulOperations >= 0 : "successfulOperations must be >= 0 but was: " + successfulOperations; + if (counter.countDown()) { + if (successfulOps.get() == 0) { + listener.onFailure(new SearchPhaseExecutionException(phaseName, "all shards failed", failure, buildShardFailures())); + } else { + SearchPhase phase = nextPhaseSupplier.get(); + try { + phase.run(); + } catch (Exception e) { + e.addSuppressed(failure); + listener.onFailure(new SearchPhaseExecutionException(phase.getName(), "Phase failed", e, + ShardSearchFailure.EMPTY_ARRAY)); + } + } + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java index bf53fc719c6c3..9270dfdd82a4b 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryAndFetchAsyncAction.java @@ -28,156 +28,47 @@ import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.CountDown; +import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.fetch.QueryFetchSearchResult; import org.elasticsearch.search.fetch.ScrollQueryFetchSearchResult; import org.elasticsearch.search.internal.InternalScrollSearchRequest; import org.elasticsearch.search.internal.InternalSearchResponse; +import org.elasticsearch.search.query.ScrollQuerySearchResult; import java.util.List; import java.util.concurrent.atomic.AtomicInteger; import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest; -class SearchScrollQueryAndFetchAsyncAction extends 
AbstractAsyncAction { +final class SearchScrollQueryAndFetchAsyncAction extends SearchScrollAsyncAction { - private final Logger logger; - private final SearchPhaseController searchPhaseController; private final SearchTransportService searchTransportService; - private final SearchScrollRequest request; private final SearchTask task; - private final ActionListener listener; - private final ParsedScrollId scrollId; - private final DiscoveryNodes nodes; - private volatile AtomicArray shardFailures; private final AtomicArray queryFetchResults; - private final AtomicInteger successfulOps; - private final AtomicInteger counter; SearchScrollQueryAndFetchAsyncAction(Logger logger, ClusterService clusterService, SearchTransportService searchTransportService, SearchPhaseController searchPhaseController, SearchScrollRequest request, SearchTask task, ParsedScrollId scrollId, ActionListener listener) { - this.logger = logger; - this.searchPhaseController = searchPhaseController; - this.searchTransportService = searchTransportService; - this.request = request; + super(scrollId, logger, clusterService.state().nodes(), listener, searchPhaseController, request); this.task = task; - this.listener = listener; - this.scrollId = scrollId; - this.nodes = clusterService.state().nodes(); - this.successfulOps = new AtomicInteger(scrollId.getContext().length); - this.counter = new AtomicInteger(scrollId.getContext().length); - + this.searchTransportService = searchTransportService; this.queryFetchResults = new AtomicArray<>(scrollId.getContext().length); } - protected final ShardSearchFailure[] buildShardFailures() { - if (shardFailures == null) { - return ShardSearchFailure.EMPTY_ARRAY; - } - List> entries = shardFailures.asList(); - ShardSearchFailure[] failures = new ShardSearchFailure[entries.size()]; - for (int i = 0; i < failures.length; i++) { - failures[i] = entries.get(i).value; - } - return failures; - } - - // we do our best to return the shard failures, but its ok if its not fully concurrently safe - // we simply try and return as much as possible - protected final void addShardFailure(final int shardIndex, ShardSearchFailure failure) { - if (shardFailures == null) { - shardFailures = new AtomicArray<>(scrollId.getContext().length); - } - shardFailures.set(shardIndex, failure); - } - - public void start() { - if (scrollId.getContext().length == 0) { - listener.onFailure(new SearchPhaseExecutionException("query", "no nodes to search on", ShardSearchFailure.EMPTY_ARRAY)); - return; - } - - ScrollIdForNode[] context = scrollId.getContext(); - for (int i = 0; i < context.length; i++) { - ScrollIdForNode target = context[i]; - DiscoveryNode node = nodes.get(target.getNode()); - if (node != null) { - executePhase(i, node, target.getScrollId()); - } else { - if (logger.isDebugEnabled()) { - logger.debug("Node [{}] not available for scroll request [{}]", target.getNode(), scrollId.getSource()); - } - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - } - - for (ScrollIdForNode target : scrollId.getContext()) { - DiscoveryNode node = nodes.get(target.getNode()); - if (node == null) { - if (logger.isDebugEnabled()) { - logger.debug("Node [{}] not available for scroll request [{}]", target.getNode(), scrollId.getSource()); - } - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - } - } - - void executePhase(final int shardIndex, DiscoveryNode node, final long searchId) { - InternalScrollSearchRequest internalRequest = 
internalScrollSearchRequest(searchId, request); - searchTransportService.sendExecuteFetch(node, internalRequest, task, new ActionListener() { - @Override - public void onResponse(ScrollQueryFetchSearchResult result) { - queryFetchResults.set(shardIndex, result.result()); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - - @Override - public void onFailure(Exception t) { - onPhaseFailure(t, searchId, shardIndex); - } - }); - } - - private void onPhaseFailure(Exception e, long searchId, int shardIndex) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", searchId), e); - } - addShardFailure(shardIndex, new ShardSearchFailure(e)); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - if (successfulOps.get() == 0) { - listener.onFailure(new SearchPhaseExecutionException("query_fetch", "all shards failed", e, buildShardFailures())); - } else { - finishHim(); - } - } + @Override + protected void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest, + SearchActionListener searchActionListener) { + searchTransportService.sendExecuteScrollFetch(node, internalRequest, task, searchActionListener); } - private void finishHim() { - try { - innerFinishHim(); - } catch (Exception e) { - listener.onFailure(new ReduceSearchPhaseException("fetch", "", e, buildShardFailures())); - } + @Override + protected SearchPhase moveToNextPhase() { + return sendResponsePhase(searchPhaseController.reducedQueryPhase(queryFetchResults.asList(), true), queryFetchResults); } - private void innerFinishHim() throws Exception { - ScoreDoc[] sortedShardDocs = searchPhaseController.sortDocs(true, queryFetchResults); - final InternalSearchResponse internalResponse = searchPhaseController.merge(true, sortedShardDocs, queryFetchResults, - queryFetchResults); - String scrollId = null; - if (request.scroll() != null) { - scrollId = request.scrollId(); - } - listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); + @Override + protected void onFirstPhaseResult(int shardId, ScrollQueryFetchSearchResult result) { + queryFetchResults.setOnce(shardId, result.result()); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java b/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java index 851e3343bc2ed..963838b7a0acd 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchScrollQueryThenFetchAsyncAction.java @@ -21,210 +21,102 @@ import com.carrotsearch.hppc.IntArrayList; import org.apache.logging.log4j.Logger; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; import org.apache.lucene.search.ScoreDoc; import org.elasticsearch.action.ActionListener; import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.CountDown; import org.elasticsearch.search.fetch.FetchSearchResult; import org.elasticsearch.search.fetch.ShardFetchRequest; import org.elasticsearch.search.internal.InternalScrollSearchRequest; -import 
org.elasticsearch.search.internal.InternalSearchResponse; import org.elasticsearch.search.query.QuerySearchResult; import org.elasticsearch.search.query.ScrollQuerySearchResult; -import java.util.List; -import java.util.concurrent.atomic.AtomicInteger; +import java.io.IOException; import static org.elasticsearch.action.search.TransportSearchHelper.internalScrollSearchRequest; -class SearchScrollQueryThenFetchAsyncAction extends AbstractAsyncAction { +final class SearchScrollQueryThenFetchAsyncAction extends SearchScrollAsyncAction { - private final Logger logger; private final SearchTask task; private final SearchTransportService searchTransportService; - private final SearchPhaseController searchPhaseController; - private final SearchScrollRequest request; - private final ActionListener listener; - private final ParsedScrollId scrollId; - private final DiscoveryNodes nodes; - private volatile AtomicArray shardFailures; - final AtomicArray queryResults; - final AtomicArray fetchResults; - private volatile ScoreDoc[] sortedShardDocs; - private final AtomicInteger successfulOps; + private final AtomicArray fetchResults; + private final AtomicArray queryResults; SearchScrollQueryThenFetchAsyncAction(Logger logger, ClusterService clusterService, SearchTransportService searchTransportService, SearchPhaseController searchPhaseController, SearchScrollRequest request, SearchTask task, ParsedScrollId scrollId, ActionListener listener) { - this.logger = logger; + super(scrollId, logger, clusterService.state().nodes(), listener, searchPhaseController, request); this.searchTransportService = searchTransportService; - this.searchPhaseController = searchPhaseController; - this.request = request; this.task = task; - this.listener = listener; - this.scrollId = scrollId; - this.nodes = clusterService.state().nodes(); - this.successfulOps = new AtomicInteger(scrollId.getContext().length); - this.queryResults = new AtomicArray<>(scrollId.getContext().length); this.fetchResults = new AtomicArray<>(scrollId.getContext().length); + this.queryResults = new AtomicArray<>(scrollId.getContext().length); } - protected final ShardSearchFailure[] buildShardFailures() { - if (shardFailures == null) { - return ShardSearchFailure.EMPTY_ARRAY; - } - List> entries = shardFailures.asList(); - ShardSearchFailure[] failures = new ShardSearchFailure[entries.size()]; - for (int i = 0; i < failures.length; i++) { - failures[i] = entries.get(i).value; - } - return failures; + protected void onFirstPhaseResult(int shardId, ScrollQuerySearchResult result) { + queryResults.setOnce(shardId, result.queryResult()); } - // we do our best to return the shard failures, but its ok if its not fully concurrently safe - // we simply try and return as much as possible - protected final void addShardFailure(final int shardIndex, ShardSearchFailure failure) { - if (shardFailures == null) { - shardFailures = new AtomicArray<>(scrollId.getContext().length); - } - shardFailures.set(shardIndex, failure); + @Override + protected void executeInitialPhase(DiscoveryNode node, InternalScrollSearchRequest internalRequest, + SearchActionListener searchActionListener) { + searchTransportService.sendExecuteScrollQuery(node, internalRequest, task, searchActionListener); } - public void start() { - if (scrollId.getContext().length == 0) { - listener.onFailure(new SearchPhaseExecutionException("query", "no nodes to search on", ShardSearchFailure.EMPTY_ARRAY)); - return; - } - final AtomicInteger counter = new AtomicInteger(scrollId.getContext().length); - - 
ScrollIdForNode[] context = scrollId.getContext(); - for (int i = 0; i < context.length; i++) { - ScrollIdForNode target = context[i]; - DiscoveryNode node = nodes.get(target.getNode()); - if (node != null) { - executeQueryPhase(i, counter, node, target.getScrollId()); - } else { - if (logger.isDebugEnabled()) { - logger.debug("Node [{}] not available for scroll request [{}]", target.getNode(), scrollId.getSource()); - } - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - try { - executeFetchPhase(); - } catch (Exception e) { - listener.onFailure(new SearchPhaseExecutionException("query", "Fetch failed", e, ShardSearchFailure.EMPTY_ARRAY)); - return; - } - } - } - } - } - - private void executeQueryPhase(final int shardIndex, final AtomicInteger counter, DiscoveryNode node, final long searchId) { - InternalScrollSearchRequest internalRequest = internalScrollSearchRequest(searchId, request); - searchTransportService.sendExecuteQuery(node, internalRequest, task, new ActionListener() { + @Override + protected SearchPhase moveToNextPhase() { + return new SearchPhase("fetch") { @Override - public void onResponse(ScrollQuerySearchResult result) { - queryResults.set(shardIndex, result.queryResult()); - if (counter.decrementAndGet() == 0) { - try { - executeFetchPhase(); - } catch (Exception e) { - onFailure(e); - } - } - } - - @Override - public void onFailure(Exception t) { - onQueryPhaseFailure(shardIndex, counter, searchId, t); - } - }); - } - - void onQueryPhaseFailure(final int shardIndex, final AtomicInteger counter, final long searchId, Exception failure) { - if (logger.isDebugEnabled()) { - logger.debug((Supplier) () -> new ParameterizedMessage("[{}] Failed to execute query phase", searchId), failure); - } - addShardFailure(shardIndex, new ShardSearchFailure(failure)); - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - if (successfulOps.get() == 0) { - listener.onFailure(new SearchPhaseExecutionException("query", "all shards failed", failure, buildShardFailures())); - } else { - try { - executeFetchPhase(); - } catch (Exception e) { - e.addSuppressed(failure); - listener.onFailure(new SearchPhaseExecutionException("query", "Fetch failed", e, ShardSearchFailure.EMPTY_ARRAY)); + public void run() throws IOException { + final SearchPhaseController.ReducedQueryPhase reducedQueryPhase = searchPhaseController.reducedQueryPhase( + queryResults.asList(), true); + if (reducedQueryPhase.scoreDocs.length == 0) { + sendResponse(reducedQueryPhase, fetchResults); + return; } - } - } - } - private void executeFetchPhase() throws Exception { - sortedShardDocs = searchPhaseController.sortDocs(true, queryResults); - AtomicArray docIdsToLoad = new AtomicArray<>(queryResults.length()); - searchPhaseController.fillDocIdsToLoad(docIdsToLoad, sortedShardDocs); - - if (docIdsToLoad.asList().isEmpty()) { - finishHim(); - return; - } - - - final ScoreDoc[] lastEmittedDocPerShard = searchPhaseController.getLastEmittedDocPerShard(queryResults.asList(), - sortedShardDocs, queryResults.length()); - final AtomicInteger counter = new AtomicInteger(docIdsToLoad.asList().size()); - for (final AtomicArray.Entry entry : docIdsToLoad.asList()) { - IntArrayList docIds = entry.value; - final QuerySearchResult querySearchResult = queryResults.get(entry.index); - ScoreDoc lastEmittedDoc = lastEmittedDocPerShard[entry.index]; - ShardFetchRequest shardFetchRequest = new ShardFetchRequest(querySearchResult.id(), docIds, lastEmittedDoc); - DiscoveryNode node = 
nodes.get(querySearchResult.shardTarget().nodeId()); - searchTransportService.sendExecuteFetchScroll(node, shardFetchRequest, task, new ActionListener() { - @Override - public void onResponse(FetchSearchResult result) { - result.shardTarget(querySearchResult.shardTarget()); - fetchResults.set(entry.index, result); - if (counter.decrementAndGet() == 0) { - finishHim(); + final IntArrayList[] docIdsToLoad = searchPhaseController.fillDocIdsToLoad(queryResults.length(), + reducedQueryPhase.scoreDocs); + final ScoreDoc[] lastEmittedDocPerShard = searchPhaseController.getLastEmittedDocPerShard(reducedQueryPhase, + queryResults.length()); + final CountDown counter = new CountDown(docIdsToLoad.length); + for (int i = 0; i < docIdsToLoad.length; i++) { + final int index = i; + final IntArrayList docIds = docIdsToLoad[index]; + if (docIds != null) { + final QuerySearchResult querySearchResult = queryResults.get(index); + ScoreDoc lastEmittedDoc = lastEmittedDocPerShard[index]; + ShardFetchRequest shardFetchRequest = new ShardFetchRequest(querySearchResult.getRequestId(), docIds, + lastEmittedDoc); + DiscoveryNode node = nodes.get(querySearchResult.getSearchShardTarget().getNodeId()); + searchTransportService.sendExecuteFetchScroll(node, shardFetchRequest, task, + new SearchActionListener(querySearchResult.getSearchShardTarget(), index) { + @Override + protected void innerOnResponse(FetchSearchResult response) { + fetchResults.setOnce(response.getShardIndex(), response); + if (counter.countDown()) { + sendResponse(reducedQueryPhase, fetchResults); + } + } + + @Override + public void onFailure(Exception t) { + onShardFailure(getName(), querySearchResult.getShardIndex(), counter, querySearchResult.getRequestId(), + t, querySearchResult.getSearchShardTarget(), + () -> sendResponsePhase(reducedQueryPhase, fetchResults)); + } + }); + } else { + // the counter is set to the total size of docIdsToLoad + // which can have null values so we have to count them down too + if (counter.countDown()) { + sendResponse(reducedQueryPhase, fetchResults); + } } } - - @Override - public void onFailure(Exception t) { - if (logger.isDebugEnabled()) { - logger.debug("Failed to execute fetch phase", t); - } - successfulOps.decrementAndGet(); - if (counter.decrementAndGet() == 0) { - finishHim(); - } - } - }); - } - } - - private void finishHim() { - try { - innerFinishHim(); - } catch (Exception e) { - listener.onFailure(new ReduceSearchPhaseException("fetch", "inner finish failed", e, buildShardFailures())); - } + } + }; } - private void innerFinishHim() { - InternalSearchResponse internalResponse = searchPhaseController.merge(true, sortedShardDocs, queryResults, fetchResults); - String scrollId = null; - if (request.scroll() != null) { - scrollId = request.scrollId(); - } - listener.onResponse(new SearchResponse(internalResponse, scrollId, this.scrollId.getContext().length, successfulOps.get(), - buildTookInMillis(), buildShardFailures())); - } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java b/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java index 03a40dc8b3e9a..fbe648cceaa80 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchScrollRequest.java @@ -24,6 +24,9 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.unit.TimeValue; +import 
org.elasticsearch.common.xcontent.ToXContentObject; +import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.search.Scroll; import org.elasticsearch.tasks.Task; import org.elasticsearch.tasks.TaskId; @@ -33,7 +36,7 @@ import static org.elasticsearch.action.ValidateActions.addValidationError; -public class SearchScrollRequest extends ActionRequest { +public class SearchScrollRequest extends ActionRequest implements ToXContentObject { private String scrollId; private Scroll scroll; @@ -145,4 +148,39 @@ public String getDescription() { return "scrollId[" + scrollId + "], scroll[" + scroll + "]"; } + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field("scroll_id", scrollId); + if (scroll != null) { + builder.field("scroll", scroll.keepAlive().getStringRep()); + } + builder.endObject(); + return builder; + } + + /** + * Parse a search scroll request from a request body provided through the REST layer. + * Values that are already be set and are also found while parsing will be overridden. + */ + public void fromXContent(XContentParser parser) throws IOException { + if (parser.nextToken() != XContentParser.Token.START_OBJECT) { + throw new IllegalArgumentException("Malformed content, must start with an object"); + } else { + XContentParser.Token token; + String currentFieldName = null; + while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if ("scroll_id".equals(currentFieldName) && token == XContentParser.Token.VALUE_STRING) { + scrollId(parser.text()); + } else if ("scroll".equals(currentFieldName) && token == XContentParser.Token.VALUE_STRING) { + scroll(new Scroll(TimeValue.parseTimeValue(parser.text(), null, "scroll"))); + } else { + throw new IllegalArgumentException("Unknown parameter [" + currentFieldName + + "] in request body or parameter is of the wrong type[" + token + "] "); + } + } + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchShardIterator.java b/core/src/main/java/org/elasticsearch/action/search/SearchShardIterator.java new file mode 100644 index 0000000000000..d3d707771b8db --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/search/SearchShardIterator.java @@ -0,0 +1,61 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
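The toXContent/fromXContent pair added above round-trips a very small request body: a required scroll_id string and an optional scroll keep-alive. A sketch of that shape, using Jackson purely for illustration (the patch itself goes through XContentBuilder and XContentParser, not Jackson):

    import com.fasterxml.jackson.databind.JsonNode;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class ScrollBodyRoundTrip {
        public static void main(String[] args) throws Exception {
            ObjectMapper mapper = new ObjectMapper();

            // the shape SearchScrollRequest.toXContent produces
            String body = mapper.createObjectNode()
                    .put("scroll_id", "scroll-id-from-previous-response")
                    .put("scroll", "1m")
                    .toString();

            // the shape fromXContent expects back: scroll_id is required, scroll is optional
            JsonNode parsed = mapper.readTree(body);
            String scrollId = parsed.get("scroll_id").asText();
            String keepAlive = parsed.has("scroll") ? parsed.get("scroll").asText() : null;

            System.out.println("scroll_id=" + scrollId + ", keep-alive=" + keepAlive);
        }
    }

Any field other than these two is rejected in the real parser, which keeps the REST body strict.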
+ */ + +package org.elasticsearch.action.search; + +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.cluster.routing.PlainShardIterator; +import org.elasticsearch.cluster.routing.ShardRouting; +import org.elasticsearch.index.shard.ShardId; + +import java.util.List; + +/** + * Extension of {@link PlainShardIterator} used in the search api, which also holds the {@link OriginalIndices} + * of the search request. Useful especially with cross cluster search, as each cluster has its own set of original indices. + */ +public final class SearchShardIterator extends PlainShardIterator { + + private final OriginalIndices originalIndices; + private String clusterAlias; + + /** + * Creates a {@link PlainShardIterator} instance that iterates over a subset of the given shards + * this the a given shardId. + * + * @param shardId shard id of the group + * @param shards shards to iterate + */ + public SearchShardIterator(String clusterAlias, ShardId shardId, List shards, OriginalIndices originalIndices) { + super(shardId, shards); + this.originalIndices = originalIndices; + this.clusterAlias = clusterAlias; + } + + /** + * Returns the original indices associated with this shard iterator, specifically with the cluster that this shard belongs to. + */ + public OriginalIndices getOriginalIndices() { + return originalIndices; + } + + public String getClusterAlias() { + return clusterAlias; + } +} diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchTask.java b/core/src/main/java/org/elasticsearch/action/search/SearchTask.java index 24f94a4331909..d0a1cdd456f47 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchTask.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchTask.java @@ -31,4 +31,9 @@ public SearchTask(long id, String type, String action, String description, TaskI super(id, type, action, description, parentTaskId); } + @Override + public boolean shouldCancelChildrenOnCancellation() { + return true; + } + } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java b/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java index 5b05213256618..2d20d383288f4 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchTransportService.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.search; +import org.elasticsearch.Version; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.ActionListenerResponseHandler; import org.elasticsearch.action.IndicesRequest; @@ -29,6 +30,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.SearchService; import org.elasticsearch.search.dfs.DfsSearchResult; import org.elasticsearch.search.fetch.FetchSearchResult; @@ -40,17 +42,21 @@ import org.elasticsearch.search.internal.ShardSearchTransportRequest; import org.elasticsearch.search.query.QuerySearchRequest; import org.elasticsearch.search.query.QuerySearchResult; -import org.elasticsearch.search.query.QuerySearchResultProvider; import org.elasticsearch.search.query.ScrollQuerySearchResult; import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.RemoteClusterService; +import 
org.elasticsearch.transport.Transport; +import org.elasticsearch.transport.TransportActionProxy; import org.elasticsearch.transport.TaskAwareTransportRequestHandler; import org.elasticsearch.transport.TransportChannel; import org.elasticsearch.transport.TransportRequest; +import org.elasticsearch.transport.TransportRequestOptions; import org.elasticsearch.transport.TransportResponse; import org.elasticsearch.transport.TransportService; import java.io.IOException; +import java.util.function.Supplier; /** * An encapsulation of {@link org.elasticsearch.search.SearchService} operations exposed through @@ -66,7 +72,6 @@ public class SearchTransportService extends AbstractComponent { public static final String QUERY_ID_ACTION_NAME = "indices:data/read/search[phase/query/id]"; public static final String QUERY_SCROLL_ACTION_NAME = "indices:data/read/search[phase/query/scroll]"; public static final String QUERY_FETCH_ACTION_NAME = "indices:data/read/search[phase/query+fetch]"; - public static final String QUERY_QUERY_FETCH_ACTION_NAME = "indices:data/read/search[phase/query/query+fetch]"; public static final String QUERY_FETCH_SCROLL_ACTION_NAME = "indices:data/read/search[phase/query+fetch/scroll]"; public static final String FETCH_ID_SCROLL_ACTION_NAME = "indices:data/read/search[phase/fetch/id/scroll]"; public static final String FETCH_ID_ACTION_NAME = "indices:data/read/search[phase/fetch/id]"; @@ -78,9 +83,9 @@ public SearchTransportService(Settings settings, TransportService transportServi this.transportService = transportService; } - public void sendFreeContext(DiscoveryNode node, final long contextId, SearchRequest request) { - transportService.sendRequest(node, FREE_CONTEXT_ACTION_NAME, new SearchFreeContextRequest(request, contextId), - new ActionListenerResponseHandler<>(new ActionListener() { + public void sendFreeContext(Transport.Connection connection, final long contextId, OriginalIndices originalIndices) { + transportService.sendRequest(connection, FREE_CONTEXT_ACTION_NAME, new SearchFreeContextRequest(originalIndices, contextId), + TransportRequestOptions.EMPTY, new ActionListenerResponseHandler<>(new ActionListener() { @Override public void onResponse(SearchFreeContextResponse response) { // no need to respond if it was freed or not @@ -93,72 +98,92 @@ public void onFailure(Exception e) { }, SearchFreeContextResponse::new)); } - public void sendFreeContext(DiscoveryNode node, long contextId, final ActionListener listener) { - transportService.sendRequest(node, FREE_CONTEXT_SCROLL_ACTION_NAME, new ScrollFreeContextRequest(contextId), - new ActionListenerResponseHandler<>(listener, SearchFreeContextResponse::new)); + public void sendFreeContext(Transport.Connection connection, long contextId, final ActionListener listener) { + transportService.sendRequest(connection, FREE_CONTEXT_SCROLL_ACTION_NAME, new ScrollFreeContextRequest(contextId), + TransportRequestOptions.EMPTY, new ActionListenerResponseHandler<>(listener, SearchFreeContextResponse::new)); } - public void sendClearAllScrollContexts(DiscoveryNode node, final ActionListener listener) { - transportService.sendRequest(node, CLEAR_SCROLL_CONTEXTS_ACTION_NAME, TransportRequest.Empty.INSTANCE, - new ActionListenerResponseHandler<>(listener, () -> TransportResponse.Empty.INSTANCE)); + public void sendClearAllScrollContexts(Transport.Connection connection, final ActionListener listener) { + transportService.sendRequest(connection, CLEAR_SCROLL_CONTEXTS_ACTION_NAME, TransportRequest.Empty.INSTANCE, + TransportRequestOptions.EMPTY, 
new ActionListenerResponseHandler<>(listener, () -> TransportResponse.Empty.INSTANCE)); } - public void sendExecuteDfs(DiscoveryNode node, final ShardSearchTransportRequest request, SearchTask task, - final ActionListener listener) { - transportService.sendChildRequest(node, DFS_ACTION_NAME, request, task, + public void sendExecuteDfs(Transport.Connection connection, final ShardSearchTransportRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(connection, DFS_ACTION_NAME, request, task, new ActionListenerResponseHandler<>(listener, DfsSearchResult::new)); } - public void sendExecuteQuery(DiscoveryNode node, final ShardSearchTransportRequest request, SearchTask task, - final ActionListener listener) { - transportService.sendChildRequest(node, QUERY_ACTION_NAME, request, task, - new ActionListenerResponseHandler<>(listener, QuerySearchResult::new)); + public void sendExecuteQuery(Transport.Connection connection, final ShardSearchTransportRequest request, SearchTask task, + final SearchActionListener listener) { + // we optimize this and expect a QueryFetchSearchResult if we only have a single shard in the search request + // this used to be the QUERY_AND_FETCH which doesn't exists anymore. + final boolean fetchDocuments = request.numberOfShards() == 1; + Supplier supplier = fetchDocuments ? QueryFetchSearchResult::new : QuerySearchResult::new; + if (connection.getVersion().before(Version.V_5_3_0) && fetchDocuments) { + // this is a BWC layer for pre 5.3 indices + if (request.scroll() != null) { + /** + * This is needed for nodes pre 5.3 when the single shard optimization is used. + * These nodes will set the last emitted doc only if the removed `query_and_fetch` search type is set + * in the request. See {@link SearchType}. 
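The sendExecuteQuery method above gates the single-shard optimization on the remote node's version: pre-5.3 nodes still expect the legacy query+fetch action. A compact, self-contained sketch of that kind of version gate (LegacyAwareSender and the Version class here are illustrative; only the two action names come from the diff):

    public class LegacyAwareSender {

        /** Minimal three-part version, ordered numerically. */
        static final class Version {
            final int major, minor, patch;

            Version(int major, int minor, int patch) {
                this.major = major;
                this.minor = minor;
                this.patch = patch;
            }

            boolean before(Version other) {
                if (major != other.major) return major < other.major;
                if (minor != other.minor) return minor < other.minor;
                return patch < other.patch;
            }
        }

        static final Version V_5_3_0 = new Version(5, 3, 0);

        static String selectQueryAction(Version remoteNodeVersion, int numberOfShards) {
            // single-shard searches can fetch documents in the same round trip as the query
            boolean fetchDocuments = numberOfShards == 1;
            if (fetchDocuments && remoteNodeVersion.before(V_5_3_0)) {
                return "indices:data/read/search[phase/query+fetch]";   // legacy BWC path
            }
            return "indices:data/read/search[phase/query]";
        }

        public static void main(String[] args) {
            System.out.println(selectQueryAction(new Version(5, 2, 2), 1)); // legacy action
            System.out.println(selectQueryAction(new Version(5, 3, 0), 1)); // regular query action
        }
    }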
+ */ + request.searchType(SearchType.QUERY_AND_FETCH); + } + // TODO this BWC layer can be removed once this is back-ported to 5.3 + transportService.sendChildRequest(connection, QUERY_FETCH_ACTION_NAME, request, task, + new ActionListenerResponseHandler<>(listener, supplier)); + } else { + transportService.sendChildRequest(connection, QUERY_ACTION_NAME, request, task, + new ActionListenerResponseHandler<>(listener, supplier)); + } } - public void sendExecuteQuery(DiscoveryNode node, final QuerySearchRequest request, SearchTask task, - final ActionListener listener) { - transportService.sendChildRequest(node, QUERY_ID_ACTION_NAME, request, task, + public void sendExecuteQuery(Transport.Connection connection, final QuerySearchRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(connection, QUERY_ID_ACTION_NAME, request, task, new ActionListenerResponseHandler<>(listener, QuerySearchResult::new)); } - public void sendExecuteQuery(DiscoveryNode node, final InternalScrollSearchRequest request, SearchTask task, - final ActionListener listener) { - transportService.sendChildRequest(node, QUERY_SCROLL_ACTION_NAME, request, task, + public void sendExecuteScrollQuery(DiscoveryNode node, final InternalScrollSearchRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(transportService.getConnection(node), QUERY_SCROLL_ACTION_NAME, request, task, new ActionListenerResponseHandler<>(listener, ScrollQuerySearchResult::new)); } - public void sendExecuteFetch(DiscoveryNode node, final ShardSearchTransportRequest request, SearchTask task, - final ActionListener listener) { - transportService.sendChildRequest(node, QUERY_FETCH_ACTION_NAME, request, task, - new ActionListenerResponseHandler<>(listener, QueryFetchSearchResult::new)); + public void sendExecuteScrollFetch(DiscoveryNode node, final InternalScrollSearchRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(transportService.getConnection(node), QUERY_FETCH_SCROLL_ACTION_NAME, request, task, + new ActionListenerResponseHandler<>(listener, ScrollQueryFetchSearchResult::new)); } - public void sendExecuteFetch(DiscoveryNode node, final QuerySearchRequest request, SearchTask task, - final ActionListener listener) { - transportService.sendChildRequest(node, QUERY_QUERY_FETCH_ACTION_NAME, request, task, - new ActionListenerResponseHandler<>(listener, QueryFetchSearchResult::new)); + public void sendExecuteFetch(Transport.Connection connection, final ShardFetchSearchRequest request, SearchTask task, + final SearchActionListener listener) { + sendExecuteFetch(connection, FETCH_ID_ACTION_NAME, request, task, listener); } - public void sendExecuteFetch(DiscoveryNode node, final InternalScrollSearchRequest request, SearchTask task, - final ActionListener listener) { - transportService.sendChildRequest(node, QUERY_FETCH_SCROLL_ACTION_NAME, request, task, - new ActionListenerResponseHandler<>(listener, ScrollQueryFetchSearchResult::new)); + public void sendExecuteFetchScroll(DiscoveryNode node, final ShardFetchRequest request, SearchTask task, + final SearchActionListener listener) { + sendExecuteFetch(transportService.getConnection(node), FETCH_ID_SCROLL_ACTION_NAME, request, task, listener); } - public void sendExecuteFetch(DiscoveryNode node, final ShardFetchSearchRequest request, SearchTask task, - final ActionListener listener) { - sendExecuteFetch(node, FETCH_ID_ACTION_NAME, request, 
task, listener); + private void sendExecuteFetch(Transport.Connection connection, String action, final ShardFetchRequest request, SearchTask task, + final SearchActionListener listener) { + transportService.sendChildRequest(connection, action, request, task, + new ActionListenerResponseHandler<>(listener, FetchSearchResult::new)); } - public void sendExecuteFetchScroll(DiscoveryNode node, final ShardFetchRequest request, SearchTask task, - final ActionListener listener) { - sendExecuteFetch(node, FETCH_ID_SCROLL_ACTION_NAME, request, task, listener); + /** + * Used by {@link TransportSearchAction} to send the expand queries (field collapsing). + */ + void sendExecuteMultiSearch(final MultiSearchRequest request, SearchTask task, + final ActionListener listener) { + transportService.sendChildRequest(transportService.getConnection(transportService.getLocalNode()), MultiSearchAction.NAME, request, + task, new ActionListenerResponseHandler<>(listener, MultiSearchResponse::new)); } - private void sendExecuteFetch(DiscoveryNode node, String action, final ShardFetchRequest request, SearchTask task, - final ActionListener listener) { - transportService.sendChildRequest(node, action, request, task, - new ActionListenerResponseHandler<>(listener, FetchSearchResult::new)); + public RemoteClusterService getRemoteClusterService() { + return transportService.getRemoteClusterService(); } static class ScrollFreeContextRequest extends TransportRequest { @@ -191,12 +216,12 @@ public void writeTo(StreamOutput out) throws IOException { static class SearchFreeContextRequest extends ScrollFreeContextRequest implements IndicesRequest { private OriginalIndices originalIndices; - public SearchFreeContextRequest() { + SearchFreeContextRequest() { } - SearchFreeContextRequest(SearchRequest request, long id) { + SearchFreeContextRequest(OriginalIndices originalIndices, long id) { super(id); - this.originalIndices = new OriginalIndices(request); + this.originalIndices = originalIndices; } @Override @@ -265,6 +290,7 @@ public void messageReceived(ScrollFreeContextRequest request, TransportChannel c channel.sendResponse(new SearchFreeContextResponse(freed)); } }); + TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_SCROLL_ACTION_NAME, SearchFreeContextResponse::new); transportService.registerRequestHandler(FREE_CONTEXT_ACTION_NAME, SearchFreeContextRequest::new, ThreadPool.Names.SAME, new TaskAwareTransportRequestHandler() { @Override @@ -273,6 +299,7 @@ public void messageReceived(SearchFreeContextRequest request, TransportChannel c channel.sendResponse(new SearchFreeContextResponse(freed)); } }); + TransportActionProxy.registerProxyAction(transportService, FREE_CONTEXT_ACTION_NAME, SearchFreeContextResponse::new); transportService.registerRequestHandler(CLEAR_SCROLL_CONTEXTS_ACTION_NAME, () -> TransportRequest.Empty.INSTANCE, ThreadPool.Names.SAME, new TaskAwareTransportRequestHandler() { @@ -282,6 +309,9 @@ public void messageReceived(TransportRequest.Empty request, TransportChannel cha channel.sendResponse(TransportResponse.Empty.INSTANCE); } }); + TransportActionProxy.registerProxyAction(transportService, CLEAR_SCROLL_CONTEXTS_ACTION_NAME, + () -> TransportResponse.Empty.INSTANCE); + transportService.registerRequestHandler(DFS_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, new TaskAwareTransportRequestHandler() { @Override @@ -291,14 +321,18 @@ public void messageReceived(ShardSearchTransportRequest request, TransportChanne } }); + 
TransportActionProxy.registerProxyAction(transportService, DFS_ACTION_NAME, DfsSearchResult::new); + transportService.registerRequestHandler(QUERY_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel, Task task) throws Exception { - QuerySearchResultProvider result = searchService.executeQueryPhase(request, (SearchTask)task); + SearchPhaseResult result = searchService.executeQueryPhase(request, (SearchTask)task); channel.sendResponse(result); } }); + TransportActionProxy.registerProxyAction(transportService, QUERY_ACTION_NAME, QuerySearchResult::new); + transportService.registerRequestHandler(QUERY_ID_ACTION_NAME, QuerySearchRequest::new, ThreadPool.Names.SEARCH, new TaskAwareTransportRequestHandler() { @Override @@ -307,6 +341,8 @@ public void messageReceived(QuerySearchRequest request, TransportChannel channel channel.sendResponse(result); } }); + TransportActionProxy.registerProxyAction(transportService, QUERY_ID_ACTION_NAME, QuerySearchResult::new); + transportService.registerRequestHandler(QUERY_SCROLL_ACTION_NAME, InternalScrollSearchRequest::new, ThreadPool.Names.SEARCH, new TaskAwareTransportRequestHandler() { @Override @@ -315,22 +351,22 @@ public void messageReceived(InternalScrollSearchRequest request, TransportChanne channel.sendResponse(result); } }); + TransportActionProxy.registerProxyAction(transportService, QUERY_SCROLL_ACTION_NAME, ScrollQuerySearchResult::new); + + // this is for BWC with 5.3 until the QUERY_AND_FETCH removal change has been back-ported to 5.x + // in 5.3 we will only execute a `indices:data/read/search[phase/query+fetch]` if the node is pre 5.3 + // such that we can remove this after the back-port. 
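Each handler registration above is now paired with a TransportActionProxy registration so that cross-cluster requests can be relayed through a proxy node. A rough, JDK-only sketch of the idea of registering a forwarding handler next to the real one (ActionRegistry and the "proxy/" name prefix are illustrative, not the actual transport layer):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    public class ActionRegistry {

        private final Map<String, Function<String, String>> handlers = new HashMap<>();

        void registerHandler(String action, Function<String, String> handler) {
            handlers.put(action, handler);
        }

        // registers a second handler that relays the request to another registry ("node")
        void registerProxyHandler(String action, ActionRegistry targetNode) {
            handlers.put("proxy/" + action, request -> targetNode.handle(action, request));
        }

        String handle(String action, String request) {
            Function<String, String> handler = handlers.get(action);
            if (handler == null) {
                throw new IllegalArgumentException("no handler for [" + action + "]");
            }
            return handler.apply(request);
        }

        public static void main(String[] args) {
            ActionRegistry remote = new ActionRegistry();
            remote.registerHandler("search[phase/query]", request -> "query result for " + request);

            ActionRegistry local = new ActionRegistry();
            local.registerProxyHandler("search[phase/query]", remote);

            // the caller only talks to the local node; the proxy handler forwards to the remote one
            System.out.println(local.handle("proxy/search[phase/query]", "{...}"));
        }
    }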
transportService.registerRequestHandler(QUERY_FETCH_ACTION_NAME, ShardSearchTransportRequest::new, ThreadPool.Names.SEARCH, new TaskAwareTransportRequestHandler() { @Override public void messageReceived(ShardSearchTransportRequest request, TransportChannel channel, Task task) throws Exception { - QueryFetchSearchResult result = searchService.executeFetchPhase(request, (SearchTask)task); - channel.sendResponse(result); - } - }); - transportService.registerRequestHandler(QUERY_QUERY_FETCH_ACTION_NAME, QuerySearchRequest::new, ThreadPool.Names.SEARCH, - new TaskAwareTransportRequestHandler() { - @Override - public void messageReceived(QuerySearchRequest request, TransportChannel channel, Task task) throws Exception { - QueryFetchSearchResult result = searchService.executeFetchPhase(request, (SearchTask)task); + assert request.numberOfShards() == 1 : "expected single shard request but got: " + request.numberOfShards(); + SearchPhaseResult result = searchService.executeQueryPhase(request, (SearchTask)task); channel.sendResponse(result); } }); + TransportActionProxy.registerProxyAction(transportService, QUERY_FETCH_ACTION_NAME, QueryFetchSearchResult::new); + transportService.registerRequestHandler(QUERY_FETCH_SCROLL_ACTION_NAME, InternalScrollSearchRequest::new, ThreadPool.Names.SEARCH, new TaskAwareTransportRequestHandler() { @Override @@ -339,6 +375,8 @@ public void messageReceived(InternalScrollSearchRequest request, TransportChanne channel.sendResponse(result); } }); + TransportActionProxy.registerProxyAction(transportService, QUERY_FETCH_SCROLL_ACTION_NAME, ScrollQueryFetchSearchResult::new); + transportService.registerRequestHandler(FETCH_ID_SCROLL_ACTION_NAME, ShardFetchRequest::new, ThreadPool.Names.SEARCH, new TaskAwareTransportRequestHandler() { @Override @@ -347,6 +385,8 @@ public void messageReceived(ShardFetchRequest request, TransportChannel channel, channel.sendResponse(result); } }); + TransportActionProxy.registerProxyAction(transportService, FETCH_ID_SCROLL_ACTION_NAME, FetchSearchResult::new); + transportService.registerRequestHandler(FETCH_ID_ACTION_NAME, ShardFetchSearchRequest::new, ThreadPool.Names.SEARCH, new TaskAwareTransportRequestHandler() { @Override @@ -355,6 +395,21 @@ public void messageReceived(ShardFetchSearchRequest request, TransportChannel ch channel.sendResponse(result); } }); + TransportActionProxy.registerProxyAction(transportService, FETCH_ID_ACTION_NAME, FetchSearchResult::new); + } + /** + * Returns a connection to the given node on the provided cluster. If the cluster alias is null the node will be resolved + * against the local cluster. + * @param clusterAlias the cluster alias the node should be resolve against + * @param node the node to resolve + * @return a connection to the given node belonging to the cluster with the provided alias. + */ + Transport.Connection getConnection(String clusterAlias, DiscoveryNode node) { + if (clusterAlias == null) { + return transportService.getConnection(node); + } else { + return transportService.getRemoteClusterService().getConnection(node, clusterAlias); + } } } diff --git a/core/src/main/java/org/elasticsearch/action/search/SearchType.java b/core/src/main/java/org/elasticsearch/action/search/SearchType.java index 93b7815616185..b800120408739 100644 --- a/core/src/main/java/org/elasticsearch/action/search/SearchType.java +++ b/core/src/main/java/org/elasticsearch/action/search/SearchType.java @@ -37,16 +37,12 @@ public enum SearchType { * are fetched. 
This is very handy when the index has a lot of shards (not replicas, shard id groups). */ QUERY_THEN_FETCH((byte) 1), + // 2 used to be DFS_QUERY_AND_FETCH + /** - * Same as {@link #QUERY_AND_FETCH}, except for an initial scatter phase which goes and computes the distributed - * term frequencies for more accurate scoring. - */ - DFS_QUERY_AND_FETCH((byte) 2), - /** - * The most naive (and possibly fastest) implementation is to simply execute the query on all relevant shards - * and return the results. Each shard returns size results. Since each shard already returns size hits, this - * type actually returns size times number of shards results back to the caller. + * Only used for pre 5.3 request where this type is still needed */ + @Deprecated QUERY_AND_FETCH((byte) 3); /** @@ -73,12 +69,9 @@ public byte id() { public static SearchType fromId(byte id) { if (id == 0) { return DFS_QUERY_THEN_FETCH; - } else if (id == 1) { + } else if (id == 1 + || id == 3) { // TODO this bwc layer can be removed once this is back-ported to 5.3 QUERY_AND_FETCH is removed now return QUERY_THEN_FETCH; - } else if (id == 2) { - return DFS_QUERY_AND_FETCH; - } else if (id == 3) { - return QUERY_AND_FETCH; } else { throw new IllegalArgumentException("No search type for [" + id + "]"); } @@ -95,12 +88,8 @@ public static SearchType fromString(String searchType) { } if ("dfs_query_then_fetch".equals(searchType)) { return SearchType.DFS_QUERY_THEN_FETCH; - } else if ("dfs_query_and_fetch".equals(searchType)) { - return SearchType.DFS_QUERY_AND_FETCH; } else if ("query_then_fetch".equals(searchType)) { return SearchType.QUERY_THEN_FETCH; - } else if ("query_and_fetch".equals(searchType)) { - return SearchType.QUERY_AND_FETCH; } else { throw new IllegalArgumentException("No search type for [" + searchType + "]"); } diff --git a/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java b/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java index 8070081dcd865..7eb939ca8274e 100644 --- a/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java +++ b/core/src/main/java/org/elasticsearch/action/search/ShardSearchFailure.java @@ -21,22 +21,34 @@ import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; +import org.elasticsearch.action.OriginalIndices; import org.elasticsearch.action.ShardOperationFailedException; +import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.xcontent.XContentBuilder; +import org.elasticsearch.common.xcontent.XContentParser; +import org.elasticsearch.index.Index; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; import org.elasticsearch.search.SearchException; import org.elasticsearch.search.SearchShardTarget; import java.io.IOException; +import static org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; + /** * Represents a failure to search on a specific shard. 
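The SearchType change above drops DFS_QUERY_AND_FETCH entirely and deprecates QUERY_AND_FETCH, while fromId keeps the old wire id 3 readable by folding it into QUERY_THEN_FETCH; id 2 is now rejected outright. A sketch of that id-folding (the enum below is illustrative, not the real SearchType):

    public enum SketchSearchType {
        DFS_QUERY_THEN_FETCH((byte) 0),
        QUERY_THEN_FETCH((byte) 1);

        private final byte id;

        SketchSearchType(byte id) {
            this.id = id;
        }

        public byte id() {
            return id;
        }

        public static SketchSearchType fromId(byte id) {
            switch (id) {
                case 0:
                    return DFS_QUERY_THEN_FETCH;
                case 1:
                case 3: // legacy QUERY_AND_FETCH wire id, kept readable for mixed-version clusters
                    return QUERY_THEN_FETCH;
                default:
                    throw new IllegalArgumentException("No search type for [" + id + "]");
            }
        }

        public static void main(String[] args) {
            System.out.println(fromId((byte) 3)); // prints QUERY_THEN_FETCH
        }
    }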
*/ public class ShardSearchFailure implements ShardOperationFailedException { + private static final String REASON_FIELD = "reason"; + private static final String NODE_FIELD = "node"; + private static final String INDEX_FIELD = "index"; + private static final String SHARD_FIELD = "shard"; + public static final ShardSearchFailure[] EMPTY_ARRAY = new ShardSearchFailure[0]; private SearchShardTarget shardTarget; @@ -68,7 +80,7 @@ public ShardSearchFailure(String reason, SearchShardTarget shardTarget) { this(reason, shardTarget, RestStatus.INTERNAL_SERVER_ERROR); } - public ShardSearchFailure(String reason, SearchShardTarget shardTarget, RestStatus status) { + private ShardSearchFailure(String reason, SearchShardTarget shardTarget, RestStatus status) { this.shardTarget = shardTarget; this.reason = reason; this.status = status; @@ -93,7 +105,7 @@ public RestStatus status() { @Override public String index() { if (shardTarget != null) { - return shardTarget.index(); + return shardTarget.getIndex(); } return null; } @@ -104,7 +116,7 @@ public String index() { @Override public int shardId() { if (shardTarget != null) { - return shardTarget.shardId().id(); + return shardTarget.getShardId().id(); } return -1; } @@ -153,20 +165,56 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.field("shard", shardId()); - builder.field("index", index()); + builder.field(SHARD_FIELD, shardId()); + builder.field(INDEX_FIELD, index()); if (shardTarget != null) { - builder.field("node", shardTarget.nodeId()); + builder.field(NODE_FIELD, shardTarget.getNodeId()); } if (cause != null) { - builder.field("reason"); + builder.field(REASON_FIELD); builder.startObject(); - ElasticsearchException.toXContent(builder, params, cause); + ElasticsearchException.generateThrowableXContent(builder, params, cause); builder.endObject(); } return builder; } + public static ShardSearchFailure fromXContent(XContentParser parser) throws IOException { + XContentParser.Token token; + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.currentToken(), parser::getTokenLocation); + String currentFieldName = null; + int shardId = -1; + String indexName = null; + String nodeId = null; + ElasticsearchException exception = null; + while((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { + if (token == XContentParser.Token.FIELD_NAME) { + currentFieldName = parser.currentName(); + } else if (token.isValue()) { + if (SHARD_FIELD.equals(currentFieldName)) { + shardId = parser.intValue(); + } else if (INDEX_FIELD.equals(currentFieldName)) { + indexName = parser.text(); + } else if (NODE_FIELD.equals(currentFieldName)) { + nodeId = parser.text(); + } else { + parser.skipChildren(); + } + } else if (token == XContentParser.Token.START_OBJECT) { + if (REASON_FIELD.equals(currentFieldName)) { + exception = ElasticsearchException.fromXContent(parser); + } else { + parser.skipChildren(); + } + } else { + parser.skipChildren(); + } + } + return new ShardSearchFailure(exception, + new SearchShardTarget(nodeId, + new ShardId(new Index(indexName, IndexMetaData.INDEX_UUID_NA_VALUE), shardId), null, OriginalIndices.NONE)); + } + @Override public Throwable getCause() { return cause; diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java index 716077c915d6b..d9afbdacafe3c 100644 --- 
a/core/src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportClearScrollAction.java @@ -19,30 +19,16 @@ package org.elasticsearch.action.search; -import org.apache.logging.log4j.message.ParameterizedMessage; -import org.apache.logging.log4j.util.Supplier; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.HandledTransportAction; -import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.node.DiscoveryNode; -import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Settings; -import org.elasticsearch.common.util.concurrent.CountDown; import org.elasticsearch.threadpool.ThreadPool; -import org.elasticsearch.transport.TransportResponse; import org.elasticsearch.transport.TransportService; -import java.util.ArrayList; -import java.util.List; -import java.util.concurrent.atomic.AtomicInteger; -import java.util.concurrent.atomic.AtomicReference; - -import static org.elasticsearch.action.search.TransportSearchHelper.parseScrollId; - public class TransportClearScrollAction extends HandledTransportAction { private final ClusterService clusterService; @@ -53,105 +39,16 @@ public TransportClearScrollAction(Settings settings, TransportService transportS ClusterService clusterService, ActionFilters actionFilters, IndexNameExpressionResolver indexNameExpressionResolver, SearchTransportService searchTransportService) { - super(settings, ClearScrollAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, ClearScrollRequest::new); + super(settings, ClearScrollAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, + ClearScrollRequest::new); this.clusterService = clusterService; this.searchTransportService = searchTransportService; } @Override protected void doExecute(ClearScrollRequest request, final ActionListener listener) { - new Async(request, listener, clusterService.state()).run(); - } - - private class Async { - final DiscoveryNodes nodes; - final CountDown expectedOps; - final List contexts = new ArrayList<>(); - final ActionListener listener; - final AtomicReference expHolder; - final AtomicInteger numberOfFreedSearchContexts = new AtomicInteger(0); - - private Async(ClearScrollRequest request, ActionListener listener, ClusterState clusterState) { - int expectedOps = 0; - this.nodes = clusterState.nodes(); - if (request.getScrollIds().size() == 1 && "_all".equals(request.getScrollIds().get(0))) { - expectedOps = nodes.getSize(); - } else { - for (String parsedScrollId : request.getScrollIds()) { - ScrollIdForNode[] context = parseScrollId(parsedScrollId).getContext(); - expectedOps += context.length; - this.contexts.add(context); - } - } - this.listener = listener; - this.expHolder = new AtomicReference<>(); - this.expectedOps = new CountDown(expectedOps); - } - - public void run() { - if (expectedOps.isCountedDown()) { - listener.onResponse(new ClearScrollResponse(true, 0)); - return; - } - - if (contexts.isEmpty()) { - for (final DiscoveryNode node : nodes) { - searchTransportService.sendClearAllScrollContexts(node, new ActionListener() { - @Override - public void onResponse(TransportResponse response) { - 
onFreedContext(true); - } - - @Override - public void onFailure(Exception e) { - onFailedFreedContext(e, node); - } - }); - } - } else { - for (ScrollIdForNode[] context : contexts) { - for (ScrollIdForNode target : context) { - final DiscoveryNode node = nodes.get(target.getNode()); - if (node == null) { - onFreedContext(false); - continue; - } - - searchTransportService.sendFreeContext(node, target.getScrollId(), new ActionListener() { - @Override - public void onResponse(SearchTransportService.SearchFreeContextResponse freed) { - onFreedContext(freed.isFreed()); - } - - @Override - public void onFailure(Exception e) { - onFailedFreedContext(e, node); - } - }); - } - } - } - } - - void onFreedContext(boolean freed) { - if (freed) { - numberOfFreedSearchContexts.incrementAndGet(); - } - if (expectedOps.countDown()) { - boolean succeeded = expHolder.get() == null; - listener.onResponse(new ClearScrollResponse(succeeded, numberOfFreedSearchContexts.get())); - } - } - - void onFailedFreedContext(Throwable e, DiscoveryNode node) { - logger.warn((Supplier) () -> new ParameterizedMessage("Clear SC failed on node[{}]", node), e); - if (expectedOps.countDown()) { - listener.onResponse(new ClearScrollResponse(false, numberOfFreedSearchContexts.get())); - } else { - expHolder.set(e); - } - } - + Runnable runnable = new ClearScrollController(request, listener, clusterService.state().nodes(), logger, searchTransportService); + runnable.run(); } } diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java index 2bceccce385dc..db5a21edb2bea 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportMultiSearchAction.java @@ -46,19 +46,18 @@ public class TransportMultiSearchAction extends HandledTransportAction searchAction, - IndexNameExpressionResolver indexNameExpressionResolver, int availableProcessors) { - super(Settings.EMPTY, MultiSearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, MultiSearchRequest::new); + IndexNameExpressionResolver resolver, int availableProcessors) { + super(Settings.EMPTY, MultiSearchAction.NAME, threadPool, transportService, actionFilters, resolver, MultiSearchRequest::new); this.clusterService = clusterService; this.searchAction = searchAction; this.availableProcessors = availableProcessors; @@ -90,10 +89,9 @@ protected void doExecute(MultiSearchRequest request, ActionListener requests, AtomicArray responses, - AtomicInteger responseCounter, ActionListener listener) { + /** + * Executes a single request from the queue of requests. When a request finishes, another request is taken from the queue. When a + * request is executed, a permit is taken on the specified semaphore, and released as each request completes. + * + * @param requests the queue of multi-search requests to execute + * @param responses atomic array to hold the responses corresponding to each search request slot + * @param responseCounter incremented on each response + * @param listener the listener attached to the multi-search request + */ + private void executeSearch( + final Queue requests, + final AtomicArray responses, + final AtomicInteger responseCounter, + final ActionListener listener) { SearchRequestSlot request = requests.poll(); if (request == null) { - // Ok... 
so there're no more requests then this is ok, we're then waiting for running requests to complete + /* + * The number of times that we poll an item from the queue here is the minimum of the number of requests and the maximum number + * of concurrent requests. At first glance, it appears that we should never poll from the queue and not obtain a request given + * that we only poll here no more times than the number of requests. However, this is not the only consumer of this queue as + * earlier requests that have already completed will poll from the queue too and they could complete before later polls are + * invoked here. Thus, it can be the case that we poll here and and the queue was empty. + */ return; } + + /* + * With a request in hand, we are now prepared to execute the search request. There are two possibilities, either we go asynchronous + * or we do not (this can happen if the request does not resolve to any shards). If we do not go asynchronous, we are going to come + * back on the same thread that attempted to execute the search request. At this point, or any other point where we come back on the + * same thread as when the request was submitted, we should not recurse lest we might descend into a stack overflow. To avoid this, + * when we handle the response rather than going recursive, we fork to another thread, otherwise we recurse. + */ + final Thread thread = Thread.currentThread(); searchAction.execute(request.request, new ActionListener() { @Override - public void onResponse(SearchResponse searchResponse) { - responses.set(request.responseSlot, new MultiSearchResponse.Item(searchResponse, null)); - handleResponse(); + public void onResponse(final SearchResponse searchResponse) { + handleResponse(request.responseSlot, new MultiSearchResponse.Item(searchResponse, null)); } @Override - public void onFailure(Exception e) { - responses.set(request.responseSlot, new MultiSearchResponse.Item(null, e)); - handleResponse(); + public void onFailure(final Exception e) { + handleResponse(request.responseSlot, new MultiSearchResponse.Item(null, e)); } - private void handleResponse() { + private void handleResponse(final int responseSlot, final MultiSearchResponse.Item item) { + responses.set(responseSlot, item); if (responseCounter.decrementAndGet() == 0) { - listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()]))); + assert requests.isEmpty(); + finish(); } else { - executeSearch(requests, responses, responseCounter, listener); + if (thread == Thread.currentThread()) { + // we are on the same thread, we need to fork to another thread to avoid recursive stack overflow on a single thread + threadPool.generic().execute(() -> executeSearch(requests, responses, responseCounter, listener)); + } else { + // we are on a different thread (we went asynchronous), it's safe to recurse + executeSearch(requests, responses, responseCounter, listener); + } } } + + private void finish() { + listener.onResponse(new MultiSearchResponse(responses.toArray(new MultiSearchResponse.Item[responses.length()]))); + } }); } @@ -142,5 +177,7 @@ static final class SearchRequestSlot { this.request = request; this.responseSlot = responseSlot; } + } + } diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java index 48ee5cc288bb3..720fb17ae948b 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java +++ 
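The multi-search change above avoids unbounded recursion when a search completes on the calling thread: the response handler compares the completing thread with the thread that submitted the request and only recurses once it has hopped to a different thread, forking to another executor otherwise. A self-contained sketch of that guard (the names are illustrative; a small fixed pool stands in for the generic thread pool):

    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicInteger;

    public class RecursionSafeDrain {

        private final ExecutorService generic = Executors.newFixedThreadPool(2);
        private final AtomicInteger remaining;
        private final CountDownLatch done = new CountDownLatch(1);

        RecursionSafeDrain(int requests) {
            this.remaining = new AtomicInteger(requests);
        }

        void executeNext(Queue<Integer> requests) {
            Integer request = requests.poll();
            if (request == null) {
                return; // another completion already drained the queue
            }
            final Thread submittingThread = Thread.currentThread();
            execute(request, () -> {
                if (remaining.decrementAndGet() == 0) {
                    done.countDown();
                } else if (Thread.currentThread() == submittingThread) {
                    // completed synchronously: fork instead of recursing on the same stack
                    generic.execute(() -> executeNext(requests));
                } else {
                    // we already hopped threads, recursing is safe
                    executeNext(requests);
                }
            });
        }

        // stand-in for a search that may complete on the calling thread
        private void execute(int request, Runnable onResponse) {
            onResponse.run();
        }

        public static void main(String[] args) throws InterruptedException {
            Queue<Integer> requests = new ConcurrentLinkedQueue<>();
            for (int i = 0; i < 10_000; i++) {
                requests.add(i);
            }
            RecursionSafeDrain drain = new RecursionSafeDrain(10_000);
            drain.executeNext(requests);
            drain.done.await();
            drain.generic.shutdown();
            System.out.println("all requests handled without deep recursion");
        }
    }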
b/core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java @@ -20,44 +20,58 @@ package org.elasticsearch.action.search; import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.OriginalIndices; +import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsGroup; +import org.elasticsearch.action.admin.cluster.shards.ClusterSearchShardsResponse; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.HandledTransportAction; import org.elasticsearch.cluster.ClusterState; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; import org.elasticsearch.cluster.node.DiscoveryNode; +import org.elasticsearch.cluster.node.DiscoveryNodes; import org.elasticsearch.cluster.routing.GroupShardsIterator; +import org.elasticsearch.cluster.routing.ShardIterator; import org.elasticsearch.cluster.service.ClusterService; +import org.elasticsearch.common.Strings; import org.elasticsearch.common.inject.Inject; import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.Index; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.search.SearchService; import org.elasticsearch.search.builder.SearchSourceBuilder; import org.elasticsearch.search.internal.AliasFilter; import org.elasticsearch.tasks.Task; import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.RemoteClusterAware; +import org.elasticsearch.transport.RemoteClusterService; +import org.elasticsearch.transport.Transport; import org.elasticsearch.transport.TransportService; +import java.util.ArrayList; +import java.util.Arrays; import java.util.Collections; import java.util.HashMap; +import java.util.List; import java.util.Map; import java.util.Set; import java.util.concurrent.Executor; -import java.util.function.Function; +import java.util.function.BiFunction; +import java.util.function.LongSupplier; -import static org.elasticsearch.action.search.SearchType.QUERY_AND_FETCH; import static org.elasticsearch.action.search.SearchType.QUERY_THEN_FETCH; public class TransportSearchAction extends HandledTransportAction { /** The maximum number of shards for a single search request. 
*/ public static final Setting SHARD_COUNT_LIMIT_SETTING = Setting.longSetting( - "action.search.shard_count.limit", 1000L, 1L, Property.Dynamic, Property.NodeScope); + "action.search.shard_count.limit", Long.MAX_VALUE, 1L, Property.Dynamic, Property.NodeScope); private final ClusterService clusterService; private final SearchTransportService searchTransportService; + private final RemoteClusterService remoteClusterService; private final SearchPhaseController searchPhaseController; private final SearchService searchService; @@ -69,12 +83,14 @@ public TransportSearchAction(Settings settings, ThreadPool threadPool, Transport super(settings, SearchAction.NAME, threadPool, transportService, actionFilters, indexNameExpressionResolver, SearchRequest::new); this.searchPhaseController = searchPhaseController; this.searchTransportService = searchTransportService; + this.remoteClusterService = searchTransportService.getRemoteClusterService(); SearchTransportService.registerRequestHandler(transportService, searchService); this.clusterService = clusterService; this.searchService = searchService; } - private Map buildPerIndexAliasFilter(SearchRequest request, ClusterState clusterState, Index[] concreteIndices) { + private Map buildPerIndexAliasFilter(SearchRequest request, ClusterState clusterState, + Index[] concreteIndices, Map remoteAliasMap) { final Map aliasFilterMap = new HashMap<>(); for (Index index : concreteIndices) { clusterState.blocks().indexBlockedRaiseException(ClusterBlockLevel.READ, index.getName()); @@ -82,6 +98,7 @@ private Map buildPerIndexAliasFilter(SearchRequest request, assert aliasFilter != null; aliasFilterMap.put(index.getUUID(), aliasFilter); } + aliasFilterMap.putAll(remoteAliasMap); return aliasFilterMap; } @@ -104,31 +121,161 @@ private Map resolveIndexBoosts(SearchRequest searchRequest, Clust concreteIndexBoosts.putIfAbsent(concreteIndex.getUUID(), ib.getBoost()); } } - return Collections.unmodifiableMap(concreteIndexBoosts); } + /** + * Search operations need two clocks. One clock is to fulfill real clock needs (e.g., resolving + * "now" to an index name). Another clock is needed for measuring how long a search operation + * took. These two uses are at odds with each other. There are many issues with using a real + * clock for measuring how long an operation took (they often lack precision, they are subject + * to moving backwards due to NTP and other such complexities, etc.). There are also issues with + * using a relative clock for reporting real time. Thus, we simply separate these two uses. + */ + static class SearchTimeProvider { + + private final long absoluteStartMillis; + private final long relativeStartNanos; + private final LongSupplier relativeCurrentNanosProvider; + + /** + * Instantiates a new search time provider. The absolute start time is the real clock time + * used for resolving index expressions that include dates. The relative start time is the + * start of the search operation according to a relative clock. The total time the search + * operation took can be measured against the provided relative clock and the relative start + * time. 
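The SearchTimeProvider javadoc above explains the two-clock split: an absolute wall-clock start for resolving date math such as index names containing "now", and a relative nanosecond clock for measuring how long the search took. A minimal stand-alone version of the same idea (not the Elasticsearch class itself):

    import java.util.concurrent.TimeUnit;
    import java.util.function.LongSupplier;

    public final class TwoClockTimer {

        private final long absoluteStartMillis;   // wall clock: safe for resolving "now"
        private final long relativeStartNanos;    // monotonic clock: safe for measuring durations
        private final LongSupplier relativeNanosProvider;

        TwoClockTimer(long absoluteStartMillis, long relativeStartNanos, LongSupplier relativeNanosProvider) {
            this.absoluteStartMillis = absoluteStartMillis;
            this.relativeStartNanos = relativeStartNanos;
            this.relativeNanosProvider = relativeNanosProvider;
        }

        long absoluteStartMillis() {
            return absoluteStartMillis;
        }

        long tookMillis() {
            return TimeUnit.NANOSECONDS.toMillis(relativeNanosProvider.getAsLong() - relativeStartNanos);
        }

        public static void main(String[] args) throws InterruptedException {
            TwoClockTimer timer = new TwoClockTimer(System.currentTimeMillis(), System.nanoTime(), System::nanoTime);
            Thread.sleep(25);
            System.out.println("started at epoch ms " + timer.absoluteStartMillis() + ", took " + timer.tookMillis() + "ms");
        }
    }

Injecting the relative clock as a LongSupplier also makes the "took" measurement easy to control in tests.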
+ * + * @param absoluteStartMillis the absolute start time in milliseconds since the epoch + * @param relativeStartNanos the relative start time in nanoseconds + * @param relativeCurrentNanosProvider provides the current relative time + */ + SearchTimeProvider( + final long absoluteStartMillis, + final long relativeStartNanos, + final LongSupplier relativeCurrentNanosProvider) { + this.absoluteStartMillis = absoluteStartMillis; + this.relativeStartNanos = relativeStartNanos; + this.relativeCurrentNanosProvider = relativeCurrentNanosProvider; + } + + long getAbsoluteStartMillis() { + return absoluteStartMillis; + } + + long getRelativeStartNanos() { + return relativeStartNanos; + } + + long getRelativeCurrentNanos() { + return relativeCurrentNanosProvider.getAsLong(); + } + } + @Override protected void doExecute(Task task, SearchRequest searchRequest, ActionListener listener) { - // pure paranoia if time goes backwards we are at least positive - final long startTimeInMillis = Math.max(0, System.currentTimeMillis()); - ClusterState clusterState = clusterService.state(); - clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ); + final long absoluteStartMillis = System.currentTimeMillis(); + final long relativeStartNanos = System.nanoTime(); + final SearchTimeProvider timeProvider = + new SearchTimeProvider(absoluteStartMillis, relativeStartNanos, System::nanoTime); + + + final ClusterState clusterState = clusterService.state(); + final Map remoteClusterIndices = remoteClusterService.groupIndices(searchRequest.indicesOptions(), + searchRequest.indices(), idx -> indexNameExpressionResolver.hasIndexOrAlias(idx, clusterState)); + OriginalIndices localIndices = remoteClusterIndices.remove(RemoteClusterAware.LOCAL_CLUSTER_GROUP_KEY); + if (remoteClusterIndices.isEmpty()) { + executeSearch((SearchTask)task, timeProvider, searchRequest, localIndices, Collections.emptyList(), + (clusterName, nodeId) -> null, clusterState, Collections.emptyMap(), listener); + } else { + remoteClusterService.collectSearchShards(searchRequest.indicesOptions(), searchRequest.preference(), searchRequest.routing(), + remoteClusterIndices, ActionListener.wrap((searchShardsResponses) -> { + List remoteShardIterators = new ArrayList<>(); + Map remoteAliasFilters = new HashMap<>(); + BiFunction clusterNodeLookup = processRemoteShards(searchShardsResponses, + remoteClusterIndices, remoteShardIterators, remoteAliasFilters); + executeSearch((SearchTask)task, timeProvider, searchRequest, localIndices, remoteShardIterators, + clusterNodeLookup, clusterState, remoteAliasFilters, listener); + }, listener::onFailure)); + } + } + + static BiFunction processRemoteShards(Map searchShardsResponses, + Map remoteIndicesByCluster, + List remoteShardIterators, + Map aliasFilterMap) { + Map> clusterToNode = new HashMap<>(); + for (Map.Entry entry : searchShardsResponses.entrySet()) { + String clusterAlias = entry.getKey(); + ClusterSearchShardsResponse searchShardsResponse = entry.getValue(); + HashMap idToDiscoveryNode = new HashMap<>(); + clusterToNode.put(clusterAlias, idToDiscoveryNode); + for (DiscoveryNode remoteNode : searchShardsResponse.getNodes()) { + idToDiscoveryNode.put(remoteNode.getId(), remoteNode); + } + final Map indicesAndFilters = searchShardsResponse.getIndicesAndFilters(); + for (ClusterSearchShardsGroup clusterSearchShardsGroup : searchShardsResponse.getGroups()) { + //add the cluster name to the remote index names for indices disambiguation + //this ends up in the hits returned with the search 
response + ShardId shardId = clusterSearchShardsGroup.getShardId(); + Index remoteIndex = shardId.getIndex(); + Index index = new Index(RemoteClusterAware.buildRemoteIndexName(clusterAlias, remoteIndex.getName()), + remoteIndex.getUUID()); + final AliasFilter aliasFilter; + if (indicesAndFilters == null) { + aliasFilter = AliasFilter.EMPTY; + } else { + aliasFilter = indicesAndFilters.get(shardId.getIndexName()); + assert aliasFilter != null : "alias filter must not be null for index: " + shardId.getIndex(); + } + String[] aliases = aliasFilter.getAliases(); + String[] finalIndices = aliases.length == 0 ? new String[] {shardId.getIndexName()} : aliases; + // here we have to map the filters to the UUID since from now on we use the uuid for the lookup + aliasFilterMap.put(remoteIndex.getUUID(), aliasFilter); + final OriginalIndices originalIndices = remoteIndicesByCluster.get(clusterAlias); + assert originalIndices != null : "original indices are null for clusterAlias: " + clusterAlias; + SearchShardIterator shardIterator = new SearchShardIterator(clusterAlias, new ShardId(index, shardId.getId()), + Arrays.asList(clusterSearchShardsGroup.getShards()), new OriginalIndices(finalIndices, + originalIndices.indicesOptions())); + remoteShardIterators.add(shardIterator); + } + } + return (clusterAlias, nodeId) -> { + Map clusterNodes = clusterToNode.get(clusterAlias); + if (clusterNodes == null) { + throw new IllegalArgumentException("unknown remote cluster: " + clusterAlias); + } + return clusterNodes.get(nodeId); + }; + } + + private void executeSearch(SearchTask task, SearchTimeProvider timeProvider, SearchRequest searchRequest, OriginalIndices localIndices, + List remoteShardIterators, BiFunction remoteConnections, + ClusterState clusterState, Map remoteAliasMap, + ActionListener listener) { + clusterState.blocks().globalBlockedRaiseException(ClusterBlockLevel.READ); // TODO: I think startTime() should become part of ActionRequest and that should be used both for index name // date math expressions and $now in scripts. 
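processRemoteShards above builds one node map per remote cluster and returns a (clusterAlias, nodeId) lookup, with a null alias meaning the local cluster; executeSearch later wraps that lookup so a missing node fails fast. A small sketch of the two-level lookup (plain strings stand in for DiscoveryNode instances, and the cluster and node names are made up):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.BiFunction;

    public class ClusterNodeLookup {

        static BiFunction<String, String, String> build(Map<String, String> localNodes,
                                                        Map<String, Map<String, String>> remoteClusters) {
            return (clusterAlias, nodeId) -> {
                if (clusterAlias == null) {
                    return localNodes.get(nodeId);          // null alias means the local cluster
                }
                Map<String, String> clusterNodes = remoteClusters.get(clusterAlias);
                if (clusterNodes == null) {
                    throw new IllegalArgumentException("unknown remote cluster: " + clusterAlias);
                }
                return clusterNodes.get(nodeId);            // may be null: the caller decides how to fail
            };
        }

        public static void main(String[] args) {
            Map<String, String> local = new HashMap<>();
            local.put("nodeA", "local node A");

            Map<String, Map<String, String>> remotes = new HashMap<>();
            remotes.put("cluster_two", new HashMap<>());
            remotes.get("cluster_two").put("nodeB", "remote node B");

            BiFunction<String, String, String> lookup = build(local, remotes);
            System.out.println(lookup.apply(null, "nodeA"));
            System.out.println(lookup.apply("cluster_two", "nodeB"));
        }
    }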
This way all apis will deal with now in the same way instead // of just for the _search api - Index[] indices = indexNameExpressionResolver.concreteIndices(clusterState, searchRequest.indicesOptions(), - startTimeInMillis, searchRequest.indices()); - Map aliasFilter = buildPerIndexAliasFilter(searchRequest, clusterState, indices); + final Index[] indices; + if (localIndices.indices().length == 0 && remoteShardIterators.size() > 0) { + indices = Index.EMPTY_ARRAY; // don't search on _all if only remote indices were specified + } else { + indices = indexNameExpressionResolver.concreteIndices(clusterState, searchRequest.indicesOptions(), + timeProvider.getAbsoluteStartMillis(), localIndices.indices()); + } + Map aliasFilter = buildPerIndexAliasFilter(searchRequest, clusterState, indices, remoteAliasMap); Map> routingMap = indexNameExpressionResolver.resolveSearchRouting(clusterState, searchRequest.routing(), searchRequest.indices()); String[] concreteIndices = new String[indices.length]; for (int i = 0; i < indices.length; i++) { concreteIndices[i] = indices[i].getName(); } - GroupShardsIterator shardIterators = clusterService.operationRouting().searchShards(clusterState, concreteIndices, routingMap, - searchRequest.preference()); + GroupShardsIterator localShardsIterator = clusterService.operationRouting().searchShards(clusterState, + concreteIndices, routingMap, searchRequest.preference()); + GroupShardsIterator shardIterators = mergeShardsIterators(localShardsIterator, localIndices, + remoteShardIterators); + failIfOverShardCountLimit(clusterService, shardIterators.size()); Map concreteIndexBoosts = resolveIndexBoosts(searchRequest, clusterState); @@ -136,13 +283,12 @@ protected void doExecute(Task task, SearchRequest searchRequest, ActionListener< // optimize search type for cases where there is only one shard group to search on if (shardIterators.size() == 1) { // if we only have one group, then we always want Q_A_F, no need for DFS, and no need to do THEN since we hit one shard - searchRequest.searchType(QUERY_AND_FETCH); + searchRequest.searchType(QUERY_THEN_FETCH); } if (searchRequest.isSuggestOnly()) { // disable request cache if we have only suggest searchRequest.requestCache(false); switch (searchRequest.searchType()) { - case DFS_QUERY_AND_FETCH: case DFS_QUERY_THEN_FETCH: // convert to Q_T_F if we have only suggest searchRequest.searchType(QUERY_THEN_FETCH); @@ -150,43 +296,57 @@ protected void doExecute(Task task, SearchRequest searchRequest, ActionListener< } } - searchAsyncAction((SearchTask)task, searchRequest, shardIterators, startTimeInMillis, clusterState, + final DiscoveryNodes nodes = clusterState.nodes(); + BiFunction connectionLookup = (clusterName, nodeId) -> { + final DiscoveryNode discoveryNode = clusterName == null ? 
nodes.get(nodeId) : remoteConnections.apply(clusterName, nodeId); + if (discoveryNode == null) { + throw new IllegalStateException("no node found for id: " + nodeId); + } + return searchTransportService.getConnection(clusterName, discoveryNode); + }; + + searchAsyncAction(task, searchRequest, shardIterators, timeProvider, connectionLookup, clusterState.version(), Collections.unmodifiableMap(aliasFilter), concreteIndexBoosts, listener).start(); } + static GroupShardsIterator mergeShardsIterators(GroupShardsIterator localShardsIterator, + OriginalIndices localIndices, + List remoteShardIterators) { + List shards = new ArrayList<>(); + for (SearchShardIterator shardIterator : remoteShardIterators) { + shards.add(shardIterator); + } + for (ShardIterator shardIterator : localShardsIterator) { + shards.add(new SearchShardIterator(null, shardIterator.shardId(), shardIterator.getShardRoutings(), localIndices)); + } + return new GroupShardsIterator<>(shards); + } + @Override protected final void doExecute(SearchRequest searchRequest, ActionListener listener) { throw new UnsupportedOperationException("the task parameter is required"); } - private AbstractSearchAsyncAction searchAsyncAction(SearchTask task, SearchRequest searchRequest, GroupShardsIterator shardIterators, - long startTime, ClusterState state, Map aliasFilter, + private AbstractSearchAsyncAction searchAsyncAction(SearchTask task, SearchRequest searchRequest, + GroupShardsIterator shardIterators, + SearchTimeProvider timeProvider, + BiFunction connectionLookup, + long clusterStateVersion, Map aliasFilter, Map concreteIndexBoosts, ActionListener listener) { - final Function nodesLookup = state.nodes()::get; - final long clusterStateVersion = state.version(); Executor executor = threadPool.executor(ThreadPool.Names.SEARCH); AbstractSearchAsyncAction searchAsyncAction; switch(searchRequest.searchType()) { case DFS_QUERY_THEN_FETCH: - searchAsyncAction = new SearchDfsQueryThenFetchAsyncAction(logger, searchTransportService, nodesLookup, - aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime, - clusterStateVersion, task); - break; - case QUERY_THEN_FETCH: - searchAsyncAction = new SearchQueryThenFetchAsyncAction(logger, searchTransportService, nodesLookup, - aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime, - clusterStateVersion, task); - break; - case DFS_QUERY_AND_FETCH: - searchAsyncAction = new SearchDfsQueryAndFetchAsyncAction(logger, searchTransportService, nodesLookup, - aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime, - clusterStateVersion, task); + searchAsyncAction = new SearchDfsQueryThenFetchAsyncAction(logger, searchTransportService, connectionLookup, + aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, + timeProvider, clusterStateVersion, task); break; case QUERY_AND_FETCH: - searchAsyncAction = new SearchQueryAndFetchAsyncAction(logger, searchTransportService, nodesLookup, - aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, startTime, - clusterStateVersion, task); + case QUERY_THEN_FETCH: + searchAsyncAction = new SearchQueryThenFetchAsyncAction(logger, searchTransportService, connectionLookup, + aliasFilter, concreteIndexBoosts, searchPhaseController, executor, searchRequest, listener, shardIterators, + 
timeProvider, clusterStateVersion, task); break; default: throw new IllegalStateException("Unknown search type: [" + searchRequest.searchType() + "]"); @@ -194,7 +354,7 @@ private AbstractSearchAsyncAction searchAsyncAction(SearchTask task, SearchReque return searchAsyncAction; } - private void failIfOverShardCountLimit(ClusterService clusterService, int shardCount) { + private static void failIfOverShardCountLimit(ClusterService clusterService, int shardCount) { final long shardCountLimit = clusterService.getClusterSettings().get(SHARD_COUNT_LIMIT_SETTING); if (shardCount > shardCountLimit) { throw new IllegalArgumentException("Trying to query " + shardCount + " shards, which is over the limit of " diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportSearchHelper.java b/core/src/main/java/org/elasticsearch/action/search/TransportSearchHelper.java index a09a651086bed..e494bb6768d65 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportSearchHelper.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportSearchHelper.java @@ -21,11 +21,9 @@ import org.apache.lucene.store.ByteArrayDataInput; import org.apache.lucene.store.RAMOutputStream; -import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.common.util.concurrent.AtomicArray; import org.elasticsearch.search.SearchPhaseResult; import org.elasticsearch.search.internal.InternalScrollSearchRequest; -import org.elasticsearch.search.internal.ShardSearchTransportRequest; import java.io.IOException; import java.util.Base64; @@ -36,24 +34,13 @@ static InternalScrollSearchRequest internalScrollSearchRequest(long id, SearchSc return new InternalScrollSearchRequest(request, id); } - static String buildScrollId(SearchType searchType, AtomicArray searchPhaseResults) throws IOException { - if (searchType == SearchType.DFS_QUERY_THEN_FETCH || searchType == SearchType.QUERY_THEN_FETCH) { - return buildScrollId(ParsedScrollId.QUERY_THEN_FETCH_TYPE, searchPhaseResults); - } else if (searchType == SearchType.QUERY_AND_FETCH || searchType == SearchType.DFS_QUERY_AND_FETCH) { - return buildScrollId(ParsedScrollId.QUERY_AND_FETCH_TYPE, searchPhaseResults); - } else { - throw new IllegalStateException("search_type [" + searchType + "] not supported"); - } - } - - static String buildScrollId(String type, AtomicArray searchPhaseResults) throws IOException { + static String buildScrollId(AtomicArray searchPhaseResults) throws IOException { try (RAMOutputStream out = new RAMOutputStream()) { - out.writeString(type); + out.writeString(searchPhaseResults.length() == 1 ? 
ParsedScrollId.QUERY_AND_FETCH_TYPE : ParsedScrollId.QUERY_THEN_FETCH_TYPE); out.writeVInt(searchPhaseResults.asList().size()); - for (AtomicArray.Entry entry : searchPhaseResults.asList()) { - SearchPhaseResult searchPhaseResult = entry.value; - out.writeLong(searchPhaseResult.id()); - out.writeString(searchPhaseResult.shardTarget().nodeId()); + for (SearchPhaseResult searchPhaseResult : searchPhaseResults.asList()) { + out.writeLong(searchPhaseResult.getRequestId()); + out.writeString(searchPhaseResult.getSearchShardTarget().getNodeId()); } byte[] bytes = new byte[(int) out.getFilePointer()]; out.writeTo(bytes, 0); diff --git a/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java b/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java index fdd4fc2e9e186..e334b95180122 100644 --- a/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java +++ b/core/src/main/java/org/elasticsearch/action/search/TransportSearchScrollAction.java @@ -60,20 +60,20 @@ protected final void doExecute(SearchScrollRequest request, ActionListener listener) { try { ParsedScrollId scrollId = parseScrollId(request.scrollId()); - AbstractAsyncAction action; + Runnable action; switch (scrollId.getType()) { case QUERY_THEN_FETCH_TYPE: action = new SearchScrollQueryThenFetchAsyncAction(logger, clusterService, searchTransportService, searchPhaseController, request, (SearchTask)task, scrollId, listener); break; - case QUERY_AND_FETCH_TYPE: + case QUERY_AND_FETCH_TYPE: // TODO can we get rid of this? action = new SearchScrollQueryAndFetchAsyncAction(logger, clusterService, searchTransportService, searchPhaseController, request, (SearchTask)task, scrollId, listener); break; default: throw new IllegalArgumentException("Scroll id type [" + scrollId.getType() + "] unrecognized"); } - action.start(); + action.run(); } catch (Exception e) { listener.onFailure(e); } diff --git a/core/src/main/java/org/elasticsearch/action/support/AbstractListenableActionFuture.java b/core/src/main/java/org/elasticsearch/action/support/AbstractListenableActionFuture.java deleted file mode 100644 index d6e06613d59b4..0000000000000 --- a/core/src/main/java/org/elasticsearch/action/support/AbstractListenableActionFuture.java +++ /dev/null @@ -1,106 +0,0 @@ -/* - * Licensed to Elasticsearch under one or more contributor - * license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright - * ownership. Elasticsearch licenses this file to you under - * the Apache License, Version 2.0 (the "License"); you may - * not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, - * software distributed under the License is distributed on an - * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY - * KIND, either express or implied. See the License for the - * specific language governing permissions and limitations - * under the License. 
- */ - -package org.elasticsearch.action.support; - -import org.apache.logging.log4j.Logger; -import org.elasticsearch.action.ActionListener; -import org.elasticsearch.action.ListenableActionFuture; -import org.elasticsearch.common.logging.Loggers; -import org.elasticsearch.threadpool.ThreadPool; - -import java.util.ArrayList; -import java.util.List; - -public abstract class AbstractListenableActionFuture extends AdapterActionFuture implements ListenableActionFuture { - - private static final Logger logger = Loggers.getLogger(AbstractListenableActionFuture.class); - - final ThreadPool threadPool; - volatile Object listeners; - boolean executedListeners = false; - - protected AbstractListenableActionFuture(ThreadPool threadPool) { - this.threadPool = threadPool; - } - - public ThreadPool threadPool() { - return threadPool; - } - - @Override - public void addListener(final ActionListener listener) { - internalAddListener(listener); - } - - public void internalAddListener(ActionListener listener) { - listener = new ThreadedActionListener<>(logger, threadPool, ThreadPool.Names.LISTENER, listener, false); - boolean executeImmediate = false; - synchronized (this) { - if (executedListeners) { - executeImmediate = true; - } else { - Object listeners = this.listeners; - if (listeners == null) { - listeners = listener; - } else if (listeners instanceof List) { - ((List) this.listeners).add(listener); - } else { - Object orig = listeners; - listeners = new ArrayList<>(2); - ((List) listeners).add(orig); - ((List) listeners).add(listener); - } - this.listeners = listeners; - } - } - if (executeImmediate) { - executeListener(listener); - } - } - - @Override - protected void done() { - super.done(); - synchronized (this) { - executedListeners = true; - } - Object listeners = this.listeners; - if (listeners != null) { - if (listeners instanceof List) { - List list = (List) listeners; - for (Object listener : list) { - executeListener((ActionListener) listener); - } - } else { - executeListener((ActionListener) listeners); - } - } - } - - private void executeListener(final ActionListener listener) { - try { - // we use a timeout of 0 to by pass assertion forbidding to call actionGet() (blocking) on a network thread. - // here we know we will never block - listener.onResponse(actionGet(0)); - } catch (Exception e) { - listener.onFailure(e); - } - } -} diff --git a/core/src/main/java/org/elasticsearch/action/support/ActionFilter.java b/core/src/main/java/org/elasticsearch/action/support/ActionFilter.java index 880d173b2fe6b..3e12d0cc84223 100644 --- a/core/src/main/java/org/elasticsearch/action/support/ActionFilter.java +++ b/core/src/main/java/org/elasticsearch/action/support/ActionFilter.java @@ -47,7 +47,7 @@ void apply(Task * filter chain. This base class should serve any action filter implementations that doesn't require * to apply async filtering logic. 
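/*
 * A side note on the ActionFilter.Simple declaration change just below: because ActionFilter is an
 * interface, any nested type it declares is implicitly public and static, so dropping the explicit
 * "public abstract static" modifiers does not change visibility or nesting semantics. A minimal,
 * self-contained illustration (plain Java, hypothetical names, not the Elasticsearch classes):
 */
interface Filter {
    boolean accept(String action);

    // implicitly public and static, exactly as if it were declared "public abstract static class Base"
    abstract class Base implements Filter {
        @Override
        public boolean accept(String action) {
            return true; // accept everything by default; subclasses override as needed
        }
    }
}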
*/ - public abstract static class Simple extends AbstractComponent implements ActionFilter { + abstract class Simple extends AbstractComponent implements ActionFilter { protected Simple(Settings settings) { super(settings); diff --git a/core/src/main/java/org/elasticsearch/action/support/ActionFilterChain.java b/core/src/main/java/org/elasticsearch/action/support/ActionFilterChain.java index 56ba070b1aa2c..97e0c535bffdf 100644 --- a/core/src/main/java/org/elasticsearch/action/support/ActionFilterChain.java +++ b/core/src/main/java/org/elasticsearch/action/support/ActionFilterChain.java @@ -33,5 +33,5 @@ public interface ActionFilterChain listener); + void proceed(Task task, String action, Request request, ActionListener listener); } diff --git a/core/src/main/java/org/elasticsearch/action/support/AdapterActionFuture.java b/core/src/main/java/org/elasticsearch/action/support/AdapterActionFuture.java index b2167c3051bcb..4c7698e82e04d 100644 --- a/core/src/main/java/org/elasticsearch/action/support/AdapterActionFuture.java +++ b/core/src/main/java/org/elasticsearch/action/support/AdapterActionFuture.java @@ -38,6 +38,7 @@ public T actionGet() { try { return get(); } catch (InterruptedException e) { + Thread.currentThread().interrupt(); throw new IllegalStateException("Future got interrupted", e); } catch (ExecutionException e) { throw rethrowExecutionException(e); @@ -66,6 +67,7 @@ public T actionGet(long timeout, TimeUnit unit) { } catch (TimeoutException e) { throw new ElasticsearchTimeoutException(e); } catch (InterruptedException e) { + Thread.currentThread().interrupt(); throw new IllegalStateException("Future got interrupted", e); } catch (ExecutionException e) { throw rethrowExecutionException(e); @@ -100,4 +102,5 @@ public void onFailure(Exception e) { } protected abstract T convert(L listenerResponse); + } diff --git a/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java b/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java index a9a5afed9f315..2e442e2cc141c 100644 --- a/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java +++ b/core/src/main/java/org/elasticsearch/action/support/AutoCreateIndex.java @@ -29,6 +29,7 @@ import org.elasticsearch.common.settings.Setting; import org.elasticsearch.common.settings.Setting.Property; import org.elasticsearch.common.settings.Settings; +import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.mapper.MapperService; import java.util.ArrayList; @@ -63,18 +64,20 @@ public boolean needToCheck() { /** * Should the index be auto created? 
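/*
 * The AdapterActionFuture change above re-asserts the thread's interrupt status before translating
 * InterruptedException into an unchecked exception. A minimal standalone sketch of that idiom in
 * plain Java (not the Elasticsearch API): restoring the flag lets callers further up the stack still
 * observe that the thread was interrupted, even though the checked exception was rethrown unchecked.
 */
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

final class BlockingGet {
    static <T> T getUnchecked(Future<T> future) {
        try {
            return future.get();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag for the caller
            throw new IllegalStateException("Future got interrupted", e);
        } catch (ExecutionException e) {
            throw new IllegalStateException(e.getCause());
        }
    }
}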
+ * @throws IndexNotFoundException if the the index doesn't exist and shouldn't be auto created */ public boolean shouldAutoCreate(String index, ClusterState state) { + if (resolver.hasIndexOrAlias(index, state)) { + return false; + } // One volatile read, so that all checks are done against the same instance: final AutoCreate autoCreate = this.autoCreate; if (autoCreate.autoCreateIndex == false) { - return false; + throw new IndexNotFoundException("no such index and [" + AUTO_CREATE_INDEX_SETTING.getKey() + "] is [false]", index); } if (dynamicMappingDisabled) { - return false; - } - if (resolver.hasIndexOrAlias(index, state)) { - return false; + throw new IndexNotFoundException("no such index and [" + MapperService.INDEX_MAPPER_DYNAMIC_SETTING.getKey() + "] is [false]", + index); } // matches not set, default value of "true" if (autoCreate.expressions.isEmpty()) { @@ -84,10 +87,15 @@ public boolean shouldAutoCreate(String index, ClusterState state) { String indexExpression = expression.v1(); boolean include = expression.v2(); if (Regex.simpleMatch(indexExpression, index)) { - return include; + if (include) { + return true; + } + throw new IndexNotFoundException("no such index and [" + AUTO_CREATE_INDEX_SETTING.getKey() + "] contains [-" + + indexExpression + "] which forbids automatic creation of the index", index); } } - return false; + throw new IndexNotFoundException("no such index and [" + AUTO_CREATE_INDEX_SETTING.getKey() + "] ([" + autoCreate + + "]) doesn't match", index); } AutoCreate getAutoCreate() { @@ -101,29 +109,33 @@ void setAutoCreate(AutoCreate autoCreate) { static class AutoCreate { private final boolean autoCreateIndex; private final List> expressions; + private final String string; private AutoCreate(String value) { boolean autoCreateIndex; List> expressions = new ArrayList<>(); try { - autoCreateIndex = Booleans.parseBooleanExact(value); + autoCreateIndex = Booleans.parseBoolean(value); } catch (IllegalArgumentException ex) { try { String[] patterns = Strings.commaDelimitedListToStringArray(value); for (String pattern : patterns) { if (pattern == null || pattern.trim().length() == 0) { - throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] must be either [true, false, or a comma separated list of index patterns]"); + throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] must " + + "be either [true, false, or a comma separated list of index patterns]"); } pattern = pattern.trim(); Tuple expression; if (pattern.startsWith("-")) { if (pattern.length() == 1) { - throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] must contain an index name after [-]"); + throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] " + + "must contain an index name after [-]"); } expression = new Tuple<>(pattern.substring(1), false); } else if(pattern.startsWith("+")) { if (pattern.length() == 1) { - throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] must contain an index name after [+]"); + throw new IllegalArgumentException("Can't parse [" + value + "] for setting [action.auto_create_index] " + + "must contain an index name after [+]"); } expression = new Tuple<>(pattern.substring(1), true); } else { @@ -139,6 +151,7 @@ private AutoCreate(String value) { } this.expressions = expressions; this.autoCreateIndex = autoCreateIndex; + this.string = 
value; } boolean isAutoCreateIndex() { @@ -148,5 +161,10 @@ boolean isAutoCreateIndex() { List> getExpressions() { return expressions; } + + @Override + public String toString() { + return string; + } } } diff --git a/core/src/main/java/org/elasticsearch/action/support/ContextPreservingActionListener.java b/core/src/main/java/org/elasticsearch/action/support/ContextPreservingActionListener.java new file mode 100644 index 0000000000000..72f1e7c1d6643 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/support/ContextPreservingActionListener.java @@ -0,0 +1,61 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.support; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.common.util.concurrent.ThreadContext; + +import java.util.function.Supplier; + +/** + * Restores the given {@link org.elasticsearch.common.util.concurrent.ThreadContext.StoredContext} + * once the listener is invoked + */ +public final class ContextPreservingActionListener implements ActionListener { + + private final ActionListener delegate; + private final Supplier context; + + public ContextPreservingActionListener(Supplier contextSupplier, ActionListener delegate) { + this.delegate = delegate; + this.context = contextSupplier; + } + + @Override + public void onResponse(R r) { + try (ThreadContext.StoredContext ignore = context.get()) { + delegate.onResponse(r); + } + } + + @Override + public void onFailure(Exception e) { + try (ThreadContext.StoredContext ignore = context.get()) { + delegate.onFailure(e); + } + } + + /** + * Wraps the provided action listener in a {@link ContextPreservingActionListener} that will + * also copy the response headers when the {@link ThreadContext.StoredContext} is closed + */ + public static ContextPreservingActionListener wrapPreservingContext(ActionListener listener, ThreadContext threadContext) { + return new ContextPreservingActionListener<>(threadContext.newRestorableContext(true), listener); + } +} diff --git a/core/src/main/java/org/elasticsearch/action/support/DefaultShardOperationFailedException.java b/core/src/main/java/org/elasticsearch/action/support/DefaultShardOperationFailedException.java index 3f7df803e2439..2ced9145674a2 100644 --- a/core/src/main/java/org/elasticsearch/action/support/DefaultShardOperationFailedException.java +++ b/core/src/main/java/org/elasticsearch/action/support/DefaultShardOperationFailedException.java @@ -125,7 +125,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws if (reason != null) { builder.field("reason"); builder.startObject(); - ElasticsearchException.toXContent(builder, params, reason); + ElasticsearchException.generateThrowableXContent(builder, params, reason); builder.endObject(); } return 
builder; diff --git a/core/src/main/java/org/elasticsearch/action/support/GroupedActionListener.java b/core/src/main/java/org/elasticsearch/action/support/GroupedActionListener.java new file mode 100644 index 0000000000000..ed9b7c8d15d60 --- /dev/null +++ b/core/src/main/java/org/elasticsearch/action/support/GroupedActionListener.java @@ -0,0 +1,81 @@ +/* + * Licensed to Elasticsearch under one or more contributor + * license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright + * ownership. Elasticsearch licenses this file to you under + * the Apache License, Version 2.0 (the "License"); you may + * not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ +package org.elasticsearch.action.support; + +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.common.util.concurrent.AtomicArray; +import org.elasticsearch.common.util.concurrent.CountDown; + +import java.util.Collection; +import java.util.Collections; +import java.util.List; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.concurrent.atomic.AtomicReference; + +/** + * An action listener that delegates it's results to another listener once + * it has received one or more failures or N results. This allows synchronous + * tasks to be forked off in a loop with the same listener and respond to a + * higher level listener once all tasks responded. 
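/*
 * A minimal usage sketch for the GroupedActionListener introduced here, based only on the constructor
 * and ActionListener methods shown in this diff: fork N tasks against the same grouped listener and
 * let the delegate fire once all N have responded (or after the first failure). The delegate, group
 * size, and result values below are made up for illustration.
 */
import java.util.Collection;
import java.util.Collections;

import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.GroupedActionListener;

final class GroupedListenerExample {
    public static void main(String[] args) {
        ActionListener<Collection<String>> delegate = new ActionListener<Collection<String>>() {
            @Override
            public void onResponse(Collection<String> results) {
                System.out.println("all tasks responded: " + results);
            }

            @Override
            public void onFailure(Exception e) {
                System.err.println("at least one task failed: " + e);
            }
        };
        // groupSize = 3, no default results appended to the collected ones
        GroupedActionListener<String> grouped =
                new GroupedActionListener<>(delegate, 3, Collections.<String>emptyList());
        for (int i = 0; i < 3; i++) {
            grouped.onResponse("result-" + i); // the third call triggers the delegate
        }
    }
}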
+ */ +public final class GroupedActionListener implements ActionListener { + private final CountDown countDown; + private final AtomicInteger pos = new AtomicInteger(); + private final AtomicArray results; + private final ActionListener> delegate; + private final Collection defaults; + private final AtomicReference failure = new AtomicReference<>(); + + /** + * Creates a new listener + * @param delegate the delegate listener + * @param groupSize the group size + */ + public GroupedActionListener(ActionListener> delegate, int groupSize, + Collection defaults) { + results = new AtomicArray<>(groupSize); + countDown = new CountDown(groupSize); + this.delegate = delegate; + this.defaults = defaults; + } + + @Override + public void onResponse(T element) { + results.setOnce(pos.incrementAndGet() - 1, element); + if (countDown.countDown()) { + if (failure.get() != null) { + delegate.onFailure(failure.get()); + } else { + List collect = this.results.asList(); + collect.addAll(defaults); + delegate.onResponse(Collections.unmodifiableList(collect)); + } + } + } + + @Override + public void onFailure(Exception e) { + if (failure.compareAndSet(null, e) == false) { + failure.get().addSuppressed(e); + } + if (countDown.countDown()) { + delegate.onFailure(failure.get()); + } + } +} diff --git a/core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java b/core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java index 2bc49f7e9f869..9ab4ee80ccf9b 100644 --- a/core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java +++ b/core/src/main/java/org/elasticsearch/action/support/IndicesOptions.java @@ -19,6 +19,7 @@ package org.elasticsearch.action.support; +import org.elasticsearch.Version; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.rest.RestRequest; @@ -26,7 +27,7 @@ import java.io.IOException; import java.util.Map; -import static org.elasticsearch.common.xcontent.support.XContentMapValues.lenientNodeBooleanValue; +import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeBooleanValue; import static org.elasticsearch.common.xcontent.support.XContentMapValues.nodeStringArrayValue; /** @@ -43,6 +44,7 @@ public class IndicesOptions { private static final byte EXPAND_WILDCARDS_CLOSED = 8; private static final byte FORBID_ALIASES_TO_MULTIPLE_INDICES = 16; private static final byte FORBID_CLOSED_INDICES = 32; + private static final byte IGNORE_ALIASES = 64; private static final byte STRICT_EXPAND_OPEN = 6; private static final byte LENIENT_EXPAND_OPEN = 7; @@ -51,10 +53,10 @@ public class IndicesOptions { private static final byte STRICT_SINGLE_INDEX_NO_EXPAND_FORBID_CLOSED = 48; static { - byte max = 1 << 6; + short max = 1 << 7; VALUES = new IndicesOptions[max]; - for (byte id = 0; id < max; id++) { - VALUES[id] = new IndicesOptions(id); + for (short id = 0; id < max; id++) { + VALUES[id] = new IndicesOptions((byte)id); } } @@ -106,18 +108,31 @@ public boolean forbidClosedIndices() { * @return whether aliases pointing to multiple indices are allowed */ public boolean allowAliasesToMultipleIndices() { - //true is default here, for bw comp we keep the first 16 values - //in the array same as before + the default value for the new flag + // true is default here, for bw comp we keep the first 16 values + // in the array same as before + the default value for the new flag return (id & FORBID_ALIASES_TO_MULTIPLE_INDICES) == 0; } + /** + * @return whether aliases should 
be ignored (when resolving a wildcard) + */ + public boolean ignoreAliases() { + return (id & IGNORE_ALIASES) != 0; + } + public void writeIndicesOptions(StreamOutput out) throws IOException { - out.write(id); + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha2)) { + out.write(id); + } else { + // if we are talking to a node that doesn't support the newly added flag (ignoreAliases) + // flip to 0 all the bits starting from the 7th + out.write(id & 0x3f); + } } public static IndicesOptions readIndicesOptions(StreamInput in) throws IOException { - //if we read from a node that doesn't support the newly added flag (allowAliasesToMultipleIndices) - //we just receive the old corresponding value with the new flag set to true (default) + //if we read from a node that doesn't support the newly added flag (ignoreAliases) + //we just receive the old corresponding value with the new flag set to false (default) byte id = in.readByte(); if (id >= VALUES.length) { throw new IllegalArgumentException("No valid missing index type id: " + id); @@ -133,8 +148,16 @@ public static IndicesOptions fromOptions(boolean ignoreUnavailable, boolean allo return fromOptions(ignoreUnavailable, allowNoIndices, expandToOpenIndices, expandToClosedIndices, defaultOptions.allowAliasesToMultipleIndices(), defaultOptions.forbidClosedIndices()); } - static IndicesOptions fromOptions(boolean ignoreUnavailable, boolean allowNoIndices, boolean expandToOpenIndices, boolean expandToClosedIndices, boolean allowAliasesToMultipleIndices, boolean forbidClosedIndices) { - byte id = toByte(ignoreUnavailable, allowNoIndices, expandToOpenIndices, expandToClosedIndices, allowAliasesToMultipleIndices, forbidClosedIndices); + public static IndicesOptions fromOptions(boolean ignoreUnavailable, boolean allowNoIndices, boolean expandToOpenIndices, + boolean expandToClosedIndices, boolean allowAliasesToMultipleIndices, boolean forbidClosedIndices) { + return fromOptions(ignoreUnavailable, allowNoIndices, expandToOpenIndices, expandToClosedIndices, allowAliasesToMultipleIndices, + forbidClosedIndices, false); + } + + public static IndicesOptions fromOptions(boolean ignoreUnavailable, boolean allowNoIndices, boolean expandToOpenIndices, + boolean expandToClosedIndices, boolean allowAliasesToMultipleIndices, boolean forbidClosedIndices, boolean ignoreAliases) { + byte id = toByte(ignoreUnavailable, allowNoIndices, expandToOpenIndices, expandToClosedIndices, allowAliasesToMultipleIndices, + forbidClosedIndices, ignoreAliases); return VALUES[id]; } @@ -195,8 +218,8 @@ public static IndicesOptions fromParameters(Object wildcardsString, Object ignor //note that allowAliasesToMultipleIndices is not exposed, always true (only for internal use) return fromOptions( - lenientNodeBooleanValue(ignoreUnavailableString, defaultSettings.ignoreUnavailable()), - lenientNodeBooleanValue(allowNoIndicesString, defaultSettings.allowNoIndices()), + nodeBooleanValue(ignoreUnavailableString, "ignore_unavailable", defaultSettings.ignoreUnavailable()), + nodeBooleanValue(allowNoIndicesString, "allow_no_indices", defaultSettings.allowNoIndices()), expandWildcardsOpen, expandWildcardsClosed, defaultSettings.allowAliasesToMultipleIndices(), @@ -246,7 +269,7 @@ public static IndicesOptions lenientExpandOpen() { } private static byte toByte(boolean ignoreUnavailable, boolean allowNoIndices, boolean wildcardExpandToOpen, - boolean wildcardExpandToClosed, boolean allowAliasesToMultipleIndices, boolean forbidClosedIndices) { + boolean wildcardExpandToClosed, boolean 
allowAliasesToMultipleIndices, boolean forbidClosedIndices, boolean ignoreAliases) { byte id = 0; if (ignoreUnavailable) { id |= IGNORE_UNAVAILABLE; @@ -268,6 +291,9 @@ private static byte toByte(boolean ignoreUnavailable, boolean allowNoIndices, bo if (forbidClosedIndices) { id |= FORBID_CLOSED_INDICES; } + if (ignoreAliases) { + id |= IGNORE_ALIASES; + } return id; } @@ -279,8 +305,9 @@ public String toString() { ", allow_no_indices=" + allowNoIndices() + ", expand_wildcards_open=" + expandWildcardsOpen() + ", expand_wildcards_closed=" + expandWildcardsClosed() + - ", allow_alisases_to_multiple_indices=" + allowAliasesToMultipleIndices() + + ", allow_aliases_to_multiple_indices=" + allowAliasesToMultipleIndices() + ", forbid_closed_indices=" + forbidClosedIndices() + + ", ignore_aliases=" + ignoreAliases() + ']'; } } diff --git a/core/src/main/java/org/elasticsearch/action/support/PlainListenableActionFuture.java b/core/src/main/java/org/elasticsearch/action/support/PlainListenableActionFuture.java index c9b0cf9d82f43..749bf1fea019d 100644 --- a/core/src/main/java/org/elasticsearch/action/support/PlainListenableActionFuture.java +++ b/core/src/main/java/org/elasticsearch/action/support/PlainListenableActionFuture.java @@ -19,17 +19,120 @@ package org.elasticsearch.action.support; +import org.apache.logging.log4j.Logger; +import org.elasticsearch.action.ActionListener; +import org.elasticsearch.action.ListenableActionFuture; +import org.elasticsearch.common.logging.Loggers; import org.elasticsearch.threadpool.ThreadPool; -public class PlainListenableActionFuture extends AbstractListenableActionFuture { +import java.util.ArrayList; +import java.util.List; - public PlainListenableActionFuture(ThreadPool threadPool) { - super(threadPool); +public class PlainListenableActionFuture extends AdapterActionFuture implements ListenableActionFuture { + + volatile Object listeners; + boolean executedListeners = false; + + private PlainListenableActionFuture() {} + + /** + * This method returns a listenable future. The listeners will be called on completion of the future. + * The listeners will be executed by the same thread that completes the future. + * + * @param the result of the future + * @return a listenable future + */ + public static PlainListenableActionFuture newListenableFuture() { + return new PlainListenableActionFuture<>(); + } + + /** + * This method returns a listenable future. The listeners will be called on completion of the future. + * The listeners will be executed on the LISTENER thread pool. 
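/*
 * A small usage sketch for the refactored PlainListenableActionFuture, based on the factory methods
 * added here: newListenableFuture() runs listeners on the thread that completes the future, while
 * newDispatchingListenableFuture(threadPool) dispatches them to the LISTENER pool. The sketch assumes
 * the future can still be completed through the ActionListener#onResponse it inherits from
 * AdapterActionFuture, as in the existing code.
 */
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.action.support.PlainListenableActionFuture;

final class ListenableFutureExample {
    public static void main(String[] args) {
        PlainListenableActionFuture<String> future = PlainListenableActionFuture.newListenableFuture();
        future.addListener(new ActionListener<String>() {
            @Override
            public void onResponse(String value) {
                System.out.println("completed with: " + value); // runs on the completing thread
            }

            @Override
            public void onFailure(Exception e) {
                System.err.println("completed exceptionally: " + e);
            }
        });
        future.onResponse("done"); // completes the future and invokes the listener above
    }
}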
+ * @param threadPool the thread pool used to execute listeners + * @param the result of the future + * @return a listenable future + */ + public static PlainListenableActionFuture newDispatchingListenableFuture(ThreadPool threadPool) { + return new DispatchingListenableActionFuture<>(threadPool); } @Override - protected T convert(T response) { - return response; + public void addListener(final ActionListener listener) { + internalAddListener(listener); } + @Override + protected void done() { + super.done(); + synchronized (this) { + executedListeners = true; + } + Object listeners = this.listeners; + if (listeners != null) { + if (listeners instanceof List) { + List list = (List) listeners; + for (Object listener : list) { + executeListener((ActionListener) listener); + } + } else { + executeListener((ActionListener) listeners); + } + } + } + + @Override + protected T convert(T listenerResponse) { + return listenerResponse; + } + + private void internalAddListener(ActionListener listener) { + boolean executeImmediate = false; + synchronized (this) { + if (executedListeners) { + executeImmediate = true; + } else { + Object listeners = this.listeners; + if (listeners == null) { + listeners = listener; + } else if (listeners instanceof List) { + ((List) this.listeners).add(listener); + } else { + Object orig = listeners; + listeners = new ArrayList<>(2); + ((List) listeners).add(orig); + ((List) listeners).add(listener); + } + this.listeners = listeners; + } + } + if (executeImmediate) { + executeListener(listener); + } + } + + private void executeListener(final ActionListener listener) { + try { + // we use a timeout of 0 to by pass assertion forbidding to call actionGet() (blocking) on a network thread. + // here we know we will never block + listener.onResponse(actionGet(0)); + } catch (Exception e) { + listener.onFailure(e); + } + } + + private static final class DispatchingListenableActionFuture extends PlainListenableActionFuture { + + private static final Logger logger = Loggers.getLogger(DispatchingListenableActionFuture.class); + private final ThreadPool threadPool; + + private DispatchingListenableActionFuture(ThreadPool threadPool) { + this.threadPool = threadPool; + } + + @Override + public void addListener(final ActionListener listener) { + super.addListener(new ThreadedActionListener<>(logger, threadPool, ThreadPool.Names.LISTENER, listener, false)); + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/support/TransportAction.java b/core/src/main/java/org/elasticsearch/action/support/TransportAction.java index e8f4d943e9535..22edbfca2dc12 100644 --- a/core/src/main/java/org/elasticsearch/action/support/TransportAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/TransportAction.java @@ -26,7 +26,6 @@ import org.elasticsearch.action.ActionRequestValidationException; import org.elasticsearch.action.ActionResponse; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.common.ParseFieldMatcher; import org.elasticsearch.common.component.AbstractComponent; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.tasks.Task; @@ -43,7 +42,6 @@ public abstract class TransportAction shardFailures) { + public BroadcastResponse(int totalShards, int successfulShards, int failedShards, + List shardFailures) { + assertNoShardNotAvailableFailures(shardFailures); this.totalShards = totalShards; this.successfulShards = successfulShards; this.failedShards = failedShards; - this.shardFailures = 
shardFailures == null ? EMPTY : shardFailures.toArray(new ShardOperationFailedException[shardFailures.size()]); + this.shardFailures = shardFailures == null ? EMPTY : + shardFailures.toArray(new ShardOperationFailedException[shardFailures.size()]); + } + + private void assertNoShardNotAvailableFailures(List shardFailures) { + if (shardFailures != null) { + for (Object e : shardFailures) { + assert (e instanceof ShardNotFoundException) == false : "expected no ShardNotFoundException failures, but got " + e; + } + } } /** @@ -70,6 +83,17 @@ public int getFailedShards() { return failedShards; } + /** + * The REST status that should be used for the response + */ + public RestStatus getStatus() { + if (failedShards > 0) { + return shardFailures[0].status(); + } else { + return RestStatus.OK; + } + } + /** * The list of shard failures exception. */ diff --git a/core/src/main/java/org/elasticsearch/action/support/broadcast/TransportBroadcastAction.java b/core/src/main/java/org/elasticsearch/action/support/broadcast/TransportBroadcastAction.java index c48fa1e81223c..53764f4ee88d6 100644 --- a/core/src/main/java/org/elasticsearch/action/support/broadcast/TransportBroadcastAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/broadcast/TransportBroadcastAction.java @@ -94,7 +94,7 @@ protected ShardResponse shardOperation(ShardRequest request, Task task) throws I * Determines the shards this operation will be executed on. The operation is executed once per shard iterator, typically * on the first shard in it. If the operation fails, it will be retried on the next shard in the iterator. */ - protected abstract GroupShardsIterator shards(ClusterState clusterState, Request request, String[] concreteIndices); + protected abstract GroupShardsIterator shards(ClusterState clusterState, Request request, String[] concreteIndices); protected abstract ClusterBlockException checkGlobalBlock(ClusterState state, Request request); @@ -107,7 +107,7 @@ protected class AsyncBroadcastAction { private final ActionListener listener; private final ClusterState clusterState; private final DiscoveryNodes nodes; - private final GroupShardsIterator shardsIts; + private final GroupShardsIterator shardsIts; private final int expectedOps; private final AtomicInteger counterOps = new AtomicInteger(); private final AtomicReferenceArray shardsResponses; @@ -175,7 +175,6 @@ protected void performOperation(final ShardIterator shardIt, final ShardRouting // no node connected, act as failure onOperation(shard, shardIt, shardIndex, new NoShardAvailableActionException(shardIt.shardId())); } else { - taskManager.registerChildTask(task, node.getId()); transportService.sendRequest(node, transportShardAction, shardRequest, new TransportResponseHandler() { @Override public ShardResponse newInstance() { diff --git a/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java b/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java index 9f11b9b5a707b..3ef967472a597 100644 --- a/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/broadcast/node/TransportBroadcastByNodeAction.java @@ -172,7 +172,7 @@ private Response newResponse( * @param successfulShards the total number of shards for which execution of the operation was successful * @param failedShards the total number of shards for which execution of the operation failed * @param 
results the per-node aggregated shard-level results - * @param shardFailures the exceptions corresponding to shard operationa failures + * @param shardFailures the exceptions corresponding to shard operation failures * @param clusterState the cluster state * @return the response */ @@ -270,7 +270,7 @@ protected AsyncAction(Task task, Request request, ActionListener liste ShardsIterator shardIt = shards(clusterState, request, concreteIndices); nodeIds = new HashMap<>(); - for (ShardRouting shard : shardIt.asUnordered()) { + for (ShardRouting shard : shardIt) { // send a request to the shard only if it is assigned to a node that is in the local node's cluster state // a scenario in which a shard can be assigned but to a node that is not in the local node's cluster state // is when the shard is assigned to the master node, the local node has detected the master as failed @@ -318,7 +318,6 @@ private void sendNodeRequest(final DiscoveryNode node, List shards NodeRequest nodeRequest = new NodeRequest(node.getId(), request, shards); if (task != null) { nodeRequest.setParentTask(clusterService.localNode().getId(), task.getId()); - taskManager.registerChildTask(task, node.getId()); } transportService.sendRequest(node, transportNodeBroadcastAction, nodeRequest, new TransportResponseHandler() { @Override @@ -439,7 +438,6 @@ private void onShardOperation(final NodeRequest request, final Object[] shardRes } catch (Exception e) { BroadcastShardOperationFailedException failure = new BroadcastShardOperationFailedException(shardRouting.shardId(), "operation " + actionName + " failed", e); - failure.setIndex(shardRouting.getIndexName()); failure.setShard(shardRouting.shardId()); shardResults[shardIndex] = failure; if (TransportActions.isShardNotAvailableException(e)) { @@ -524,10 +522,10 @@ class NodeResponse extends TransportResponse { protected List exceptions; protected List results; - public NodeResponse() { + NodeResponse() { } - public NodeResponse(String nodeId, + NodeResponse(String nodeId, int totalShards, List results, List exceptions) { diff --git a/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeAction.java b/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeAction.java index fbae9f7a12bde..f2bc4da423dea 100644 --- a/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/TransportMasterNodeAction.java @@ -160,7 +160,6 @@ public void onFailure(Exception t) { } } }; - taskManager.registerChildTask(task, nodes.getLocalNodeId()); threadPool.executor(executor).execute(new ActionRunnable(delegate) { @Override protected void doRun() throws Exception { @@ -173,7 +172,6 @@ protected void doRun() throws Exception { logger.debug("no known master node, scheduling a retry"); retry(null, masterChangePredicate); } else { - taskManager.registerChildTask(task, nodes.getMasterNode().getId()); transportService.sendRequest(nodes.getMasterNode(), actionName, request, new ActionListenerResponseHandler(listener, TransportMasterNodeAction.this::newResponse) { @Override public void handleException(final TransportException exp) { diff --git a/core/src/main/java/org/elasticsearch/action/support/master/info/TransportClusterInfoAction.java b/core/src/main/java/org/elasticsearch/action/support/master/info/TransportClusterInfoAction.java index 59b1997b35613..9bf7356a9fe33 100644 --- 
a/core/src/main/java/org/elasticsearch/action/support/master/info/TransportClusterInfoAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/master/info/TransportClusterInfoAction.java @@ -52,5 +52,5 @@ protected final void masterOperation(final Request request, final ClusterState s doMasterOperation(request, concreteIndices, state, listener); } - protected abstract void doMasterOperation(Request request, String[] concreteIndices, ClusterState state, final ActionListener listener); + protected abstract void doMasterOperation(Request request, String[] concreteIndices, ClusterState state, ActionListener listener); } diff --git a/core/src/main/java/org/elasticsearch/action/support/nodes/TransportNodesAction.java b/core/src/main/java/org/elasticsearch/action/support/nodes/TransportNodesAction.java index 6cc063d5af1cb..4583e47bc1db7 100644 --- a/core/src/main/java/org/elasticsearch/action/support/nodes/TransportNodesAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/nodes/TransportNodesAction.java @@ -106,16 +106,11 @@ protected NodesResponse newResponse(NodesRequest request, AtomicReferenceArray n final List responses = new ArrayList<>(); final List failures = new ArrayList<>(); - final boolean accumulateExceptions = accumulateExceptions(); for (int i = 0; i < nodesResponses.length(); ++i) { Object response = nodesResponses.get(i); if (response instanceof FailedNodeException) { - if (accumulateExceptions) { - failures.add((FailedNodeException)response); - } else { - logger.warn("not accumulating exceptions, excluding exception from response", (FailedNodeException)response); - } + failures.add((FailedNodeException)response); } else { responses.add(nodeResponseClass.cast(response)); } @@ -145,8 +140,6 @@ protected NodeResponse nodeOperation(NodeRequest request, Task task) { return nodeOperation(request); } - protected abstract boolean accumulateExceptions(); - /** * resolve node ids to concrete nodes of the incoming request **/ @@ -199,7 +192,6 @@ void start() { TransportRequest nodeRequest = newNodeRequest(nodeId, request); if (task != null) { nodeRequest.setParentTask(clusterService.localNode().getId(), task.getId()); - taskManager.registerChildTask(task, node.getId()); } transportService.sendRequest(node, transportNodeAction, nodeRequest, builder.build(), diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicatedWriteRequest.java b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicatedWriteRequest.java index 107c791a069eb..fa02dac9e1e2d 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicatedWriteRequest.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicatedWriteRequest.java @@ -19,14 +19,12 @@ package org.elasticsearch.action.support.replication; -import org.elasticsearch.Version; import org.elasticsearch.action.bulk.BulkShardRequest; import org.elasticsearch.action.delete.DeleteRequest; import org.elasticsearch.action.index.IndexRequest; import org.elasticsearch.action.support.WriteRequest; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; -import org.elasticsearch.index.seqno.SequenceNumbersService; import org.elasticsearch.index.shard.ShardId; import java.io.IOException; @@ -38,8 +36,6 @@ public abstract class ReplicatedWriteRequest> extends ReplicationRequest implements WriteRequest { private RefreshPolicy refreshPolicy = RefreshPolicy.NONE; - private long seqNo = 
SequenceNumbersService.UNASSIGNED_SEQ_NO; - /** * Constructor for deserialization. */ @@ -66,32 +62,11 @@ public RefreshPolicy getRefreshPolicy() { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); refreshPolicy = RefreshPolicy.readFrom(in); - if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { - seqNo = in.readZLong(); - } else { - seqNo = SequenceNumbersService.UNASSIGNED_SEQ_NO; - } } @Override public void writeTo(StreamOutput out) throws IOException { super.writeTo(out); refreshPolicy.writeTo(out); - if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { - out.writeZLong(seqNo); - } - } - - /** - * Returns the sequence number for this operation. The sequence number is assigned while the operation - * is performed on the primary shard. - */ - public long getSeqNo() { - return seqNo; - } - - /** sets the sequence number for this operation. should only be called on the primary shard */ - public void setSeqNo(long seqNo) { - this.seqNo = seqNo; } } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java index 25dcc29a5c3a3..5623d9bbc1174 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationOperation.java @@ -20,6 +20,7 @@ import org.apache.logging.log4j.Logger; import org.apache.logging.log4j.message.ParameterizedMessage; +import org.apache.lucene.store.AlreadyClosedException; import org.elasticsearch.ElasticsearchException; import org.elasticsearch.ExceptionsHelper; import org.elasticsearch.action.ActionListener; @@ -35,7 +36,6 @@ import org.elasticsearch.common.Nullable; import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.util.set.Sets; -import org.elasticsearch.index.engine.VersionConflictEngineException; import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.rest.RestStatus; @@ -74,7 +74,6 @@ public class ReplicationOperation< */ private final AtomicInteger pendingActions = new AtomicInteger(); private final AtomicInteger successfulShards = new AtomicInteger(); - private final boolean executeOnReplicas; private final Primary primary; private final Replicas replicasProxy; private final AtomicBoolean finished = new AtomicBoolean(); @@ -86,9 +85,8 @@ public class ReplicationOperation< public ReplicationOperation(Request request, Primary primary, ActionListener listener, - boolean executeOnReplicas, Replicas replicas, + Replicas replicas, Supplier clusterStateSupplier, Logger logger, String opType) { - this.executeOnReplicas = executeOnReplicas; this.replicasProxy = replicas; this.primary = primary; this.resultListener = listener; @@ -128,7 +126,7 @@ public void execute() throws Exception { markUnavailableShardsAsStale(replicaRequest, inSyncAllocationIds, shards); - performOnReplicas(replicaRequest, shards); + performOnReplicas(replicaRequest, primary.globalCheckpoint(), shards); } successfulShards.incrementAndGet(); // mark primary as successful @@ -147,7 +145,7 @@ private void markUnavailableShardsAsStale(ReplicaRequest replicaRequest, Set decPendingAndFinishIfNeeded() @@ -156,11 +154,11 @@ private void markUnavailableShardsAsStale(ReplicaRequest replicaRequest, Set shards) { + private void performOnReplicas(final ReplicaRequest replicaRequest, final long globalCheckpoint, final List shards) { final String 
localNodeId = primary.routingEntry().currentNodeId(); // If the index gets deleted after primary operation, we skip replication for (final ShardRouting shard : shards) { - if (executeOnReplicas == false || shard.unassigned()) { + if (shard.unassigned()) { if (shard.primary() == false) { totalShards.incrementAndGet(); } @@ -168,27 +166,35 @@ private void performOnReplicas(ReplicaRequest replicaRequest, List } if (shard.currentNodeId().equals(localNodeId) == false) { - performOnReplica(shard, replicaRequest); + performOnReplica(shard, replicaRequest, globalCheckpoint); } if (shard.relocating() && shard.relocatingNodeId().equals(localNodeId) == false) { - performOnReplica(shard.getTargetRelocatingShard(), replicaRequest); + performOnReplica(shard.getTargetRelocatingShard(), replicaRequest, globalCheckpoint); } } } - private void performOnReplica(final ShardRouting shard, final ReplicaRequest replicaRequest) { + private void performOnReplica(final ShardRouting shard, final ReplicaRequest replicaRequest, final long globalCheckpoint) { if (logger.isTraceEnabled()) { logger.trace("[{}] sending op [{}] to replica {} for request [{}]", shard.shardId(), opType, shard, replicaRequest); } totalShards.incrementAndGet(); pendingActions.incrementAndGet(); - replicasProxy.performOn(shard, replicaRequest, new ActionListener() { + replicasProxy.performOn(shard, replicaRequest, globalCheckpoint, new ActionListener() { @Override public void onResponse(ReplicaResponse response) { successfulShards.incrementAndGet(); - primary.updateLocalCheckpointForShard(response.allocationId(), response.localCheckpoint()); + try { + primary.updateLocalCheckpointForShard(response.allocationId(), response.localCheckpoint()); + } catch (final AlreadyClosedException e) { + // okay, the index was deleted or this shard was never activated after a relocation; fall through and finish normally + } catch (final Exception e) { + // fail the primary but fall through and let the rest of operation processing complete + final String message = String.format(Locale.ROOT, "primary failed updating local checkpoint for replica %s", shard); + primary.failShard(message, e); + } decPendingAndFinishIfNeeded(); } @@ -202,21 +208,16 @@ public void onFailure(Exception replicaException) { shard, replicaRequest), replicaException); - if (ignoreReplicaException(replicaException)) { + if (TransportActions.isShardNotAvailableException(replicaException)) { decPendingAndFinishIfNeeded(); } else { RestStatus restStatus = ExceptionsHelper.status(replicaException); shardReplicaFailures.add(new ReplicationResponse.ShardInfo.Failure( shard.shardId(), shard.currentNodeId(), replicaException, restStatus, false)); String message = String.format(Locale.ROOT, "failed to perform %s on replica %s", opType, shard); - logger.warn( - (org.apache.logging.log4j.util.Supplier) - () -> new ParameterizedMessage("[{}] {}", shard.shardId(), message), replicaException); - replicasProxy.failShard(shard, replicaRequest.primaryTerm(), message, replicaException, - ReplicationOperation.this::decPendingAndFinishIfNeeded, - ReplicationOperation.this::onPrimaryDemoted, - throwable -> decPendingAndFinishIfNeeded() - ); + replicasProxy.failShardIfNeeded(shard, replicaRequest.primaryTerm(), message, + replicaException, ReplicationOperation.this::decPendingAndFinishIfNeeded, + ReplicationOperation.this::onPrimaryDemoted, throwable -> decPendingAndFinishIfNeeded()); } } }); @@ -314,34 +315,13 @@ private void finishAsFailed(Exception exception) { } } - /** - * Should an exception be ignored when 
the operation is performed on the replica. + * An encapsulation of an operation that is to be performed on the primary shard */ - public static boolean ignoreReplicaException(Exception e) { - if (TransportActions.isShardNotAvailableException(e)) { - return true; - } - // on version conflict or document missing, it means - // that a new change has crept into the replica, and it's fine - if (isConflictException(e)) { - return true; - } - return false; - } - - public static boolean isConflictException(Throwable t) { - final Throwable cause = ExceptionsHelper.unwrapCause(t); - // on version conflict or document missing, it means - // that a new change has crept into the replica, and it's fine - return cause instanceof VersionConflictEngineException; - } - - public interface Primary< - Request extends ReplicationRequest, - ReplicaRequest extends ReplicationRequest, - PrimaryResultT extends PrimaryResult + RequestT extends ReplicationRequest, + ReplicaRequestT extends ReplicationRequest, + PrimaryResultT extends PrimaryResult > { /** @@ -350,7 +330,10 @@ public interface Primary< ShardRouting routingEntry(); /** - * fail the primary, typically due to the fact that the operation has learned the primary has been demoted by the master + * Fail the primary shard. + * + * @param message the failure message + * @param exception the exception that triggered the failure */ void failShard(String message, Exception exception); @@ -360,10 +343,9 @@ public interface Primary< * also complete after. Deal with it. * * @param request the request to perform - * @return the request to send to the repicas + * @return the request to send to the replicas */ - PrimaryResultT perform(Request request) throws Exception; - + PrimaryResultT perform(RequestT request) throws Exception; /** * Notifies the primary of a local checkpoint for the given allocation. @@ -375,37 +357,58 @@ public interface Primary< */ void updateLocalCheckpointForShard(String allocationId, long checkpoint); - /** returns the local checkpoint of the primary shard */ + /** + * Returns the local checkpoint on the primary shard. + * + * @return the local checkpoint + */ long localCheckpoint(); + + /** + * Returns the global checkpoint on the primary shard. + * + * @return the global checkpoint + */ + long globalCheckpoint(); + } - public interface Replicas> { + /** + * An encapsulation of an operation that will be executed on the replica shards, if present. + */ + public interface Replicas> { /** - * performs the the given request on the specified replica + * Performs the the specified request on the specified replica. * - * @param replica {@link ShardRouting} of the shard this request should be executed on - * @param replicaRequest operation to peform - * @param listener a callback to call once the operation has been complicated, either successfully or with an error. + * @param replica the shard this request should be executed on + * @param replicaRequest the operation to perform + * @param globalCheckpoint the global checkpoint on the primary + * @param listener callback for handling the response or failure */ - void performOn(ShardRouting replica, ReplicaRequest replicaRequest, ActionListener listener); + void performOn(ShardRouting replica, RequestT replicaRequest, long globalCheckpoint, ActionListener listener); /** - * Fail the specified shard, removing it from the current set of active shards + * Fail the specified shard if needed, removing it from the current set + * of active shards. 
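/*
 * A self-contained sketch (hypothetical names, not the real ReplicationOperation generics) of the
 * replica-failure handling that this change delegates to failShardIfNeeded: shard-not-available
 * failures are simply skipped, while any other failure is recorded and handed to the replicas proxy,
 * which decides whether the shard copy actually needs to be failed, hence the "if needed" naming.
 */
import java.util.List;

import org.elasticsearch.action.support.TransportActions;

final class ReplicaFailureHandlingSketch {

    /** Hypothetical stand-in for the parts of ReplicationOperation.Replicas used in this sketch. */
    interface ReplicasProxy {
        void failShardIfNeeded(String shardCopy, long primaryTerm, String message, Exception cause);
    }

    static void onReplicaFailure(ReplicasProxy proxy, String shardCopy, long primaryTerm,
                                 Exception failure, List<Exception> recordedFailures) {
        if (TransportActions.isShardNotAvailableException(failure)) {
            // the copy is not there to begin with, so there is nothing to fail; just move on
            return;
        }
        recordedFailures.add(failure);
        // whether this really fails the copy is left to the proxy implementation
        proxy.failShardIfNeeded(shardCopy, primaryTerm, "failed to perform operation on replica", failure);
    }
}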
Whether a failure is needed is left up to the + * implementation. + * * @param replica shard to fail * @param primaryTerm the primary term of the primary shard when requesting the failure * @param message a (short) description of the reason * @param exception the original exception which caused the ReplicationOperation to request the shard to be failed * @param onSuccess a callback to call when the shard has been successfully removed from the active set. * @param onPrimaryDemoted a callback to call when the shard can not be failed because the current primary has been demoted -* by the master. + * by the master. * @param onIgnoredFailure a callback to call when failing a shard has failed, but it that failure can be safely ignored and the */ - void failShard(ShardRouting replica, long primaryTerm, String message, Exception exception, Runnable onSuccess, - Consumer onPrimaryDemoted, Consumer onIgnoredFailure); + void failShardIfNeeded(ShardRouting replica, long primaryTerm, String message, Exception exception, Runnable onSuccess, + Consumer onPrimaryDemoted, Consumer onIgnoredFailure); /** - * Marks shard copy as stale, removing its allocation id from the set of in-sync allocation ids. + * Marks shard copy as stale if needed, removing its allocation id from + * the set of in-sync allocation ids. Whether marking as stale is needed + * is left up to the implementation. * * @param shardId shard id * @param allocationId allocation id to remove from the set of in-sync allocation ids @@ -415,8 +418,8 @@ void failShard(ShardRouting replica, long primaryTerm, String message, Exception * by the master. * @param onIgnoredFailure a callback to call when the request failed, but the failure can be safely ignored. */ - void markShardCopyAsStale(ShardId shardId, String allocationId, long primaryTerm, Runnable onSuccess, - Consumer onPrimaryDemoted, Consumer onIgnoredFailure); + void markShardCopyAsStaleIfNeeded(ShardId shardId, String allocationId, long primaryTerm, Runnable onSuccess, + Consumer onPrimaryDemoted, Consumer onIgnoredFailure); } /** @@ -446,13 +449,13 @@ public RetryOnPrimaryException(StreamInput in) throws IOException { } } - public interface PrimaryResult> { + public interface PrimaryResult> { /** * @return null if no operation needs to be sent to a replica * (for example when the operation failed on the primary due to a parsing exception) */ - @Nullable R replicaRequest(); + @Nullable RequestT replicaRequest(); void setShardInfo(ReplicationResponse.ShardInfo shardInfo); } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationResponse.java b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationResponse.java index afb92e27205f1..4b1873e8d06e4 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationResponse.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/ReplicationResponse.java @@ -28,7 +28,7 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import org.elasticsearch.common.xcontent.XContentParser; import org.elasticsearch.index.shard.ShardId; @@ -38,7 +38,6 @@ import java.util.ArrayList; import java.util.Arrays; import java.util.List; -import java.util.Objects; import static 
org.elasticsearch.common.xcontent.XContentParserUtils.ensureExpectedToken; import static org.elasticsearch.common.xcontent.XContentParserUtils.throwUnknownField; @@ -72,9 +71,8 @@ public void setShardInfo(ShardInfo shardInfo) { this.shardInfo = shardInfo; } - public static class ShardInfo implements Streamable, ToXContent { + public static class ShardInfo implements Streamable, ToXContentObject { - private static final String _SHARDS = "_shards"; private static final String TOTAL = "total"; private static final String SUCCESSFUL = "successful"; private static final String FAILED = "failed"; @@ -134,25 +132,6 @@ public RestStatus status() { return status; } - @Override - public boolean equals(Object that) { - if (this == that) { - return true; - } - if (that == null || getClass() != that.getClass()) { - return false; - } - ShardInfo other = (ShardInfo) that; - return Objects.equals(total, other.total) && - Objects.equals(successful, other.successful) && - Arrays.equals(failures, other.failures); - } - - @Override - public int hashCode() { - return Objects.hash(total, successful, failures); - } - @Override public void readFrom(StreamInput in) throws IOException { total = in.readVInt(); @@ -178,7 +157,7 @@ public void writeTo(StreamOutput out) throws IOException { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { - builder.startObject(_SHARDS); + builder.startObject(); builder.field(TOTAL, total); builder.field(SUCCESSFUL, successful); builder.field(FAILED, getFailed()); @@ -194,18 +173,12 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws } public static ShardInfo fromXContent(XContentParser parser) throws IOException { - XContentParser.Token token = parser.nextToken(); - ensureExpectedToken(XContentParser.Token.FIELD_NAME, token, parser::getTokenLocation); - - String currentFieldName = parser.currentName(); - if (_SHARDS.equals(currentFieldName) == false) { - throwUnknownField(currentFieldName, parser.getTokenLocation()); - } - token = parser.nextToken(); + XContentParser.Token token = parser.currentToken(); ensureExpectedToken(XContentParser.Token.START_OBJECT, token, parser::getTokenLocation); int total = 0, successful = 0; List failuresList = null; + String currentFieldName = null; while ((token = parser.nextToken()) != XContentParser.Token.END_OBJECT) { if (token == XContentParser.Token.FIELD_NAME) { currentFieldName = parser.currentName(); @@ -250,7 +223,7 @@ public static ShardInfo readShardInfo(StreamInput in) throws IOException { return shardInfo; } - public static class Failure implements ShardOperationFailedException, ToXContent { + public static class Failure implements ShardOperationFailedException, ToXContentObject { private static final String _INDEX = "_index"; private static final String _SHARD = "_shard"; @@ -333,27 +306,6 @@ public boolean primary() { return primary; } - @Override - public boolean equals(Object that) { - if (this == that) { - return true; - } - if (that == null || getClass() != that.getClass()) { - return false; - } - Failure failure = (Failure) that; - return Objects.equals(primary, failure.primary) && - Objects.equals(shardId, failure.shardId) && - Objects.equals(nodeId, failure.nodeId) && - Objects.equals(cause, failure.cause) && - Objects.equals(status, failure.status); - } - - @Override - public int hashCode() { - return Objects.hash(shardId, nodeId, cause, status, primary); - } - @Override public void readFrom(StreamInput in) throws IOException { shardId = 
ShardId.readShardId(in); @@ -380,7 +332,7 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(_NODE, nodeId); builder.field(REASON); builder.startObject(); - ElasticsearchException.toXContent(builder, params, cause); + ElasticsearchException.generateThrowableXContent(builder, params, cause); builder.endObject(); builder.field(STATUS, status); builder.field(PRIMARY, primary); diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java b/core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java index e33d10eaa25ad..8193cf77cebef 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/TransportBroadcastReplicationAction.java @@ -119,7 +119,6 @@ public void onFailure(Exception e) { protected void shardExecute(Task task, Request request, ShardId shardId, ActionListener shardActionListener) { ShardRequest shardRequest = newShardRequest(request, shardId); shardRequest.setParentTask(clusterService.localNode().getId(), task.getId()); - taskManager.registerChildTask(task, clusterService.localNode().getId()); replicatedBroadcastShardAction.execute(shardRequest, shardActionListener); } diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java b/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java index 15d62ea23a246..946692f182643 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/TransportReplicationAction.java @@ -38,7 +38,6 @@ import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.IndexMetaData; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; -import org.elasticsearch.cluster.metadata.MetaData; import org.elasticsearch.cluster.node.DiscoveryNode; import org.elasticsearch.cluster.routing.AllocationId; import org.elasticsearch.cluster.routing.IndexShardRoutingTable; @@ -52,7 +51,6 @@ import org.elasticsearch.common.settings.Settings; import org.elasticsearch.common.unit.TimeValue; import org.elasticsearch.common.util.concurrent.AbstractRunnable; -import org.elasticsearch.common.util.concurrent.ThreadContext; import org.elasticsearch.index.IndexNotFoundException; import org.elasticsearch.index.IndexService; import org.elasticsearch.index.shard.IndexShard; @@ -99,15 +97,15 @@ public abstract class TransportReplicationAction< private final TransportService transportService; protected final ClusterService clusterService; + protected final ShardStateAction shardStateAction; private final IndicesService indicesService; - private final ShardStateAction shardStateAction; private final TransportRequestOptions transportOptions; private final String executor; // package private for testing private final String transportReplicaAction; private final String transportPrimaryAction; - private final ReplicasProxy replicasProxy; + private final ReplicationOperation.Replicas replicasProxy; protected TransportReplicationAction(Settings settings, String actionName, TransportService transportService, ClusterService clusterService, IndicesService indicesService, @@ -129,13 +127,13 @@ protected TransportReplicationAction(Settings settings, String actionName, 
Trans new PrimaryOperationTransportHandler()); // we must never reject on because of thread pool capacity on replicas transportService.registerRequestHandler(transportReplicaAction, - () -> new ConcreteShardRequest<>(replicaRequest), + () -> new ConcreteReplicaRequest<>(replicaRequest), executor, true, true, new ReplicaOperationTransportHandler()); this.transportOptions = transportOptions(); - this.replicasProxy = new ReplicasProxy(); + this.replicasProxy = newReplicasProxy(); } @Override @@ -148,18 +146,20 @@ protected void doExecute(Task task, Request request, ActionListener li new ReroutePhase((ReplicationTask) task, request, listener).run(); } + protected ReplicationOperation.Replicas newReplicasProxy() { + return new ReplicasProxy(); + } + protected abstract Response newResponseInstance(); /** - * Resolves derived values in the request. For example, the target shard id of the incoming request, - * if not set at request construction + * Resolves derived values in the request. For example, the target shard id of the incoming request, if not set at request construction. * Additional processing or validation of the request should be done here. * - * @param metaData cluster state metadata * @param indexMetaData index metadata of the concrete index this request is going to operate on * @param request the request to resolve */ - protected void resolveRequest(MetaData metaData, IndexMetaData indexMetaData, Request request) { + protected void resolveRequest(final IndexMetaData indexMetaData, final Request request) { if (request.waitForActiveShards() == ActiveShardCount.DEFAULT) { // if the wait for active shard count has not been set in the request, // resolve it from the index settings @@ -173,11 +173,12 @@ protected void resolveRequest(MetaData metaData, IndexMetaData indexMetaData, Re * @param shardRequest the request to the primary shard * @param primary the primary shard to perform the operation on */ - protected abstract PrimaryResult shardOperationOnPrimary(Request shardRequest, IndexShard primary) throws Exception; + protected abstract PrimaryResult shardOperationOnPrimary( + Request shardRequest, IndexShard primary) throws Exception; /** - * Synchronous replica operation on nodes with replica copies. This is done under the lock form - * {@link IndexShard#acquireReplicaOperationLock(long, ActionListener, String)} + * Synchronously execute the specified replica operation. This is done under a permit from + * {@link IndexShard#acquireReplicaOperationPermit(long, ActionListener, String)}. 
* * @param shardRequest the request to the replica shard * @param replica the replica shard to perform the operation on @@ -314,11 +315,10 @@ public void handleException(TransportException exp) { } else { setPhase(replicationTask, "primary"); final IndexMetaData indexMetaData = clusterService.state().getMetaData().index(request.shardId().getIndex()); - final boolean executeOnReplicas = (indexMetaData == null) || shouldExecuteReplication(indexMetaData.getSettings()); final ActionListener listener = createResponseListener(primaryShardReference); createReplicatedOperation(request, ActionListener.wrap(result -> result.respond(listener), listener::onFailure), - primaryShardReference, executeOnReplicas) + primaryShardReference) .execute(); } } catch (Exception e) { @@ -364,19 +364,20 @@ public void onFailure(Exception e) { }; } - protected ReplicationOperation createReplicatedOperation( - Request request, ActionListener listener, - PrimaryShardReference primaryShardReference, boolean executeOnReplicas) { + protected ReplicationOperation> createReplicatedOperation( + Request request, ActionListener> listener, + PrimaryShardReference primaryShardReference) { return new ReplicationOperation<>(request, primaryShardReference, listener, - executeOnReplicas, replicasProxy, clusterService::state, logger, actionName - ); + replicasProxy, clusterService::state, logger, actionName); } } - protected class PrimaryResult implements ReplicationOperation.PrimaryResult { + protected static class PrimaryResult, + Response extends ReplicationResponse> + implements ReplicationOperation.PrimaryResult { final ReplicaRequest replicaRequest; - final Response finalResponseIfSuccessful; - final Exception finalFailure; + public final Response finalResponseIfSuccessful; + public final Exception finalFailure; /** * Result of executing a primary operation @@ -416,7 +417,7 @@ public void respond(ActionListener listener) { } } - protected class ReplicaResult { + protected static class ReplicaResult { final Exception finalFailure; public ReplicaResult(Exception finalFailure) { @@ -436,18 +437,28 @@ public void respond(ActionListener listener) { } } - class ReplicaOperationTransportHandler implements TransportRequestHandler> { + class ReplicaOperationTransportHandler implements TransportRequestHandler> { + @Override - public void messageReceived(final ConcreteShardRequest request, final TransportChannel channel) - throws Exception { + public void messageReceived( + final ConcreteReplicaRequest replicaRequest, final TransportChannel channel) throws Exception { throw new UnsupportedOperationException("the task parameter is required for this operation"); } @Override - public void messageReceived(ConcreteShardRequest requestWithAID, TransportChannel channel, Task task) + public void messageReceived( + final ConcreteReplicaRequest replicaRequest, + final TransportChannel channel, + final Task task) throws Exception { - new AsyncReplicaAction(requestWithAID.request, requestWithAID.targetAllocationID, channel, (ReplicationTask) task).run(); + new AsyncReplicaAction( + replicaRequest.getRequest(), + replicaRequest.getTargetAllocationID(), + replicaRequest.getGlobalCheckpoint(), + channel, + (ReplicationTask) task).run(); } + } public static class RetryOnReplicaException extends ElasticsearchException { @@ -466,6 +477,7 @@ private final class AsyncReplicaAction extends AbstractRunnable implements Actio private final ReplicaRequest request; // allocation id of the replica this request is meant for private final String targetAllocationID; 
+ private final long globalCheckpoint; private final TransportChannel channel; private final IndexShard replica; /** @@ -476,11 +488,17 @@ private final class AsyncReplicaAction extends AbstractRunnable implements Actio // something we want to avoid at all costs private final ClusterStateObserver observer = new ClusterStateObserver(clusterService, null, logger, threadPool.getThreadContext()); - AsyncReplicaAction(ReplicaRequest request, String targetAllocationID, TransportChannel channel, ReplicationTask task) { + AsyncReplicaAction( + ReplicaRequest request, + String targetAllocationID, + long globalCheckpoint, + TransportChannel channel, + ReplicationTask task) { this.request = request; this.channel = channel; this.task = task; this.targetAllocationID = targetAllocationID; + this.globalCheckpoint = globalCheckpoint; final ShardId shardId = request.shardId(); assert shardId != null : "request shardId must be set"; this.replica = getIndexShard(shardId); @@ -489,12 +507,13 @@ private final class AsyncReplicaAction extends AbstractRunnable implements Actio @Override public void onResponse(Releasable releasable) { try { - ReplicaResult replicaResult = shardOperationOnReplica(request, replica); + replica.updateGlobalCheckpointOnReplica(globalCheckpoint); + final ReplicaResult replicaResult = shardOperationOnReplica(request, replica); releasable.close(); // release shard operation lock before responding to caller final TransportReplicationAction.ReplicaResponse response = new ReplicaResponse(replica.routingEntry().allocationId().getId(), replica.getLocalCheckpoint()); replicaResult.respond(new ResponseListener(response)); - } catch (Exception e) { + } catch (final Exception e) { Releasables.closeWhileHandlingException(releasable); // release shard operation lock before responding to caller AsyncReplicaAction.this.onFailure(e); } @@ -511,18 +530,17 @@ public void onFailure(Exception e) { request), e); request.onRetry(); - final ThreadContext.StoredContext context = threadPool.getThreadContext().newStoredContext(); observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override public void onNewClusterState(ClusterState state) { - context.close(); // Forking a thread on local node via transport service so that custom transport service have an // opportunity to execute custom logic before the replica operation begins String extraMessage = "action [" + transportReplicaAction + "], request[" + request + "]"; TransportChannelResponseHandler handler = - new TransportChannelResponseHandler<>(logger, channel, extraMessage, () -> TransportResponse.Empty.INSTANCE); + new TransportChannelResponseHandler<>(logger, channel, extraMessage, + () -> TransportResponse.Empty.INSTANCE); transportService.sendRequest(clusterService.localNode(), transportReplicaAction, - new ConcreteShardRequest<>(request, targetAllocationID), + new ConcreteReplicaRequest<>(request, targetAllocationID, globalCheckpoint), handler); } @@ -564,7 +582,7 @@ protected void doRun() throws Exception { throw new ShardNotFoundException(this.replica.shardId(), "expected aID [{}] but found [{}]", targetAllocationID, actualAllocationId); } - replica.acquireReplicaOperationLock(request.primaryTerm, this, executor); + replica.acquireReplicaOperationPermit(request.primaryTerm, this, executor); } /** @@ -573,7 +591,7 @@ protected void doRun() throws Exception { private class ResponseListener implements ActionListener { private final ReplicaResponse replicaResponse; - public ResponseListener(ReplicaResponse replicaResponse) { + 
ResponseListener(ReplicaResponse replicaResponse) { this.replicaResponse = replicaResponse; } @@ -652,7 +670,7 @@ protected void doRun() { } // resolve all derived request fields, so we can route and apply it - resolveRequest(state.metaData(), indexMetaData, request); + resolveRequest(indexMetaData, request); assert request.shardId() != null : "request shardId must be set in resolveRequest"; assert request.waitForActiveShards() != ActiveShardCount.DEFAULT : "request waitForActiveShards must be set in resolveRequest"; @@ -661,7 +679,6 @@ protected void doRun() { return; } final DiscoveryNode node = state.nodes().get(primary.currentNodeId()); - taskManager.registerChildTask(task, node.getId()); if (primary.currentNodeId().equals(state.nodes().getLocalNodeId())) { performLocalAction(state, primary, node); } else { @@ -807,11 +824,9 @@ void retry(Exception failure) { } setPhase(task, "waiting_for_retry"); request.onRetry(); - final ThreadContext.StoredContext context = threadPool.getThreadContext().newStoredContext(); observer.waitForNextChange(new ClusterStateObserver.Listener() { @Override public void onNewClusterState(ClusterState state) { - context.close(); run(); } @@ -822,7 +837,6 @@ public void onClusterServiceClose() { @Override public void onTimeout(TimeValue timeout) { - context.close(); // Try one more time... run(); } @@ -905,15 +919,7 @@ public void onFailure(Exception e) { } }; - indexShard.acquirePrimaryOperationLock(onAcquired, executor); - } - - /** - * Indicated whether this operation should be replicated to shadow replicas or not. If this method returns true the replication phase - * will be skipped. For example writes such as index and delete don't need to be replicated on shadow replicas but refresh and flush do. - */ - protected boolean shouldExecuteReplication(Settings settings) { - return IndexMetaData.isIndexUsingShadowReplicas(settings) == false; + indexShard.acquirePrimaryOperationPermit(onAcquired, executor); } class ShardReference implements Releasable { @@ -941,7 +947,8 @@ public ShardRouting routingEntry() { } - class PrimaryShardReference extends ShardReference implements ReplicationOperation.Primary { + class PrimaryShardReference extends ShardReference + implements ReplicationOperation.Primary> { PrimaryShardReference(IndexShard indexShard, Releasable operationLock) { super(indexShard, operationLock); @@ -981,6 +988,11 @@ public long localCheckpoint() { return indexShard.getLocalCheckpoint(); } + @Override + public long globalCheckpoint() { + return indexShard.getGlobalCheckpoint(); + } + } @@ -999,7 +1011,7 @@ public ReplicaResponse(String allocationId, long localCheckpoint) { @Override public void readFrom(StreamInput in) throws IOException { - if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { super.readFrom(in); localCheckpoint = in.readZLong(); allocationId = in.readString(); @@ -1010,7 +1022,7 @@ public void readFrom(StreamInput in) throws IOException { @Override public void writeTo(StreamOutput out) throws IOException { - if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1_UNRELEASED)) { + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { super.writeTo(out); out.writeZLong(localCheckpoint); out.writeString(allocationId); @@ -1031,69 +1043,69 @@ public String allocationId() { } } - final class ReplicasProxy implements ReplicationOperation.Replicas { + /** + * The {@code ReplicasProxy} is an implementation of the {@code Replicas} + * interface that performs the actual 
{@code ReplicaRequest} on the replica + * shards. It also encapsulates the logic required for failing the replica + * if deemed necessary as well as marking it as stale when needed. + */ + class ReplicasProxy implements ReplicationOperation.Replicas { @Override - public void performOn(ShardRouting replica, ReplicaRequest request, ActionListener listener) { + public void performOn( + final ShardRouting replica, + final ReplicaRequest request, + final long globalCheckpoint, + final ActionListener listener) { String nodeId = replica.currentNodeId(); final DiscoveryNode node = clusterService.state().nodes().get(nodeId); if (node == null) { listener.onFailure(new NoNodeAvailableException("unknown node [" + nodeId + "]")); return; } - final ConcreteShardRequest concreteShardRequest = - new ConcreteShardRequest<>(request, replica.allocationId().getId()); - sendReplicaRequest(concreteShardRequest, node, listener); + final ConcreteReplicaRequest replicaRequest = + new ConcreteReplicaRequest<>(request, replica.allocationId().getId(), globalCheckpoint); + sendReplicaRequest(replicaRequest, node, listener); } @Override - public void failShard(ShardRouting replica, long primaryTerm, String message, Exception exception, - Runnable onSuccess, Consumer onPrimaryDemoted, Consumer onIgnoredFailure) { - shardStateAction.remoteShardFailed(replica.shardId(), replica.allocationId().getId(), primaryTerm, message, exception, - createListener(onSuccess, onPrimaryDemoted, onIgnoredFailure)); + public void failShardIfNeeded(ShardRouting replica, long primaryTerm, String message, Exception exception, + Runnable onSuccess, Consumer onPrimaryDemoted, Consumer onIgnoredFailure) { + // This does not need to fail the shard. The idea is that this + // is a non-write operation (something like a refresh or a global + // checkpoint sync) and therefore the replica should still be + // "alive" if it were to fail. + onSuccess.run(); } @Override - public void markShardCopyAsStale(ShardId shardId, String allocationId, long primaryTerm, Runnable onSuccess, - Consumer onPrimaryDemoted, Consumer onIgnoredFailure) { - shardStateAction.remoteShardFailed(shardId, allocationId, primaryTerm, "mark copy as stale", null, - createListener(onSuccess, onPrimaryDemoted, onIgnoredFailure)); - } - - private ShardStateAction.Listener createListener(final Runnable onSuccess, final Consumer onPrimaryDemoted, - final Consumer onIgnoredFailure) { - return new ShardStateAction.Listener() { - @Override - public void onSuccess() { - onSuccess.run(); - } - - @Override - public void onFailure(Exception shardFailedError) { - if (shardFailedError instanceof ShardStateAction.NoLongerPrimaryShardException) { - onPrimaryDemoted.accept(shardFailedError); - } else { - // these can occur if the node is shutting down and are okay - // any other exception here is not expected and merits investigation - assert shardFailedError instanceof TransportException || - shardFailedError instanceof NodeClosedException : shardFailedError; - onIgnoredFailure.accept(shardFailedError); - } - } - }; + public void markShardCopyAsStaleIfNeeded(ShardId shardId, String allocationId, long primaryTerm, Runnable onSuccess, + Consumer onPrimaryDemoted, Consumer onIgnoredFailure) { + // This does not need to make the shard stale. The idea is that this + // is a non-write operation (something like a refresh or a global + // checkpoint sync) and therefore the replica should still be + // "alive" if it were to be marked as stale. 
+ onSuccess.run(); } } - /** sends the given replica request to the supplied nodes */ - protected void sendReplicaRequest(ConcreteShardRequest concreteShardRequest, DiscoveryNode node, - ActionListener listener) { - transportService.sendRequest(node, transportReplicaAction, concreteShardRequest, transportOptions, - // Eclipse can't handle when this is <> so we specify the type here. - new ActionListenerResponseHandler(listener, ReplicaResponse::new)); + /** + * Sends the specified replica request to the specified node. + * + * @param replicaRequest the replica request + * @param node the node to send the request to + * @param listener callback for handling the response or failure + */ + protected void sendReplicaRequest( + final ConcreteReplicaRequest replicaRequest, + final DiscoveryNode node, + final ActionListener listener) { + final ActionListenerResponseHandler handler = new ActionListenerResponseHandler<>(listener, ReplicaResponse::new); + transportService.sendRequest(node, transportReplicaAction, replicaRequest, transportOptions, handler); } /** a wrapper class to encapsulate a request when being sent to a specific allocation id **/ - public static final class ConcreteShardRequest extends TransportRequest { + public static class ConcreteShardRequest extends TransportRequest { /** {@link AllocationId#getId()} of the shard this request is sent to **/ private String targetAllocationID; @@ -1163,6 +1175,49 @@ public String toString() { } } + protected static final class ConcreteReplicaRequest extends ConcreteShardRequest { + + private long globalCheckpoint; + + public ConcreteReplicaRequest(final Supplier requestSupplier) { + super(requestSupplier); + } + + public ConcreteReplicaRequest(final R request, final String targetAllocationID, final long globalCheckpoint) { + super(request, targetAllocationID); + this.globalCheckpoint = globalCheckpoint; + } + + @Override + public void readFrom(StreamInput in) throws IOException { + super.readFrom(in); + if (in.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { + globalCheckpoint = in.readZLong(); + } + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + super.writeTo(out); + if (out.getVersion().onOrAfter(Version.V_6_0_0_alpha1)) { + out.writeZLong(globalCheckpoint); + } + } + + public long getGlobalCheckpoint() { + return globalCheckpoint; + } + + @Override + public String toString() { + return "ConcreteReplicaRequest{" + + "targetAllocationID='" + getTargetAllocationID() + '\'' + + ", request=" + getRequest() + + ", globalCheckpoint=" + globalCheckpoint + + '}'; + } + } + /** * Sets the current phase on the task if it isn't null. Pulled into its own * method because its more convenient that way. 
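Note on the hunk above: the new ConcreteReplicaRequest carries the primary's global checkpoint to each replica, and the diff gates reading/writing that extra field on the wire version of the other node (readZLong/writeZLong only when on or after Version.V_6_0_0_alpha1). The snippet below is a minimal, self-contained sketch of that backward-compatible serialization pattern in plain Java. It is illustrative only: the class name, the VERSION_WITH_CHECKPOINT constant, the DataInput/DataOutput streams, and the -1 sentinel are assumptions for the sketch, not the actual Elasticsearch types or wire format.

// Sketch (not the real Elasticsearch classes): adding an optional field to a
// request in a backward-compatible way by gating it on the remote node's version.
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

final class ReplicaRequestSketch {
    // Hypothetical version from which the checkpoint is part of the wire format.
    static final int VERSION_WITH_CHECKPOINT = 60000;

    final String targetAllocationId;
    final long globalCheckpoint;

    ReplicaRequestSketch(String targetAllocationId, long globalCheckpoint) {
        this.targetAllocationId = targetAllocationId;
        this.globalCheckpoint = globalCheckpoint;
    }

    void writeTo(DataOutputStream out, int remoteVersion) throws IOException {
        out.writeUTF(targetAllocationId);
        if (remoteVersion >= VERSION_WITH_CHECKPOINT) {
            // Only send the field to nodes that know how to read it.
            out.writeLong(globalCheckpoint);
        }
    }

    static ReplicaRequestSketch readFrom(DataInputStream in, int senderVersion) throws IOException {
        String allocationId = in.readUTF();
        // Fall back to a sentinel when the sender predates the field.
        long checkpoint = senderVersion >= VERSION_WITH_CHECKPOINT ? in.readLong() : -1L;
        return new ReplicaRequestSketch(allocationId, checkpoint);
    }
}

The sentinel stands in for "checkpoint unknown" when talking to an older node; in the patch itself the equivalent role is played by the version check around readZLong/writeZLong in ConcreteReplicaRequest, which lets mixed-version clusters keep exchanging replica requests during a rolling upgrade.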
diff --git a/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java b/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java index 1a62c67aa5995..938e90b82b2fb 100644 --- a/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/replication/TransportWriteAction.java @@ -20,6 +20,7 @@ package org.elasticsearch.action.support.replication; import org.apache.logging.log4j.Logger; +import org.apache.logging.log4j.message.ParameterizedMessage; import org.elasticsearch.action.ActionListener; import org.elasticsearch.action.support.ActionFilters; import org.elasticsearch.action.support.WriteRequest; @@ -27,20 +28,25 @@ import org.elasticsearch.cluster.action.shard.ShardStateAction; import org.elasticsearch.cluster.block.ClusterBlockLevel; import org.elasticsearch.cluster.metadata.IndexNameExpressionResolver; +import org.elasticsearch.cluster.routing.ShardRouting; import org.elasticsearch.cluster.service.ClusterService; import org.elasticsearch.common.Nullable; import org.elasticsearch.common.settings.Settings; import org.elasticsearch.index.shard.IndexShard; +import org.elasticsearch.index.shard.ShardId; import org.elasticsearch.index.translog.Translog; import org.elasticsearch.index.translog.Translog.Location; import org.elasticsearch.indices.IndicesService; +import org.elasticsearch.node.NodeClosedException; import org.elasticsearch.threadpool.ThreadPool; +import org.elasticsearch.transport.TransportException; import org.elasticsearch.transport.TransportResponse; import org.elasticsearch.transport.TransportService; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; +import java.util.function.Consumer; import java.util.function.Supplier; /** @@ -61,6 +67,11 @@ protected TransportWriteAction(Settings settings, String actionName, TransportSe indexNameExpressionResolver, request, replicaRequest, executor); } + @Override + protected ReplicationOperation.Replicas newReplicasProxy() { + return new WriteActionReplicasProxy(); + } + /** * Called on the primary with a reference to the primary {@linkplain IndexShard} to modify. * @@ -68,7 +79,8 @@ protected TransportWriteAction(Settings settings, String actionName, TransportSe * async refresh is performed on the primary shard according to the Request refresh policy */ @Override - protected abstract WritePrimaryResult shardOperationOnPrimary(Request request, IndexShard primary) throws Exception; + protected abstract WritePrimaryResult shardOperationOnPrimary( + Request request, IndexShard primary) throws Exception; /** * Called once per replica with a reference to the replica {@linkplain IndexShard} to modify. @@ -77,19 +89,26 @@ protected TransportWriteAction(Settings settings, String actionName, TransportSe * async refresh is performed on the replica shard according to the ReplicaRequest refresh policy */ @Override - protected abstract WriteReplicaResult shardOperationOnReplica(ReplicaRequest request, IndexShard replica) throws Exception; + protected abstract WriteReplicaResult shardOperationOnReplica( + ReplicaRequest request, IndexShard replica) throws Exception; /** * Result of taking the action on the primary. 
+ * + * NOTE: public for testing */ - protected class WritePrimaryResult extends PrimaryResult implements RespondingWriteResult { + public static class WritePrimaryResult, + Response extends ReplicationResponse & WriteResponse> extends PrimaryResult + implements RespondingWriteResult { boolean finishedAsyncActions; + public final Location location; ActionListener listener = null; public WritePrimaryResult(ReplicaRequest request, @Nullable Response finalResponse, @Nullable Location location, @Nullable Exception operationFailure, - IndexShard primary) { + IndexShard primary, Logger logger) { super(request, finalResponse, operationFailure); + this.location = location; assert location == null || operationFailure == null : "expected either failure to be null or translog location to be null, " + "but found: [" + location + "] translog location and [" + operationFailure + "] failure"; @@ -139,13 +158,16 @@ public synchronized void onSuccess(boolean forcedRefresh) { /** * Result of taking the action on the replica. */ - protected class WriteReplicaResult extends ReplicaResult implements RespondingWriteResult { + protected static class WriteReplicaResult> + extends ReplicaResult implements RespondingWriteResult { + public final Location location; boolean finishedAsyncActions; private ActionListener listener; public WriteReplicaResult(ReplicaRequest request, @Nullable Location location, - @Nullable Exception operationFailure, IndexShard replica) { + @Nullable Exception operationFailure, IndexShard replica, Logger logger) { super(operationFailure); + this.location = location; if (operationFailure != null) { this.finishedAsyncActions = true; } else { @@ -277,15 +299,21 @@ private void maybeFinish() { } void run() { - // we either respond immediately ie. if we we don't fsync per request or wait for refresh - // OR we got an pass async operations on and wait for them to return to respond. - indexShard.maybeFlush(); - maybeFinish(); // decrement the pendingOpts by one, if there is nothing else to do we just respond with success. + /* + * We either respond immediately (i.e., if we do not fsync per request or wait for + * refresh), or we there are past async operations and we wait for them to return to + * respond. + */ + indexShard.afterWriteOperation(); + // decrement pending by one, if there is nothing else to do we just respond with success + maybeFinish(); if (waitUntilRefresh) { assert pendingOps.get() > 0; indexShard.addRefreshListener(location, forcedRefresh -> { if (forcedRefresh) { - logger.warn("block_until_refresh request ran out of slots and forced a refresh: [{}]", request); + logger.warn( + "block until refresh ran out of slots and forced a refresh: [{}]", + request); } refreshed.set(forcedRefresh); maybeFinish(); @@ -300,4 +328,55 @@ void run() { } } } + + /** + * A proxy for write operations that need to be performed on the + * replicas, where a failure to execute the operation should fail + * the replica shard and/or mark the replica as stale. + * + * This extends {@code TransportReplicationAction.ReplicasProxy} to do the + * failing and stale-ing. 
+ */ + class WriteActionReplicasProxy extends ReplicasProxy { + + @Override + public void failShardIfNeeded(ShardRouting replica, long primaryTerm, String message, Exception exception, + Runnable onSuccess, Consumer onPrimaryDemoted, Consumer onIgnoredFailure) { + + logger.warn((org.apache.logging.log4j.util.Supplier) + () -> new ParameterizedMessage("[{}] {}", replica.shardId(), message), exception); + shardStateAction.remoteShardFailed(replica.shardId(), replica.allocationId().getId(), primaryTerm, message, exception, + createListener(onSuccess, onPrimaryDemoted, onIgnoredFailure)); + } + + @Override + public void markShardCopyAsStaleIfNeeded(ShardId shardId, String allocationId, long primaryTerm, Runnable onSuccess, + Consumer onPrimaryDemoted, Consumer onIgnoredFailure) { + shardStateAction.remoteShardFailed(shardId, allocationId, primaryTerm, "mark copy as stale", null, + createListener(onSuccess, onPrimaryDemoted, onIgnoredFailure)); + } + + public ShardStateAction.Listener createListener(final Runnable onSuccess, final Consumer onPrimaryDemoted, + final Consumer onIgnoredFailure) { + return new ShardStateAction.Listener() { + @Override + public void onSuccess() { + onSuccess.run(); + } + + @Override + public void onFailure(Exception shardFailedError) { + if (shardFailedError instanceof ShardStateAction.NoLongerPrimaryShardException) { + onPrimaryDemoted.accept(shardFailedError); + } else { + // these can occur if the node is shutting down and are okay + // any other exception here is not expected and merits investigation + assert shardFailedError instanceof TransportException || + shardFailedError instanceof NodeClosedException : shardFailedError; + onIgnoredFailure.accept(shardFailedError); + } + } + }; + } + } } diff --git a/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksResponse.java b/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksResponse.java index b62cfd714bbf2..4ddbe541993e6 100644 --- a/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksResponse.java +++ b/core/src/main/java/org/elasticsearch/action/support/tasks/BaseTasksResponse.java @@ -81,13 +81,13 @@ public void rethrowFailures(String operationName) { public void readFrom(StreamInput in) throws IOException { super.readFrom(in); int size = in.readVInt(); - List taskFailures = new ArrayList<>(); + List taskFailures = new ArrayList<>(size); for (int i = 0; i < size; i++) { taskFailures.add(new TaskOperationFailure(in)); } size = in.readVInt(); this.taskFailures = Collections.unmodifiableList(taskFailures); - List nodeFailures = new ArrayList<>(); + List nodeFailures = new ArrayList<>(size); for (int i = 0; i < size; i++) { nodeFailures.add(new FailedNodeException(in)); } diff --git a/core/src/main/java/org/elasticsearch/action/support/tasks/TransportTasksAction.java b/core/src/main/java/org/elasticsearch/action/support/tasks/TransportTasksAction.java index ee384b819b025..35b2b41dfda6e 100644 --- a/core/src/main/java/org/elasticsearch/action/support/tasks/TransportTasksAction.java +++ b/core/src/main/java/org/elasticsearch/action/support/tasks/TransportTasksAction.java @@ -135,14 +135,14 @@ private void respondIfFinished() { } List results = new ArrayList<>(); List exceptions = new ArrayList<>(); - for (AtomicArray.Entry> response : responses.asList()) { - if (response.value.v1() == null) { - assert response.value.v2() != null; + for (Tuple response : responses.asList()) { + if (response.v1() == null) { + assert response.v2() != null; exceptions.add(new 
TaskOperationFailure(clusterService.localNode().getId(), tasks.get(taskIndex).getId(), - response.value.v2())); + response.v2())); } else { - assert response.value.v2() == null; - results.add(response.value.v1()); + assert response.v2() == null; + results.add(response.v1()); } } listener.onResponse(new NodeTasksResponse(clusterService.localNode().getId(), results, exceptions)); @@ -226,8 +226,6 @@ protected boolean transportCompress() { return false; } - protected abstract boolean accumulateExceptions(); - private class AsyncAction { private final TasksRequest request; @@ -278,7 +276,6 @@ private void start() { } else { NodeTaskRequest nodeRequest = new NodeTaskRequest(request); nodeRequest.setParentTask(clusterService.localNode().getId(), task.getId()); - taskManager.registerChildTask(task, node.getId()); transportService.sendRequest(node, transportNodeAction, nodeRequest, builder.build(), new TransportResponseHandler() { @Override @@ -322,9 +319,9 @@ private void onFailure(int idx, String nodeId, Throwable t) { (org.apache.logging.log4j.util.Supplier) () -> new ParameterizedMessage("failed to execute on node [{}]", nodeId), t); } - if (accumulateExceptions()) { - responses.set(idx, new FailedNodeException(nodeId, "Failed node [" + nodeId + "]", t)); - } + + responses.set(idx, new FailedNodeException(nodeId, "Failed node [" + nodeId + "]", t)); + if (counter.incrementAndGet() == responses.length()) { finishHim(); } @@ -403,10 +400,10 @@ private class NodeTasksResponse extends TransportResponse { protected List exceptions; protected List results; - public NodeTasksResponse() { + NodeTasksResponse() { } - public NodeTasksResponse(String nodeId, + NodeTasksResponse(String nodeId, List results, List exceptions) { this.nodeId = nodeId; diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsResponse.java b/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsResponse.java index 233d4b0c63884..8508c834a9f36 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsResponse.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/MultiTermVectorsResponse.java @@ -24,14 +24,14 @@ import org.elasticsearch.common.io.stream.StreamInput; import org.elasticsearch.common.io.stream.StreamOutput; import org.elasticsearch.common.io.stream.Streamable; -import org.elasticsearch.common.xcontent.ToXContent; +import org.elasticsearch.common.xcontent.ToXContentObject; import org.elasticsearch.common.xcontent.XContentBuilder; import java.io.IOException; import java.util.Arrays; import java.util.Iterator; -public class MultiTermVectorsResponse extends ActionResponse implements Iterable, ToXContent { +public class MultiTermVectorsResponse extends ActionResponse implements Iterable, ToXContentObject { /** * Represents a failure. 
@@ -124,6 +124,7 @@ public Iterator iterator() { @Override public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); builder.startArray(Fields.DOCS); for (MultiTermVectorsItemResponse response : responses) { if (response.isFailed()) { @@ -132,16 +133,15 @@ public XContentBuilder toXContent(XContentBuilder builder, Params params) throws builder.field(Fields._INDEX, failure.getIndex()); builder.field(Fields._TYPE, failure.getType()); builder.field(Fields._ID, failure.getId()); - ElasticsearchException.renderException(builder, params, failure.getCause()); + ElasticsearchException.generateFailureXContent(builder, params, failure.getCause(), true); builder.endObject(); } else { TermVectorsResponse getResponse = response.getResponse(); - builder.startObject(); getResponse.toXContent(builder, params); - builder.endObject(); } } builder.endArray(); + builder.endObject(); return builder; } diff --git a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsFields.java b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsFields.java index 534ef4164e236..71742b171348f 100644 --- a/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsFields.java +++ b/core/src/main/java/org/elasticsearch/action/termvectors/TermVectorsFields.java @@ -82,7 +82,7 @@ * If the field statistics were requested ({@code hasFieldStatistics} is true, * see {@code headerRef}), the following numbers are stored: *