diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index 21c81c508e16e..20d81164e6aac 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -354,6 +354,27 @@ spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.readOnly=fals
For a complete list of available options for each supported type of volumes, please refer to the [Spark Properties](#spark-properties) section below.
+### PVC-oriented executor pod allocation
+
+Since disks are one of the important resource types, the Spark driver provides fine-grained control
+over them via a set of configurations. For example, by default, on-demand PVCs are owned by executors,
+and the lifecycle of those PVCs is tightly coupled with their owner executors.
+However, on-demand PVCs can instead be owned by the driver and reused by other executors during the Spark
+job's lifetime with the following options. This reduces the overhead of PVC creation and deletion.
+
+```
+spark.kubernetes.driver.ownPersistentVolumeClaim=true
+spark.kubernetes.driver.reusePersistentVolumeClaim=true
+```
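+
+For example, a minimal sketch that combines an on-demand PVC with these options (the volume name
+`data`, the storage class `gp3`, the size, and the mount path below are illustrative placeholders):
+
+```
+# illustrative on-demand PVC settings; volume name, storage class, and size are placeholders
+spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.claimName=OnDemand
+spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.storageClass=gp3
+spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.sizeLimit=500Gi
+spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.path=/data
+spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.readOnly=false
+# let the driver own the on-demand PVCs and reuse them across executors
+spark.kubernetes.driver.ownPersistentVolumeClaim=true
+spark.kubernetes.driver.reusePersistentVolumeClaim=true
+```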
+
+In addition, since Spark 3.4, the Spark driver is able to do PVC-oriented executor allocation, which means
+Spark counts the total number of created PVCs against the maximum number the job can have, and holds off
+new executor creation if the driver already owns the maximum number of PVCs. This helps an existing PVC
+transition from one executor to another.
+
+```
+spark.kubernetes.driver.waitToReusePersistentVolumeClaim=true
+```
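+
+With dynamic allocation, the upper bound on the number of on-demand PVCs follows the executor cap, so a
+sketch of a combined configuration might look like the following (assuming a cap of 8 executors; shuffle
+tracking is required for dynamic allocation on Kubernetes, which has no external shuffle service):
+
+```
+# assumption: the job is capped at 8 executors, and hence at most 8 on-demand PVCs
+spark.dynamicAllocation.enabled=true
+spark.dynamicAllocation.shuffleTracking.enabled=true
+spark.dynamicAllocation.maxExecutors=8
+spark.kubernetes.driver.ownPersistentVolumeClaim=true
+spark.kubernetes.driver.reusePersistentVolumeClaim=true
+spark.kubernetes.driver.waitToReusePersistentVolumeClaim=true
+```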
+
## Local Storage
Spark supports using volumes to spill data during shuffles and other operations. To use a volume as local storage, the volume's name should start with `spark-local-dir-`, for example:
@@ -1475,6 +1496,18 @@ See the [configuration page](configuration.html) for information on Spark config
    <td>3.2.0</td>
  </tr>
+  <tr>
+    <td><code>spark.kubernetes.driver.waitToReusePersistentVolumeClaim</code></td>
+    <td><code>false</code></td>
+    <td>
+      If <code>true</code>, the driver pod counts the number of created on-demand persistent volume claims
+      and waits if the number is greater than or equal to the total number of volumes which
+      the Spark job is able to have. This config requires both
+      <code>spark.kubernetes.driver.ownPersistentVolumeClaim=true</code> and
+      <code>spark.kubernetes.driver.reusePersistentVolumeClaim=true</code>.
+    </td>
+    <td>3.4.0</td>
+  </tr>
  <tr>
    <td><code>spark.kubernetes.executor.disableConfigMap</code></td>
    <td><code>false</code></td>
diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
index ddb846916259b..fa4904b930215 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
@@ -21,7 +21,7 @@ import java.util.concurrent.TimeUnit
import org.apache.spark.deploy.k8s.Constants._
import org.apache.spark.internal.Logging
-import org.apache.spark.internal.config.{ConfigBuilder, DYN_ALLOCATION_MAX_EXECUTORS, EXECUTOR_INSTANCES, PYSPARK_DRIVER_PYTHON, PYSPARK_PYTHON}
+import org.apache.spark.internal.config.{ConfigBuilder, PYSPARK_DRIVER_PYTHON, PYSPARK_PYTHON}
private[spark] object Config extends Logging {
@@ -101,12 +101,11 @@ private[spark] object Config extends Logging {
.createWithDefault(true)
val KUBERNETES_DRIVER_WAIT_TO_REUSE_PVC =
- ConfigBuilder("spark.kubernetes.driver.waitToReusePersistentVolumeClaims")
+ ConfigBuilder("spark.kubernetes.driver.waitToReusePersistentVolumeClaim")
.doc("If true, driver pod counts the number of created on-demand persistent volume claims " +
- s"and wait if the number is greater than or equal to the maximum which is " +
- s"${EXECUTOR_INSTANCES.key} or ${DYN_ALLOCATION_MAX_EXECUTORS.key}. " +
- s"This config requires both ${KUBERNETES_DRIVER_OWN_PVC.key}=true and " +
- s"${KUBERNETES_DRIVER_REUSE_PVC.key}=true.")
+ "and wait if the number is greater than or equal to the total number of volumes which " +
+ "the Spark job is able to have. This config requires both " +
+ s"${KUBERNETES_DRIVER_OWN_PVC.key}=true and ${KUBERNETES_DRIVER_REUSE_PVC.key}=true.")
.version("3.4.0")
.booleanConf
.createWithDefault(false)