
Config for hard cpu limit on pods; default unlimited #356

Merged 1 commit into apache-spark-on-k8s:branch-2.1-kubernetes on Jun 23, 2017

Conversation

@FANNG1 commented Jun 22, 2017

@@ -438,7 +443,7 @@ private[spark] class KubernetesClusterSchedulerBackend(
       .addToRequests("memory", executorMemoryQuantity)
       .addToLimits("memory", executorMemoryLimitQuantity)
       .addToRequests("cpu", executorCpuQuantity)
-      .addToLimits("cpu", executorCpuQuantity)
+      .addToLimits("cpu", executorCpuLimitQuantity)

@sandflee can you please change this PR so that if spark.kubernetes.executor.limit.cores is not set, then there is no .addToLimits("cpu", executorCpuLimitQuantity) call here?

From the discussion on the linked issue, I think we want the default to be no cpu limit, and to only add a limit if the user specifies one.
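A minimal sketch of that behavior, assuming the fabric8 kubernetes-client builders already used in this file (withOptionalCpuLimit is a hypothetical helper name, not part of this PR):

import io.fabric8.kubernetes.api.model.{ContainerBuilder, QuantityBuilder}

// Hypothetical helper: attach a hard cpu limit to the executor container
// only when spark.kubernetes.executor.limit.cores was actually set.
// Assumes the container's resources were already initialized by the
// .addToRequests calls above.
def withOptionalCpuLimit(
    builder: ContainerBuilder,
    limitCores: Option[String]): ContainerBuilder =
  limitCores match {
    case Some(limit) =>
      val executorCpuLimitQuantity = new QuantityBuilder(false)
        .withAmount(limit)
        .build()
      builder
        .editResources()
          .addToLimits("cpu", executorCpuLimitQuantity)
        .endResources()
    case None =>
      builder // default: leave the pod with no hard cpu limit
  }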

@ash211 commented Jun 22, 2017

Linking to #38 which is the umbrella issue for fully-generic customization of things like this via user-provided yaml files instead of incrementally adding additional options for every k8s feature.

@FANNG1 (Author) commented Jun 22, 2017

Patch updated; added spark.kubernetes.driver.limit.cores alongside spark.kubernetes.executor.limit.cores.

@ash211 left a comment

A couple of tiny changes and then this LGTM. Let's merge once you've fixed those bits.

@mccheah any thoughts?


private[spark] val KUBERNETES_EXECUTOR_LIMIT_CORES =
  ConfigBuilder("spark.kubernetes.executor.limit.cores")
    .doc("cpu limit for executor")

Maybe this should emphasize that it applies to a single executor, e.g. "hard cpu limit for a single executor".
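For instance, the doc string could read as below; the .stringConf / .createOptional tail follows the pattern the other keys in this file use (a sketch, not the final committed wording):

private[spark] val KUBERNETES_EXECUTOR_LIMIT_CORES =
  ConfigBuilder("spark.kubernetes.executor.limit.cores")
    .doc("Hard cpu limit for a single executor pod")
    .stringConf
    .createOptional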

@@ -64,6 +64,8 @@ private[spark] class Client(

   // CPU settings
   private val driverCpuCores = sparkConf.getOption("spark.driver.cores").getOrElse("1")
+  private val driverLimitCores = sparkConf.
+    getOption(KUBERNETES_DRIVER_LIMIT_CORES.key).getOrElse("")

Drop the .getOrElse("") and leave this value as an Option[String].
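That is, something like this sketch:

// Keep the unset case as None rather than "", so downstream code can
// skip adding a cpu limit entirely when the user did not configure one.
private val driverLimitCores: Option[String] =
  sparkConf.getOption(KUBERNETES_DRIVER_LIMIT_CORES.key)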

@@ -108,6 +108,8 @@ private[spark] class KubernetesClusterSchedulerBackend(
   private val executorMemoryWithOverhead = executorMemoryMb + memoryOverheadMb

   private val executorCores = conf.getOption("spark.executor.cores").getOrElse("1")
+  private val executorLimitCores = conf.
+    getOption(KUBERNETES_EXECUTOR_LIMIT_CORES.key).getOrElse("")

leave as Option[String]

          .editResources
            .addToLimits("cpu", executorCpuLimitQuantity)
          .endResources()
        .endContainer()

de-indent a bit

@ash211 ash211 changed the title add cpu limit to executor Config for hard cpu limit on pods; default unlimited Jun 22, 2017
<td><code>spark.kubernetes.driver.limit.cores</code></td>
<td>(none)</td>
<td>
specify the cpu limit for the driver pod

Looks like the rest of the docs for config keys all use capitalized full sentences. Can you do the same to be consistent?

<td><code>spark.kubernetes.executor.limit.cores</code></td>
<td>(none)</td>
<td>
specify the cpu limit for the executor pod

Ditto.
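For reference, these keys would be set like any other Spark configuration; a minimal sketch with illustrative values (not defaults from this PR):

import org.apache.spark.SparkConf

// Both keys are optional; when unset the pods get no hard cpu limit.
val conf = new SparkConf()
  .set("spark.kubernetes.driver.limit.cores", "1")
  .set("spark.kubernetes.executor.limit.cores", "2")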

@@ -485,6 +485,18 @@ package object config extends Logging {
       .stringConf
       .createOptional

+  private[spark] val KUBERNETES_DRIVER_LIMIT_CORES =
+    ConfigBuilder("spark.kubernetes.driver.limit.cores")
+      .doc("cpu limit for driver")

Ditto.


private[spark] val KUBERNETES_EXECUTOR_LIMIT_CORES =
  ConfigBuilder("spark.kubernetes.executor.limit.cores")
    .doc("cpu limit for executor")

Ditto.

@ash211 commented Jun 22, 2017

rerun integration test please

@ash211 commented Jun 23, 2017

Hi @sandflee, it looks like a recent commit caused a merge conflict in this PR. Can you please fix that and address the outstanding comments?

With those minor changes, I think this PR is ready to merge.

@ash211 ash211 merged commit 8b3248f into apache-spark-on-k8s:branch-2.1-kubernetes Jun 23, 2017
@ash211 commented Jun 23, 2017

Thanks a bunch @sandflee for the contribution! Looking forward to jobs feeling faster for everyone.
