[SPARK-3890][Docs]remove redundant spark.executor.memory in doc
Introduced in pwendell@f7e79bc; I'm not sure why we need two spark.executor.memory entries here.

Author: WangTaoTheTonic <[email protected]>
Author: WangTao <[email protected]>

Closes apache#2745 from WangTaoTheTonic/redundantconfig and squashes the following commits:

e7564dc [WangTao] too long line
fdbdb1f [WangTaoTheTonic] trivial workaround
d06b6e5 [WangTaoTheTonic] remove redundant spark.executor.memory in doc
WangTaoTheTonic authored and andrewor14 committed Oct 17, 2014
1 parent 642b246 commit e7f4ea8
Showing 1 changed file with 4 additions and 12 deletions.
docs/configuration.md: 4 additions & 12 deletions
@@ -161,14 +161,6 @@ Apart from these, the following properties are also available, and may be useful
#### Runtime Environment
<table class="table">
<tr><th>Property Name</th><th>Default</th><th>Meaning</th></tr>
-<tr>
-<td><code>spark.executor.memory</code></td>
-<td>512m</td>
-<td>
-Amount of memory to use per executor process, in the same format as JVM memory strings
-(e.g. <code>512m</code>, <code>2g</code>).
-</td>
-</tr>
<tr>
<td><code>spark.executor.extraJavaOptions</code></td>
<td>(none)</td>
@@ -365,7 +357,7 @@ Apart from these, the following properties are also available, and may be useful
<td><code>spark.ui.port</code></td>
<td>4040</td>
<td>
-Port for your application's dashboard, which shows memory and workload data
+Port for your application's dashboard, which shows memory and workload data.
</td>
</tr>
<tr>
@@ -880,8 +872,8 @@ Apart from these, the following properties are also available, and may be useful
<td><code>spark.scheduler.revive.interval</code></td>
<td>1000</td>
<td>
-The interval length for the scheduler to revive the worker resource offers to run tasks.
-(in milliseconds)
+The interval length for the scheduler to revive the worker resource offers to run tasks
+(in milliseconds).
</td>
</tr>
</tr>
@@ -893,7 +885,7 @@ Apart from these, the following properties are also available, and may be useful
to wait for before scheduling begins. Specified as a double between 0 and 1.
Regardless of whether the minimum ratio of resources has been reached,
the maximum amount of time it will wait before scheduling begins is controlled by config
-<code>spark.scheduler.maxRegisteredResourcesWaitingTime</code>
+<code>spark.scheduler.maxRegisteredResourcesWaitingTime</code>.
</td>
</tr>
<tr>

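For context, a minimal sketch of how the properties documented above are typically set in application code. The commit only removes the duplicate documentation entry; the spark.executor.memory setting itself is unchanged. The app name, master URL, and the 2g value below are illustrative placeholders, not part of this change.

    import org.apache.spark.{SparkConf, SparkContext}

    // spark.executor.memory still accepts JVM memory strings (e.g. 512m, 2g);
    // only its redundant entry in docs/configuration.md was removed.
    val conf = new SparkConf()
      .setAppName("config-example")                    // placeholder app name
      .setMaster("local[*]")                           // placeholder master URL
      .set("spark.executor.memory", "2g")              // JVM memory-string format
      .set("spark.scheduler.revive.interval", "1000")  // milliseconds, per the reworded entry above

    val sc = new SparkContext(conf)

The same properties can also be supplied on the command line via spark-submit's --conf flag instead of hard-coding them in the application.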