The default value of "spark.locality.wait.process" is wrong. #1232

Closed · wants to merge 1 commit
docs/configuration.md (6 changes: 3 additions & 3 deletions)

@@ -343,23 +343,23 @@ Apart from these, the following properties are also available, and may be useful
 </tr>
 <tr>
   <td>spark.locality.wait.process</td>
-  <td>spark.locality.wait</td>
+  <td>3000</td>
   <td>
     Customize the locality wait for process locality. This affects tasks that attempt to access
     cached data in a particular executor process.
   </td>
 </tr>
 <tr>
   <td>spark.locality.wait.node</td>
-  <td>spark.locality.wait</td>
+  <td>3000</td>
   <td>
     Customize the locality wait for node locality. For example, you can set this to 0 to skip
     node locality and search immediately for rack locality (if your cluster has rack information).
   </td>
 </tr>
 <tr>
   <td>spark.locality.wait.rack</td>
-  <td>spark.locality.wait</td>
+  <td>3000</td>
   <td>
     Customize the locality wait for rack locality.
   </td>
 </tr>
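For context on what the table's original default expresses: the docs list the default of each per-level property as `spark.locality.wait` rather than a literal number because, when the per-level key is unset, Spark falls back to the global `spark.locality.wait` setting, which itself defaults to 3000 ms. Below is a minimal Scala sketch of that two-step lookup using the public `SparkConf` API; the helper `localityWaitFor` is illustrative only, not a Spark function.

```scala
import org.apache.spark.SparkConf

// Illustrative helper (not a Spark API): resolve the effective wait for one
// locality level the way the docs describe it -- the per-level property
// falls back to spark.locality.wait, which itself falls back to 3000 ms.
def localityWaitFor(conf: SparkConf, level: String): Long = {
  val defaultWait = conf.get("spark.locality.wait", "3000")
  conf.get(s"spark.locality.wait.$level", defaultWait).toLong
}

val conf = new SparkConf()
  .set("spark.locality.wait", "5000")   // global wait: 5 s
  .set("spark.locality.wait.node", "0") // skip node locality entirely

localityWaitFor(conf, "process") // 5000 -- inherited from spark.locality.wait
localityWaitFor(conf, "node")    // 0    -- explicit override
localityWaitFor(conf, "rack")    // 5000 -- inherited from spark.locality.wait
```

Under that reading, replacing `spark.locality.wait` with a literal `3000` in the table would hide the fallback relationship between the per-level properties and the global setting, which may be why this change was closed rather than merged.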