This repository has been archived by the owner on Oct 23, 2024. It is now read-only.

[SPARK-23941][MESOS] Mesos task failed on specific spark app name #33

Merged
merged 5 commits into custom-branch-2.2.1-X from shell-escaping on Aug 30, 2018

Conversation


@samvantran samvantran commented Jun 22, 2018

Port from SPARK#21014

** edit: not a direct port from upstream Spark. Changes were needed because we saw PySpark jobs fail to launch when 1) run with docker and 2) including --py-files

What changes were proposed in this pull request?

Shell-escaped the name passed to spark-submit and changed how conf attributes are shell-escaped.
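
For illustration only, an escaping helper in that spirit might look like the sketch below. This is a hedged approximation, not the exact code in this PR or in apache#21014: values the user already wrapped in quotes are left alone, values containing shell metacharacters are wrapped in double quotes with the characters that stay special inside double quotes backslash-escaped, and harmless strings pass through unchanged.

```scala
// Hedged sketch of a shell-escaping helper; object and method names are illustrative.
object ShellEscapeSketch {
  def shellEscape(value: String): String = {
    val wrappedInQuotes = """^(".+"|'.+')$""".r
    val shellSpecialChars = """.*([ '<>&|\?\*;!#\\(\)"$]).*""".r
    value match {
      // the user quoted the value themselves; don't touch it
      case wrappedInQuotes(_) => value
      // quote the value and escape the characters that remain special inside double quotes
      case shellSpecialChars(_) =>
        "\"" + value.replaceAll("""(["`\$\\])""", """\\$1""") + "\""
      // harmless string, no quoting needed
      case _ => value
    }
  }

  def main(args: Array[String]): Unit = {
    // an app name like this previously broke the generated spark-submit command line
    println(shellEscape("Spark Pi (test run)"))  // -> "Spark Pi (test run)"
    println(shellEscape("sparkPi"))              // -> sparkPi
  }
}
```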

How was this patch tested?

This patch was tested manually with Hive-on-Spark on Mesos and with the use case described in the issue: the SparkPi application launched with a custom name containing illegal shell characters.

With this PR, Hive-on-Spark on Mesos works like a charm with Hive 3.0.0-SNAPSHOT.

I state that this contribution is my original work and that I license the work to the project under the project’s open source license

Author: Bounkong Khamphousone [email protected]

Closes apache#21014 from tiboun/fix/SPARK-23941.

@samvantran samvantran requested a review from susanxhuynh June 22, 2018 20:12
println("L554: options -> printing all options after key=value set")
println("================================================================")

val opts = options.map(shellEscape)
Can we add a test for this? Also please remove the printlns

Author
Was just a test, planning to delete the last commit

Author
Doesn't look like Apache/Spark caught the issue for --py-files. Change+test added in 9f110fd

Specifically, we do not want to shell-escape the --py-files. What we've
seen IRL is that for spark jobs that use docker images coupled w/ python
files, the $MESOS_SANDBOX path is escaped and results in
FileNotFoundErrors during SparkSession.getOrCreate
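
To make the intent above concrete, here is a hedged, self-contained sketch (hypothetical names and signature, not the actual MesosClusterScheduler code) of how escaping can be applied selectively: values interpolated into the generated spark-submit command line are escaped, while --py-files entries are passed through untouched so sandbox paths like $MESOS_SANDBOX/job.py still resolve inside the Docker container.

```scala
// Hedged sketch; buildSubmitOptions is an illustrative name, not Spark code.
object SubmitOptionsSketch {
  // `escape` is passed in so the sketch stays independent of any particular
  // shell-escaping implementation.
  def buildSubmitOptions(
      appName: String,
      mainClass: String,
      driverConf: Map[String, String],
      pyFiles: Seq[String],
      escape: String => String): Seq[String] = {
    val confOpts = driverConf.toSeq.flatMap { case (k, v) =>
      // conf values end up on the shell command line, so they are escaped
      Seq("--conf", s"$k=${escape(v)}")
    }
    // --py-files is deliberately NOT escaped: escaping would mangle
    // $MESOS_SANDBOX paths and cause FileNotFoundErrors at startup
    val pyFileOpts =
      if (pyFiles.nonEmpty) Seq("--py-files", pyFiles.mkString(",")) else Nil

    Seq("--name", escape(appName), "--class", escape(mainClass)) ++
      confOpts ++ pyFileOpts
  }
}
```
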
@susanxhuynh

@samvantran You might want to consider adding better unit test coverage of this code to prevent regressions in the future. (maybe in the upstream PR?) For example, (1) the case where the app name requires shell escaping and (2) the driverExtraJavaOptions with spaces. Any future changes to this code should handle these cases, in addition to the --pyfiles one that you did add a unit test for.

@samvantran
Author

Good point. I added a few more tests to this PR in ef3e328 but once we release, I'll create a Spark JIRA and PR to see if we can merge these upstream.
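
For reference, regression tests along the lines suggested above might look roughly like the sketch below. It is a hedged example using ScalaTest 3.x and the illustrative ShellEscapeSketch helper sketched earlier in this thread, not the actual suite added in ef3e328.

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hedged sketch of regression tests; exercises the illustrative ShellEscapeSketch
// helper from earlier in this thread, not Spark's real MesosClusterScheduler.
class ShellEscapeSketchSuite extends AnyFunSuite {
  import ShellEscapeSketch.shellEscape

  test("app name containing shell metacharacters gets quoted") {
    assert(shellEscape("app name; with (specials)") ===
      "\"app name; with (specials)\"")
  }

  test("driverExtraJavaOptions with spaces survive as a single quoted argument") {
    assert(shellEscape("-Dfoo=bar -Dbaz=qux") === "\"-Dfoo=bar -Dbaz=qux\"")
  }

  test("harmless names are left untouched") {
    assert(shellEscape("sparkPi") === "sparkPi")
  }
}
```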

@samvantran samvantran merged commit a1d7e7a into custom-branch-2.2.1-X Aug 30, 2018
@samvantran samvantran deleted the shell-escaping branch August 30, 2018 17:27
akirillov pushed a commit that referenced this pull request Oct 22, 2018
* [SPARK-23941][MESOS] Mesos task failed on specific spark app name

Port from SPARK#21014

** edit: not a direct port from upstream Spark. Changes were needed because we saw PySpark jobs fail to launch when 1) run with docker and 2) including --py-files

==============

* Shell escape only appName, mainClass, default and driverConf

Specifically, we do not want to shell-escape the --py-files. What we've
seen IRL is that for spark jobs that use docker images coupled w/ python
files, the $MESOS_SANDBOX path is escaped and results in
FileNotFoundErrors during py4j.SparkSession.getOrCreate
akirillov pushed a commit that referenced this pull request Feb 8, 2019
Igosuki added a commit to Adikteev/spark that referenced this pull request Feb 19, 2019
alembiewski pushed a commit that referenced this pull request Jun 12, 2019
alembiewski added a commit that referenced this pull request Aug 19, 2019
* Support for DSCOS_SERVICE_ACCOUNT_CREDENTIAL environment variable in MesosClusterScheduler

* File Based Secrets support

* [SPARK-723][SPARK-740] Add Metrics to Dispatcher and Driver

- Counters: The total number of times that submissions have entered states
- Timers: The duration from submit or launch until a submission entered a given state
- Histogram: The retry counts at time of retry

* Fixes to handling finished drivers

- Rename 'failed' case to 'exception'
- When a driver is 'finished', record its final MesosTaskState
- Fix naming consistency after seeing how they look in practice

* Register "finished" counters up-front

Otherwise their values are never published.

* [SPARK-692] Added spark.mesos.executor.gpus to specify the number of Executor GPUs

* [SPARK-23941][MESOS] Mesos task failed on specific spark app name (#33)

* [SPARK-23941][MESOS] Mesos task failed on specific spark app name

Port from SPARK#21014

** edit: not a direct port from upstream Spark. Changes were needed because we saw PySpark jobs fail to launch when 1) run with docker and 2) including --py-files

==============

* Shell escape only appName, mainClass, default and driverConf

Specifically, we do not want to shell-escape the --py-files. What we've
seen IRL is that for spark jobs that use docker images coupled w/ python
files, the $MESOS_SANDBOX path is escaped and results in
FileNotFoundErrors during py4j.SparkSession.getOrCreate

* [DCOS-39150][SPARK] Support unique Executor IDs in cluster managers (#36)

Using incremental integers as Executor IDs leads to a situation where Spark Executors launched by different Drivers have the same IDs, which in turn means the Mesos Task IDs for multiple Spark Executors are the same too. This PR prepends a UUID, unique per CoarseGrainedSchedulerBackend instance, to the numeric ID, making it possible to distinguish Executors belonging to different Drivers.

This PR reverts commit ebe3c7f "[SPARK-12864][YARN] initialize executorIdCounter after ApplicationMaster killed for max n…)"

* Upgrade of Hadoop, ZooKeeper, and Jackson libraries to fix CVEs. Updates for JSON-related tests. (#43)

List of upgrades for 3rd-party libraries having CVEs:

- Hadoop: 2.7.3 -> 2.7.7. Fixes: CVE-2016-6811, CVE-2017-3166, CVE-2017-3162, CVE-2018-8009
- Jackson 2.6.5 -> 2.9.6. Fixes: CVE-2017-15095, CVE-2017-17485, CVE-2017-7525, CVE-2018-7489, CVE-2016-3720
- ZooKeeper 3.4.6 -> 3.4.13 (https://zookeeper.apache.org/doc/r3.4.13/releasenotes.html)

# Conflicts:
#	dev/deps/spark-deps-hadoop-2.6
#	dev/deps/spark-deps-hadoop-2.7
#	dev/deps/spark-deps-hadoop-3.1
#	pom.xml

* CNI Support for Docker containerizer, binding to SPARK_LOCAL_IP instead of 0.0.0.0 to properly advertise executors during shuffle (#44)

* Spark Dispatcher support for launching applications in the same virtual network by default (#45)

* [DCOS-46585] Fix supervised driver retry logic for outdated tasks (#46)

This commit fixes a bug where `--supervised` drivers would relaunch after receiving an outdated status update from a restarted/crashed agent even if they had already been relaunched and were running elsewhere. In those scenarios, the previous logic would leave two identical jobs running while ZK state only had a record of the latest one, effectively orphaning the first job.

* Revert "[SPARK-25088][CORE][MESOS][DOCS] Update Rest Server docs & defaults."

This reverts commit 1024875.

The change introduced in the reverted commit is breaking:
- breaks semantics of `spark.master.rest.enabled` which belongs to Spark Standalone Master only but not to SparkSubmit
- reverts the default behavior for Spark Standalone from REST to legacy RPC
- contains misleading messages in `require` assertion blocks
- prevents users from running jobs without specifying `spark.master.rest.enabled`

* [DCOS-49020] Specify user in CommandInfo for Spark Driver launched on Mesos (#49)

* [DCOS-40974] Mesos checkpointing support for Spark Drivers (#51)

* [DCOS-51158] Improved Task ID assignment for Executor tasks (#52)

* [DCOS-51454] Remove irrelevant Mesos REPL test (#54)

* [DCOS-51453] Added Hadoop 2.9 profile (#53)

* [DCOS-34235] spark.mesos.executor.memoryOverhead equivalent for the Driver when running on Mesos (#55)

* Refactoring of metrics naming to add mesos semantics and avoid clashing with existing Spark metrics (#58)

* [DCOS-34549] Mesos label NPE fix (#60)
rpalaznik pushed a commit that referenced this pull request Feb 24, 2020
farhan5900 pushed a commit that referenced this pull request Aug 7, 2020