[SPARK-1395] Fix "local:" URI support in Yarn mode (again). #560
Conversation
Can one of the admins verify this patch?
Just rebased on top of master. No changes.
I rebased and cleaned up the code some more. I think it's in pretty good shape now, and the tests are much better. Tested on yarn client / cluster, with and without extra jar dependencies.
I was just talking about the log message in yarn/stable/src/main/scala/org/apache/spark/deploy/yarn/ExecutorRunnable.scala. The rest is shared, if I understand correctly.
Ah, that guy. Sorry, my bad. :-) It made my life easier debugging things, but not sure how many people actively develop Spark on top of yarn-alpha. Wouldn't hurt to add it, though.
val LOCAL_SCHEME = "local"
val CONF_SPARK_JAR = "spark.yarn.jar"
Can you document the config?
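For reference, the new config would be used along these lines (a sketch; the jar path is a placeholder, not something this patch prescribes):

```
# Sketch: point spark.yarn.jar at an assembly that is already installed
# on every node, so the "local:" scheme skips uploading it to the cluster.
# The path below is a placeholder.
cat >> conf/spark-defaults.conf <<'EOF'
spark.yarn.jar local:/opt/spark/lib/spark-assembly.jar
EOF
```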
I haven't gotten all the way through this patch, but please explain what you have done with the log4j stuff. At a quick glance it looks like you just removed it all? Perhaps I'm missing something, though. What about the case where I don't want to specify one and want the default one (log4j-spark-container.properties) to be used? (added in https://issues.apache.org/jira/browse/SPARK-1252) Also note we can't just remove the way it is done now, as it breaks backwards compatibility.
Hi Tom, I just removed all the log4j-related code. You can achieve both previously available use cases with spark-submit arguments:
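For example (a sketch; file paths and the application jar name are placeholders):

```
# Use case 1 (sketch): upload a custom log4j config. YARN places files
# passed via --files in each container's working directory, which is on
# the classpath, so log4j picks up a file named log4j.properties.
./bin/spark-submit --master yarn-cluster \
  --files /path/to/log4j.properties \
  your-app.jar

# Use case 2 (sketch): use a config that already exists on every node,
# by pointing the executor JVMs at it directly (placeholder path again).
echo 'spark.executor.extraJavaOptions -Dlog4j.configuration=file:/etc/spark/log4j.properties' \
  >> conf/spark-defaults.conf
./bin/spark-submit --master yarn-cluster your-app.jar
```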
Note that since local: is broken in 1.0.0, the second one above cannot be achieved by using SPARK_LOG4J_CONF in that release. As for backwards compatibility, SPARK_LOG4J_CONF doesn't seem to be documented anywhere, and the tricks above work fine with 1.0.0. I guess I could add the content of SPARK_LOG4J_CONF to the list of copied files if you think that's important.
Unfortunately I guess our documentation has very little on logging, but that doesn't mean that isn't how it worked. That is the way it worked in 0.9 and it still worked in 1.0, so I don't want to break that method and would like it added back. Hopefully it's not too painful to just add it to the list of files to copy, as you mention. I would also like to see the documentation updated to make it clear to users how to do it. If you would like to do that here that would be fine, or we can file a separate JIRA.
@vanzin Can you merge the latest code?
@witgo: on it, will retest and update soon. I'll also take a closer look at your related change to make sure I got everything right.
Hi, does anybody have any more feedback about this?
// If we are being launched in client mode, forward the spark-conf options
// onto the executor launcher
for ((k, v) <- sparkConf.getAll) {
  javaOpts += "-D" + k + "=" + "\\\"" + v + "\\\""
}
Was this change necessary? If not, I would prefer to split this off into a separate JIRA. If it was, please explain why.
Note that this change breaks backwards compatibility with being able to set SPARK_JAVA_OPTS to include configs. I would rather not change it in this PR, as it will take more investigation and testing to make sure it doesn't break anything.
Can you elaborate? SparkConf will include all the system properties that start with "spark", which is exactly what the previous code was doing when reading system properties directly.
I'll re-test without this fix (my tests were failing without it, but after reading the rest of the code I'm not so sure it was because of this). In general, though, I think this is a cleaner way of doing it, since it treats both cases the same.
export SPARK_JAVA_OPTS="-Dspark...=foo". This needs to work for backwards compatibility, but it doesn't with this code change, because SparkConf throws an error saying you aren't supposed to do this.
I agree it is cleaner the way you have it, but it's going to require more investigation and a bunch of testing to make sure it works properly, and I would rather have it split off into another JIRA/PR. There are already a lot of things in this PR.
So here's what's happening. prepareLocalResources() propagates some settings (previously CONF_SPARK_YARN_SECONDARY_JARS, and now the two new settings I'm adding) using SparkConf.set. In cluster mode, createContainerLaunchContext() propagates only things that are set as system properties, so those config options are not provided to the executors. So if your executor depends on something that is in those options, it will fail.
I'll double-check whether SPARK_JAVA_OPTS works as before and make adjustments if necessary, but the change is needed for things to work correctly.
@vanzin I'm not sure how your change fixes still being able to use SPARK_JAVA_OPTS to set configs:
Caused by: java.lang.Exception: spark.executor.extraJavaOptions is not allowed to set Spark options (was '-Dspark.authentication=true'). Set them directly on a SparkConf or in a properties file when using ./bin/spark-submit.
Hi @tgravescs. When I tested, I used a non-"spark." property set via SPARK_JAVA_OPTS.
I then ran this (against the master branch) in yarn-cluster mode:
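A sketch of what the elided commands likely looked like (the property name and jar path are hypothetical):

```
# A JVM property that does not start with "spark.", set via the env
# variable (property name is hypothetical).
export SPARK_JAVA_OPTS="-Dmy.test.property=foo"

# Run SparkPi in yarn-cluster mode.
./bin/spark-submit --master yarn-cluster \
  --class org.apache.spark.examples.SparkPi \
  spark-examples.jar 10   # jar path is a placeholder
```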
The driver launches but the executors die.
So I'm not sure SPARK_JAVA_OPTS is really working on master at all. Another thing I tried was to use just … I can look at fixing the problem you mention, but it looks like SPARK_JAVA_OPTS has some questionable behaviour even without my changes.
So the idea is that you can set spark configs via SPARK_JAVA_OPTS in order to be backwards compatible with 0.9. Before SparkConf (conf/spark-defaults.conf) and spark-submit, the only way to specify spark configs was either through SPARK_JAVA_OPTS on the command line or in spark-env.sh. So that is what I've been looking at. My example, which makes it pretty easy to see whether this works, is to set the configs, then run one of the Spark examples (SparkPi). Hopefully people using spark-submit will have converted to using configs, so testing with spark-class is more important for those that haven't converted. With this change it fails with the error I listed; without this change it works, and you can view the logs and see that the configs take effect.
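A sketch of that procedure (the config values and jar path are placeholders):

```
# Set spark.* configs the 0.9-era way, via the env variable.
export SPARK_JAVA_OPTS="-Dspark.authenticate=true -Dspark.authenticate.secret=secret"

# Run SparkPi through spark-submit...
./bin/spark-submit --master yarn-cluster \
  --class org.apache.spark.examples.SparkPi \
  spark-examples.jar 10   # jar path is a placeholder

# ...and through the older spark-class entry point, for users who have
# not converted to spark-submit yet.
./bin/spark-class org.apache.spark.deploy.yarn.Client \
  --jar spark-examples.jar \
  --class org.apache.spark.examples.SparkPi
```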
Your example works, but only if you just want to propagate the property to the driver, and not the executors. Without being familiar with how things worked in 0.9, I'm a little sceptical that it's the full correct behaviour (since I'd expect SPARK_JAVA_OPTS to show up on both). Anyway, I can make changes so that the use case you mention is covered.
What config is not showing up for you on the executor? The above example works fine for both driver and executor for me.
On the master branch, when set using SPARK_JAVA_OPTS: anything that does not start with "spark." does not show up in the driver. Nothing at all shows up in executors.
So the example I gave above is on the master branch, and the configs I set show up on both driver and executors. I'm not concerned with configs that don't start with "spark.", as those aren't spark configs. When you say it doesn't show up on executors, how are you checking? You don't see the logs in the executors that I listed? I ran on both 0.23 and 2.4 clusters, so I'm wondering why it's not working for you. Note that most configs get converted into SparkConf and sent to the executor via akka when it registers, so they won't show up in the process command line with -Ds or via getProperty. The security settings are special, as they are needed before the registration happens. Again, I'm only concerned with actual spark configs (spark.*), and I'm only concerned about the spark framework properly reading them. I'm not concerned with application code reading them. Another example you can use is spark.akka.logAkkaConfig: make sure that the akka settings get logged at the beginning of the executor processes.
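For instance (a sketch):

```
# A config whose effect is easy to spot: when enabled, akka dumps its
# settings at the start of the driver and executor logs.
export SPARK_JAVA_OPTS="-Dspark.akka.logAkkaConfig=true"
```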
@tgravescs I posted my test code a few comments above. I'd expect "SPARK_JAVA_OPTS" to be just that: JVM options, not SparkConf options. So the properties I'm setting via JVM options are not showing up in the executors.
Recent changes ignored the fact that paths may be defined with "local:" URIs, which means they need to be explicitly added to the classpath everywhere a remote process is started. This change fixes that by:

- Using the correct methods to add paths to the classpath
- Creating SparkConf settings for the Spark jar itself and for the user's jar
- Propagating those two settings to the remote processes where needed

This ensures that both in client and in cluster mode, the driver has the necessary info to build the executor's classpath, and things still work when they contain "local:" references.

On the cleanup front, I removed the hacky way that log4j configuration was being propagated to handle the "local:" case. It's much more cleanly (and generically) handled by using spark-submit arguments (--files to upload a config file, or setting spark.executor.extraJavaOptions to pass JVM arguments and use a local file).
Also add documentation about logging to the Yarn guide. The change modifies some code added in fb98488 to treat client and cluster modes as mostly the same. Previously, cluster mode only forwarded system properties that started with "spark", which caused it to ignore anything that SparkSubmit sets directly in the SparkConf object.
ClientBase disagreed with itself about how to propagate config options. Some places used the SparkConf object, others relied on system properties. This led to certain properties not being propagated, mainly in cluster mode. So standardize on using SparkConf for that. To maintain compatibility with SPARK_JAVA_OPTS, remove the hack in ClientBase and just call SparkConf.validateSettings(), which does the right thing and correctly warns the user to stop using the env variable in the future.
Users expected it to be possible to set spark.* config options using SPARK_JAVA_OPTS, but that's not possible when trying to propagate the env variable using spark.*.extraJavaOptions. So instead, in Yarn mode, propagate the env variable itself. Also make sure that, in cluster mode, the warning about SPARK_JAVA_OPTS being deprecated is printed to the logs.
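The preferred, non-deprecated route is a properties file (a sketch; the file name, config value, and jar path are placeholders):

```
# Preferred: put spark.* settings in a properties file instead of
# SPARK_JAVA_OPTS (file name is a placeholder).
cat > my-spark.conf <<'EOF'
spark.authenticate true
EOF
./bin/spark-submit --properties-file my-spark.conf --master yarn-cluster \
  --class org.apache.spark.examples.SparkPi \
  spark-examples.jar 10   # jar path is a placeholder
```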
Rebased and fixed the SPARK_JAVA_OPTS issues. @tgravescs, let me know if that addresses your concerns. Thanks!
BTW I also tested "-Dspark.akka.logAkkaConfig=true" aside from my previous test case, and saw the akka logs in both driver and executor.
+1 looks good, thanks @vanzin.
Recent changes ignored the fact that paths may be defined with "local:" URIs, which means they need to be explicitly added to the classpath everywhere a remote process is started. This change fixes that by:

- Using the correct methods to add paths to the classpath
- Creating SparkConf settings for the Spark jar itself and for the user's jar
- Propagating those two settings to the remote processes where needed

This ensures that both in client and in cluster mode, the driver has the necessary info to build the executor's classpath, and things still work when they contain "local:" references.

The change also fixes some confusion in ClientBase about whether to use SparkConf or system properties to propagate config options to the driver and executors, by standardizing on the data held by SparkConf.

On the cleanup front, I removed the hacky way that log4j configuration was being propagated to handle the "local:" case. It's much more cleanly (and generically) handled by using spark-submit arguments (--files to upload a config file, or setting spark.executor.extraJavaOptions to pass JVM arguments and use a local file).

Author: Marcelo Vanzin <[email protected]>

Closes apache#560 from vanzin/yarn-local-2 and squashes the following commits:

4e7f066 [Marcelo Vanzin] Correctly propagate SPARK_JAVA_OPTS to driver/executor.
6a454ea [Marcelo Vanzin] Use constants for PWD in test.
6dd5943 [Marcelo Vanzin] Fix propagation of config options to driver / executor.
b2e377f [Marcelo Vanzin] Review feedback.
93c3f85 [Marcelo Vanzin] Fix ClassCastException in test.
e5c682d [Marcelo Vanzin] Fix cluster mode, restore SPARK_LOG4J_CONF.
1dfbb40 [Marcelo Vanzin] Add documentation for spark.yarn.jar.
bbdce05 [Marcelo Vanzin] [SPARK-1395] Fix "local:" URI support in Yarn mode (again).
Since its usage was removed in #560, log4j-spark-container.properties was never used again. I searched for its name globally in the code and found no references.

Author: WangTaoTheTonic <[email protected]>

Closes #2977 from WangTaoTheTonic/delLog4j and squashes the following commits:

fb2729f [WangTaoTheTonic] delete the obsolete log4j file
Support hbase classpath computation in util script. Adding external conf and scripts. Enable SPARK_HIVE mode while building. This is needed to bundle datanucleus jars needed for hive table creation. Build Spark on MapR. - make-distribution.sh takes an environment variable to enable profiles - MVN_PROFILE_ARG - Added warden conf files under ext-conf. - Updated pom.xml to use right set of jars and version. Spark Master failed to start in HA mode Updated Apache Curator version Added spark streaming integration with kafka 0.9 and mapr-streams Added MapR Repo
Recent changes ignored the fact that paths may be defined with "local:"
URIs, which means they need to be explicitly added to the classpath
everywhere a remote process is started. This change fixes that by:

- Using the correct methods to add paths to the classpath
- Creating SparkConf settings for the Spark jar itself and for the
  user's jar
This ensures that both in client and in cluster mode, the driver has
the necessary info to build the executor's classpath and have things
still work when they contain "local:" references.
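To illustrate the idea, here is a minimal sketch of the scheme check involved. This is not the actual ClientBase code, and the jar paths are made up; the point is just that a "local:" URI names a file assumed to already exist on every node, so its path goes straight onto the container's classpath instead of being uploaded and distributed:

```scala
import java.net.URI

object LocalUriSketch {
  val LocalScheme = "local"

  // Returns the literal classpath entry for a "local:" URI, or None when the
  // file would instead have to be distributed to the cluster.
  def localClasspathEntry(uri: String): Option[String] = {
    val parsed = new URI(uri)
    if (parsed.getScheme == LocalScheme) Option(parsed.getPath) else None
  }

  def main(args: Array[String]): Unit = {
    println(localClasspathEntry("local:/opt/spark/lib/spark-assembly.jar")) // Some(/opt/spark/lib/spark-assembly.jar)
    println(localClasspathEntry("hdfs:///user/me/app.jar"))                 // None: needs distribution
  }
}
```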
The change also fixes some confusion in ClientBase about whether
to use SparkConf or system properties to propagate config options to
the driver and executors, by standardizing on using data held by
SparkConf.
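With everything held by SparkConf, pointing at an assembly jar pre-installed on the nodes becomes an ordinary config entry. A small sketch (the local: path is hypothetical):

```scala
import org.apache.spark.SparkConf

object ConfSketch extends App {
  // "spark.yarn.jar" is the setting this change introduces for the Spark jar
  // location; the path below would have to exist on every node.
  val conf = new SparkConf(loadDefaults = false)
    .set("spark.yarn.jar", "local:/opt/spark/lib/spark-assembly.jar")
  println(conf.get("spark.yarn.jar"))
}
```

The same value could equally come from spark-defaults.conf, rather than a system property read at some arbitrary point.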
On the cleanup front, I removed the hacky way that log4j configuration
was being propagated to handle the "local:" case. It's much more cleanly
(and generically) handled by using spark-submit arguments (--files to
upload a config file, or setting spark.executor.extraJavaOptions to pass
JVM arguments and use a local file).
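Concretely, the two replacement approaches look roughly like this (paths are hypothetical):

```
# 1) Ship a config file with the job; it is placed in each container's
#    working directory, which is on the classpath, so log4j finds it by name:
spark-submit --files /path/to/log4j.properties ...

# 2) Or reuse a file already installed on every node, via a JVM option
#    (e.g. in spark-defaults.conf):
spark.executor.extraJavaOptions  -Dlog4j.configuration=file:/etc/spark/log4j.properties
```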