SPARK-1330 removed extra echo from compute-classpath.sh
Remove the extra echo, which prevents spark-class from working. Note that I did not update the comment above it, which is also wrong, because I'm not sure what it should do.

Should Hive support only be included if it is explicitly built with sbt hive/assembly, or should sbt assembly build it as well?
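
For context (not part of the commit message): bin/spark-class builds its classpath by capturing the stdout of bin/compute-classpath.sh with command substitution, so any extra echo on stdout ends up inside the classpath string and breaks the resulting java invocation. A minimal, hypothetical sketch of that failure mode; the file name and paths below are illustrative, not taken from Spark:

# Helper script whose stdout is supposed to be exactly the classpath.
cat > /tmp/compute-classpath-demo.sh <<'EOF'
#!/usr/bin/env bash
echo "Hive assembly found, including hive support."   # stray diagnostic on stdout
echo "/opt/spark/conf:/opt/spark/assembly.jar"        # the actual classpath
EOF
chmod +x /tmp/compute-classpath-demo.sh

# The caller captures *all* of the script's stdout, so the diagnostic line is
# glued onto CLASSPATH and the java command line built from it is garbage.
CLASSPATH=$(/tmp/compute-classpath-demo.sh)
echo "CLASSPATH=$CLASSPATH"

# Diagnostics intended for a human should go to stderr instead:
#   echo "Hive assembly found, including hive support." >&2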

Author: Thomas Graves <[email protected]>

Closes apache#241 from tgravescs/SPARK-1330 and squashes the following commits:

b10d708 [Thomas Graves] SPARK-1330 removed extra echo from comput_classpath.sh
tgravescs committed Mar 27, 2014
1 parent 5b2d863 commit 426042a
Showing 1 changed file: bin/compute-classpath.sh (0 additions, 1 deletion).
@@ -36,7 +36,6 @@ CLASSPATH="$SPARK_CLASSPATH:$FWDIR/conf"
 # Hopefully we will find a way to avoid uber-jars entirely and deploy only the needed packages in
 # the future.
 if [ -f "$FWDIR"/sql/hive/target/scala-$SCALA_VERSION/spark-hive-assembly-*.jar ]; then
-  echo "Hive assembly found, including hive support. If this isn't desired run sbt hive/clean."
 
   # Datanucleus jars do not work if only included in the uberjar as plugin.xml metadata is lost.
   DATANUCLEUSJARS=$(JARS=("$FWDIR/lib_managed/jars"/datanucleus-*.jar); IFS=:; echo "${JARS[*]}")
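
Aside, not part of the change: the DATANUCLEUSJARS line kept in the context above joins a set of jar paths into a colon-separated classpath fragment, relying on the bash rule that "${arr[*]}" expands array elements separated by the first character of IFS. A small standalone sketch of the same idiom, with made-up jar names:

# Join array elements with ':' by setting IFS inside a command substitution.
JARS=(/tmp/demo-lib/datanucleus-core.jar /tmp/demo-lib/datanucleus-rdbms.jar)
JOINED=$(IFS=:; echo "${JARS[*]}")
echo "$JOINED"   # /tmp/demo-lib/datanucleus-core.jar:/tmp/demo-lib/datanucleus-rdbms.jar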
