[SPARK-24418][Build] Upgrade Scala to 2.11.12 and 2.12.6 #21495
Diff in `repl/src/main/scala/org/apache/spark/repl/SparkILoopInterpreter.scala`:

```diff
@@ -21,8 +21,22 @@ import scala.collection.mutable
 import scala.tools.nsc.Settings
 import scala.tools.nsc.interpreter._

-class SparkILoopInterpreter(settings: Settings, out: JPrintWriter) extends IMain(settings, out) {
-  self =>
+class SparkILoopInterpreter(settings: Settings, out: JPrintWriter, initializeSpark: () => Unit)
+  extends IMain(settings, out) { self =>
+
+  /**
+   * We override `initializeSynchronous` to initialize Spark *after* `intp` is properly initialized
+   * and *before* the REPL sees any files in the private `loadInitFiles` functions, so that
+   * the Spark context is visible in those files.
+   *
+   * This is a bit of a hack, but there isn't another hook available to us at this point.
+   *
+   * See the discussion in the Scala community, https://github.com/scala/bug/issues/10913, for detail.
+   */
+  override def initializeSynchronous(): Unit = {
+    super.initializeSynchronous()
+    initializeSpark()
```
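The override above can be illustrated with a minimal, dependency-free sketch of the hook pattern. `BaseInterpreter` and `HookedInterpreter` are placeholder names standing in for `IMain` and `SparkILoopInterpreter` (the real nsc classes are not modeled here); the point is only the ordering guarantee: `super.initializeSynchronous()` completes before the caller-supplied callback runs.

```scala
import scala.collection.mutable

// Sketch only: BaseInterpreter stands in for scala.tools.nsc's IMain.
class BaseInterpreter {
  protected val events = mutable.Buffer[String]()
  def initializeSynchronous(): Unit = events += "base-init"
  def initLog: List[String] = events.toList
}

// Mirrors SparkILoopInterpreter: run the supplied callback only after
// the base interpreter initialization has completed.
class HookedInterpreter(initializeSpark: () => Unit) extends BaseInterpreter {
  override def initializeSynchronous(): Unit = {
    super.initializeSynchronous() // interpreter setup first
    initializeSpark()             // then make the Spark context visible
  }
}

object HookDemo extends App {
  var sparkReady = false
  val intp = new HookedInterpreter(() => sparkReady = true)
  intp.initializeSynchronous()
  println(intp.initLog) // List(base-init)
  println(sparkReady)   // true
}
```

Because the callback fires inside `initializeSynchronous`, anything the REPL does afterwards (such as the private `loadInitFiles`) already sees the initialized state.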
Review comment (on the `initializeSpark()` line): @som-snytt It's working, but I'm wondering if I'm doing it correctly. In this case, I'll use:

```scala
if (intp.reporter.hasErrors) {
  echo("Interpreter encountered errors during initialization!")
  null
}
```

Review comment: For my environment:

```
~/PR-21495:PR-21495$ bin/spark-shell
18/06/07 23:39:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.0-SNAPSHOT
      /_/

Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_171)
Type in expressions to have them evaluated.
Type :help for more information.

scala> Spark context Web UI available at http://localhost:4040
Spark context available as 'sc' (master = local[*], app id = local-1528414746558).
Spark session available as 'spark'.
Exception in thread "main" java.lang.NoSuchMethodError: jline.console.completer.CandidateListCompletionHandler.setPrintSpaceAfterFullCompletion(Z)V
	at scala.tools.nsc.interpreter.jline.JLineConsoleReader.initCompletion(JLineReader.scala:139)
	at scala.tools.nsc.interpreter.jline.InteractiveReader.postInit(JLineReader.scala:54)
```

Review comment: It is working for me. I got …

Review comment: Yep. Of course, it's a clean clone and build of this PR. According to the error message and the following, we need …

Review comment: Can we upgrade to the corresponding …?

Review comment (@som-snytt): Completion was upgraded since the old days; also, other bugs required updating jline. There is interest in upgrading to jline 3.

Review comment: Thank you for confirming, @som-snytt.

Review comment: Oh, I know why it works for me. I just ran the following without building a distribution:

```
./build/sbt clean package
./bin/spark-shell
```
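The `NoSuchMethodError` above is the classic symptom of a jline binary mismatch: the Scala 2.11.12 REPL calls `CandidateListCompletionHandler.setPrintSpaceAfterFullCompletion`, which an older jline jar on the distribution classpath does not provide. A hedged sketch of the kind of Maven alignment the thread is pointing at follows; the property name is an assumption, and the correct version is whichever jline the upgraded `scala-compiler` depends on (deliberately not guessed here):

```xml
<!-- Hypothetical pom.xml fragment: pin jline to the version the upgraded
     scala-compiler artifact depends on, so the REPL classpath does not mix
     an old jline jar with the new Scala 2.11.12 interpreter classes. -->
<dependency>
  <groupId>jline</groupId>
  <artifactId>jline</artifactId>
  <version>${jline.version}</version>
</dependency>
```

This also explains the "works for me" confusion: an sbt dev build (`./build/sbt clean package; ./bin/spark-shell`) can resolve a different jline than a packaged distribution does.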
The diff continues:

```diff
+  }

   override lazy val memberHandlers = new {
     val intp: self.type = self
```
Review comment (on the `extends IMain(settings, out) { self =>` line): nit: two spaces

Review comment: I thought for `extends`, it's four spaces?

Review comment: IIRC, four spaces is OK.

Review comment: It's definitely two spaces after a period. I've been wanting to make that joke, but held off.

Review comment: Guys, parameters are listed with 4-space indentation, and other keywords after that are lined up with 2 spaces, as written in https://github.com/databricks/scala-style-guide#spacing-and-indentation. The two-line case is not explicitly written down, but wouldn't we do better to stick to the example as much as we can? "Use 2-space indentation in general."

Review comment: Well, there's a bunch of code in Spark where both 4-space and 2-space indentation exist. I don't think it's worth sticking on this tiny point. If you want to fix it, create a separate PR to fix all of them.

Review comment: We can fix them in place. There are many other style nits too, and it's not worth sweeping through all of them and making the backport harder. I am fine with ignoring such nits, and they don't block this PR, but that doesn't mean the opposite position is okay.

Review comment: Really not worth blocking on this (if it is the only issue).
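For reference, the convention the linked style guide describes can be sketched as follows. The class and parameter names are placeholders, not code from this PR; the snippet only illustrates the indentation rule under discussion: 4 spaces for wrapped constructor parameters, 2 spaces for continuation keywords such as `extends`.

```scala
// 4-space indentation for wrapped constructor parameters;
// 2-space indentation for the `extends` continuation line.
class ExampleInterpreter(
    settings: Map[String, String],
    verbose: Boolean)
  extends Runnable { self =>
  override def run(): Unit =
    if (verbose) println(s"settings: $settings")
}
```

Under that reading, the two-space `extends IMain(settings, out) { self =>` in the diff matches the guide, while four spaces would read as another parameter line.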