[SPARK-6608] [SQL] Makes DataFrame.rdd a lazy val

Before 1.3.0, `SchemaRDD.id` served as a unique identifier for each `SchemaRDD`. In 1.3.0, unlike `SchemaRDD`, `DataFrame` is no longer an RDD, and `DataFrame.rdd` is a method that returns a new RDD instance on every call. Making `DataFrame.rdd` a lazy val brings the unique identifier back.
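
Concretely, repeated calls to `rdd` now return the same `RDD` instance, so `RDD.id` is once again a stable identifier. A minimal sketch of the new behavior (the `sqlContext` handle and the `"people"` table are assumed for illustration):

```scala
// Sketch: with rdd memoized as a lazy val, repeated calls return the
// same RDD instance and therefore the same RDD id.
val df = sqlContext.table("people")  // any DataFrame works; "people" is hypothetical
val r1 = df.rdd
val r2 = df.rdd
assert(r1 eq r2)        // same instance (false before this change)
assert(r1.id == r2.id)  // stable unique identifier, as with SchemaRDD.id
```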


Author: Cheng Lian <[email protected]>

Closes apache#5265 from liancheng/spark-6608 and squashes the following commits:

7500968 [Cheng Lian] Updates javadoc
7f37d21 [Cheng Lian] Makes DataFrame.rdd a lazy val
liancheng committed Apr 1, 2015
1 parent 0358b08 commit d36c5fc
Showing 1 changed file with 4 additions and 2 deletions.
sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala (6 changes: 4 additions & 2 deletions)

@@ -952,10 +952,12 @@ class DataFrame private[sql](
   /////////////////////////////////////////////////////////////////////////////
 
   /**
-   * Returns the content of the [[DataFrame]] as an [[RDD]] of [[Row]]s.
+   * Represents the content of the [[DataFrame]] as an [[RDD]] of [[Row]]s. Note that the RDD is
+   * memoized. Once called, it won't change even if you change any query planning related Spark SQL
+   * configurations (e.g. `spark.sql.shuffle.partitions`).
    * @group rdd
    */
-  def rdd: RDD[Row] = {
+  lazy val rdd: RDD[Row] = {
     // use a local variable to make sure the map closure doesn't capture the whole DataFrame
     val schema = this.schema
     queryExecution.executedPlan.execute().map(ScalaReflection.convertRowToScala(_, schema))
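
The memoization caveat in the updated javadoc can be illustrated with a short sketch (`df` and `sqlContext` are the hypothetical handles from the earlier example): query-planning settings changed after the first `rdd` call do not produce a new RDD.

```scala
// Sketch: the memoized RDD is planned once; later changes to planning
// configurations such as spark.sql.shuffle.partitions do not replan it.
val before = df.rdd
sqlContext.setConf("spark.sql.shuffle.partitions", "50")
val after = df.rdd
assert(before eq after)  // still the same memoized instance
```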
