[SPARK-3378] [DOCS] Replace the word "SparkSQL" with right word "Spark SQL"

Author: Kousuke Saruta <[email protected]>

Closes apache#2251 from sarutak/SPARK-3378 and squashes the following commits:

0bfe234 [Kousuke Saruta] Merge branch 'master' of git://git.apache.org/spark into SPARK-3378
bb5938f [Kousuke Saruta] Replaced rest of "SparkSQL" with "Spark SQL"
6df66de [Kousuke Saruta] Replaced "SparkSQL" with "Spark SQL"
sarutak authored and marmbrus committed Sep 4, 2014
1 parent 4feb46c commit dc1ba9e
Showing 6 changed files with 8 additions and 8 deletions.
2 changes: 1 addition & 1 deletion dev/run-tests
@@ -89,7 +89,7 @@ echo "========================================================================="
echo "Running Spark unit tests"
echo "========================================================================="

-# Build Spark; we always build with Hive because the PySpark SparkSQL tests need it.
+# Build Spark; we always build with Hive because the PySpark Spark SQL tests need it.
# echo "q" is needed because sbt on encountering a build file with failure
# (either resolution or compilation) prompts the user for input either q, r,
# etc to quit or retry. This echo is there to make it not block.
2 changes: 1 addition & 1 deletion docs/programming-guide.md
@@ -385,7 +385,7 @@ Apart from text files, Spark's Python API also supports several other data forma

* SequenceFile and Hadoop Input/Output Formats

-**Note** this feature is currently marked ```Experimental``` and is intended for advanced users. It may be replaced in future with read/write support based on SparkSQL, in which case SparkSQL is the preferred approach.
+**Note** this feature is currently marked ```Experimental``` and is intended for advanced users. It may be replaced in future with read/write support based on Spark SQL, in which case Spark SQL is the preferred approach.

**Writable Support**

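For readers unfamiliar with the feature the note above refers to: PySpark's Experimental SequenceFile support is exposed on the SparkContext and on pair RDDs. The sketch below is illustrative only (the output path and sample data are invented) and is not part of this commit.

```python
# Illustrative sketch of the Experimental SequenceFile support mentioned in the
# note above; the output path and sample data are invented for this example.
from pyspark import SparkContext

sc = SparkContext(appName="sequencefile-sketch")

# Save an RDD of (key, value) pairs as a Hadoop SequenceFile.
pairs = sc.parallelize([(1, "a"), (2, "b"), (3, "c")])
pairs.saveAsSequenceFile("/tmp/kv-seq")

# Read it back; keys and values are converted to Python objects via Writables.
loaded = sc.sequenceFile("/tmp/kv-seq")
print(loaded.collect())

sc.stop()
```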
6 changes: 3 additions & 3 deletions python/pyspark/sql.py
@@ -900,7 +900,7 @@ def __reduce__(self):

class SQLContext:

"""Main entry point for SparkSQL functionality.
"""Main entry point for Spark SQL functionality.
A SQLContext can be used create L{SchemaRDD}s, register L{SchemaRDD}s as
tables, execute SQL over tables, cache tables, and read parquet files.
@@ -946,7 +946,7 @@ def __init__(self, sparkContext, sqlContext=None):

@property
def _ssql_ctx(self):
"""Accessor for the JVM SparkSQL context.
"""Accessor for the JVM Spark SQL context.
Subclasses can override this property to provide their own
JVM Contexts.
@@ -1507,7 +1507,7 @@ class SchemaRDD(RDD):
"""An RDD of L{Row} objects that has an associated schema.
The underlying JVM object is a SchemaRDD, not a PythonRDD, so we can
-utilize the relational query api exposed by SparkSQL.
+utilize the relational query api exposed by Spark SQL.
For normal L{pyspark.rdd.RDD} operations (map, count, etc.) the
L{SchemaRDD} is not operated on directly, as it's underlying
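The docstrings edited above belong to the main Python entry point for Spark SQL. A minimal usage sketch, assuming the PySpark API of this era (inferSchema, registerTempTable) and an invented table name with invented sample rows:

```python
# Minimal sketch of the SQLContext / SchemaRDD workflow described in the
# docstrings above; the table name "people" and the sample rows are invented.
from pyspark import SparkContext
from pyspark.sql import SQLContext, Row

sc = SparkContext(appName="sql-sketch")
sqlContext = SQLContext(sc)

# Build a SchemaRDD from an RDD of Row objects and register it as a table.
rows = sc.parallelize([Row(name="Alice", age=30), Row(name="Bob", age=15)])
people = sqlContext.inferSchema(rows)
people.registerTempTable("people")

# Execute SQL over the registered table; the result is another SchemaRDD.
adults = sqlContext.sql("SELECT name FROM people WHERE age >= 18")
print(adults.collect())

sc.stop()
```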
2 changes: 1 addition & 1 deletion python/run-tests
@@ -28,7 +28,7 @@ FAILED=0

rm -f unit-tests.log

-# Remove the metastore and warehouse directory created by the HiveContext tests in SparkSQL
+# Remove the metastore and warehouse directory created by the HiveContext tests in Spark SQL
rm -rf metastore warehouse

function run_test() {
@@ -25,7 +25,7 @@ import scala.math.BigDecimal
import org.apache.spark.sql.catalyst.expressions.{Row => ScalaRow}

/**
-* A result row from a SparkSQL query.
+* A result row from a Spark SQL query.
*/
class Row(private[spark] val row: ScalaRow) extends Serializable {

@@ -26,7 +26,7 @@ import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector
import org.apache.hadoop.io.Writable

/**
-* A placeholder that allows SparkSQL users to create metastore tables that are stored as
+* A placeholder that allows Spark SQL users to create metastore tables that are stored as
* parquet files. It is only intended to pass the checks that the serde is valid and exists
* when a CREATE TABLE is run. The actual work of decoding will be done by ParquetTableScan
* when "spark.sql.hive.convertMetastoreParquet" is set to true.
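The comment edited above concerns the "spark.sql.hive.convertMetastoreParquet" flag. A hypothetical sketch of how a user might exercise it from PySpark follows; the table name "logs" is invented, and issuing SET through HiveContext.sql() is an assumption for illustration, not something this commit shows.

```python
# Hypothetical sketch: with spark.sql.hive.convertMetastoreParquet enabled,
# reads of a Parquet-backed metastore table are handled by ParquetTableScan
# instead of the placeholder serde. The table name "logs" is invented, and
# setting the flag via a SET statement is an assumption for illustration.
from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="hive-parquet-sketch")
hiveContext = HiveContext(sc)

hiveContext.sql("SET spark.sql.hive.convertMetastoreParquet=true")
result = hiveContext.sql("SELECT COUNT(*) FROM logs")  # "logs" assumed to exist
print(result.collect())

sc.stop()
```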
