diff --git a/docs/python-programming-guide.md b/docs/python-programming-guide.md
index 99395d66f58a3..0d389893b56a0 100644
--- a/docs/python-programming-guide.md
+++ b/docs/python-programming-guide.md
@@ -141,6 +141,10 @@ conf = (SparkConf()
 sc = SparkContext(conf = conf)
 {% endhighlight %}
 
+`spark-submit` supports launching Python applications on standalone, Mesos or YARN clusters, through
+its `--master` argument. However, it currently requires the Python driver program to run on the local
+machine, not the cluster (i.e. the `--deploy-mode` parameter cannot be `cluster`).
+
 # SequenceFile and Hadoop InputFormats
 
 In addition to reading text files, PySpark supports reading Hadoop SequenceFile and arbitrary InputFormats.
@@ -214,11 +218,6 @@ Future support for 'wrapper' functions for keys/values that allows this to be wr
 and called from Python, as well as support for writing data out as SequenceFile format and other
 OutputFormats, is forthcoming.
 
-`spark-submit` supports launching Python applications on standalone, Mesos or YARN clusters, through
-its `--master` argument. However, it currently requires the Python driver program to run on the local
-machine, not the cluster (i.e. the `--deploy-mode` parameter cannot be `cluster`).
-
-
 # API Docs
 
 [API documentation](api/python/index.html) for PySpark is available as Epydoc.
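
For reference, a minimal sketch of the workflow the relocated paragraph describes: a small PySpark driver launched from the local machine in client mode via `spark-submit --master`. The script name, application name, and master URL below are placeholders, not values taken from the patch.

{% highlight python %}
# example_app.py -- a minimal PySpark driver (placeholder name)
#
# Launched from the local machine in client mode, for example:
#   ./bin/spark-submit --master spark://host:7077 example_app.py
# (the paragraph above notes that --deploy-mode cluster is not
#  currently supported for Python applications)

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("ExampleApp")   # master URL is supplied by spark-submit
sc = SparkContext(conf=conf)

# A trivial job so the sketch runs end to end.
even_count = sc.parallelize(range(100)).filter(lambda x: x % 2 == 0).count()
print(even_count)

sc.stop()
{% endhighlight %}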