[SPARK-3773][PySpark][Doc] Sphinx build warning
Remove all warnings on document building
cocoatomo committed Oct 4, 2014
1 parent cf1d32e commit 6f65661
Showing 6 changed files with 28 additions and 23 deletions.
7 changes: 0 additions & 7 deletions python/docs/modules.rst

This file was deleted.

1 change: 1 addition & 0 deletions python/pyspark/context.py
@@ -410,6 +410,7 @@ def sequenceFile(self, path, keyClass=None, valueClass=None, keyConverter=None,
 Read a Hadoop SequenceFile with arbitrary key and value Writable class from HDFS,
 a local file system (available on all nodes), or any Hadoop-supported file system URI.
 The mechanism is as follows:
+
 1. A Java RDD is created from the SequenceFile or other InputFormat, and the key
    and value Writable classes
 2. Serialization is attempted via Pyrolite pickling
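Aside: the one-line addition here is a blank line. reST treats a list that directly follows a paragraph as a continuation of it, and Sphinx warns about the unexpected indentation. A minimal sketch of the pattern being fixed (abridged, hypothetical function, not the exact file contents):

# Sketch only: reST needs a blank line between a paragraph and the
# enumerated list that follows it, or Sphinx emits a build warning.
def sequence_file_doc_example(path):
    """
    Read a Hadoop SequenceFile from any Hadoop-supported file system URI.
    The mechanism is as follows:

    1. A Java RDD is created from the SequenceFile or other InputFormat,
       and the key and value Writable classes
    2. Serialization is attempted via Pyrolite pickling
    """
    raise NotImplementedError  # illustration only; real logic lives in context.py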
26 changes: 16 additions & 10 deletions python/pyspark/mllib/classification.py
@@ -89,11 +89,14 @@ def train(cls, data, iterations=100, step=1.0, miniBatchFraction=1.0,
 @param regParam: The regularizer parameter (default: 1.0).
 @param regType: The type of regularizer used for training
                 our model.
-                Allowed values: "l1" for using L1Updater,
-                                "l2" for using
-                                    SquaredL2Updater,
-                                "none" for no regularizer.
-                (default: "none")
+
+                :Allowed values:
+                   - "l1" for using L1Updater,
+                   - "l2" for using SquaredL2Updater,
+                   - "none" for no regularizer.
+
+                   (default: "none")
+
 @param intercept: Boolean parameter which indicates the use
                   or not of the augmented representation for
                   training data (i.e. whether bias features
@@ -158,11 +161,14 @@ def train(cls, data, iterations=100, step=1.0, regParam=1.0,
 @param initialWeights: The initial weights (default: None).
 @param regType: The type of regularizer used for training
                 our model.
-                Allowed values: "l1" for using L1Updater,
-                                "l2" for using
-                                    SquaredL2Updater,
-                                "none" for no regularizer.
-                (default: "none")
+
+                :Allowed values:
+                   - "l1" for using L1Updater,
+                   - "l2" for using SquaredL2Updater,
+                   - "none" for no regularizer.
+
+                   (default: "none")
+
 @param intercept: Boolean parameter which indicates the use
                   or not of the augmented representation for
                   training data (i.e. whether bias features
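For reference, the regType parameter documented in these hunks is passed as a plain string. A minimal usage sketch, assuming a running SparkContext named sc as in the PySpark shell:

from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.regression import LabeledPoint

# Two trivially separable training points; `sc` is assumed to exist.
points = sc.parallelize([LabeledPoint(0.0, [0.0]),
                         LabeledPoint(1.0, [1.0])])

# regType takes one of the values listed above: "l1", "l2", or "none".
model = LogisticRegressionWithSGD.train(points, iterations=10, regType="l2")
print(model.predict([1.0]))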
15 changes: 9 additions & 6 deletions python/pyspark/mllib/regression.py
@@ -22,7 +22,7 @@
 from pyspark.mllib.linalg import SparseVector, _convert_to_vector
 from pyspark.serializers import PickleSerializer, AutoBatchedSerializer
 
-__all__ = ['LabeledPoint', 'LinearModel', 'LinearRegressionModel', 'RidgeRegressionModel'
+__all__ = ['LabeledPoint', 'LinearModel', 'LinearRegressionModel', 'RidgeRegressionModel',
            'LinearRegressionWithSGD', 'LassoWithSGD', 'RidgeRegressionWithSGD']
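The trailing comma added above is not cosmetic: without it, Python's implicit string-literal concatenation silently fuses the two adjacent names into one bogus __all__ entry, hiding a class from the public API. A quick illustration:

# Missing comma: adjacent string literals concatenate into one string.
broken = ['RidgeRegressionModel'
          'LinearRegressionWithSGD']
assert broken == ['RidgeRegressionModelLinearRegressionWithSGD']

# With the comma restored, both names are separate entries.
fixed = ['RidgeRegressionModel',
         'LinearRegressionWithSGD']
assert len(fixed) == 2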


@@ -152,11 +152,14 @@ def train(cls, data, iterations=100, step=1.0, miniBatchFraction=1.0,
 @param regParam: The regularizer parameter (default: 1.0).
 @param regType: The type of regularizer used for training
                 our model.
-                Allowed values: "l1" for using L1Updater,
-                                "l2" for using
-                                    SquaredL2Updater,
-                                "none" for no regularizer.
-                (default: "none")
+
+                :Allowed values:
+                   - "l1" for using L1Updater,
+                   - "l2" for using SquaredL2Updater,
+                   - "none" for no regularizer.
+
+                   (default: "none")
+
 @param intercept: Boolean parameter which indicates the use
                   or not of the augmented representation for
                   training data (i.e. whether bias features
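As with classification, the regression trainers accept the same regType strings. A short sketch, again assuming a running SparkContext named sc:

from pyspark.mllib.regression import LabeledPoint, LinearRegressionWithSGD

# Points on the line y = 2x; `sc` is assumed to exist.
points = sc.parallelize([LabeledPoint(0.0, [0.0]),
                         LabeledPoint(2.0, [1.0]),
                         LabeledPoint(4.0, [2.0])])

# "l1" selects L1Updater, per the list documented in the hunk above.
model = LinearRegressionWithSGD.train(points, iterations=100, step=0.1,
                                      regType="l1", regParam=0.01)
print(model.predict([3.0]))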
1 change: 1 addition & 0 deletions python/pyspark/mllib/tree.py
@@ -48,6 +48,7 @@ def __del__(self):
 def predict(self, x):
     """
     Predict the label of one or more examples.
+
     :param x: Data point (feature vector),
        or an RDD of data points (feature vectors).
     """
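The docstring being fixed here describes predict's two accepted input shapes. A sketch of both, assuming a running SparkContext sc and the 1.x-era DecisionTree.trainClassifier signature:

from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.tree import DecisionTree

# Tiny two-class training set; `sc` is assumed to exist.
points = sc.parallelize([LabeledPoint(0.0, [0.0]),
                         LabeledPoint(1.0, [1.0])])
model = DecisionTree.trainClassifier(points, numClasses=2,
                                     categoricalFeaturesInfo={})

# A single data point (feature vector) ...
print(model.predict([1.0]))
# ... or an RDD of data points (feature vectors).
print(model.predict(points.map(lambda p: p.features)).collect())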
1 change: 1 addition & 0 deletions python/pyspark/rdd.py
@@ -1208,6 +1208,7 @@ def saveAsSequenceFile(self, path, compressionCodecClass=None):
 Output a Python RDD of key-value pairs (of form C{RDD[(K, V)]}) to any Hadoop file
 system, using the L{org.apache.hadoop.io.Writable} types that we convert from the
 RDD's key and value types. The mechanism is as follows:
+
 1. Pyrolite is used to convert pickled Python RDD into RDD of Java objects.
 2. Keys and values of this Java RDD are converted to Writables and written out.
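The two-step mechanism described above runs under a single plain call. A minimal sketch, assuming a running SparkContext sc and a writable output path (the /tmp location is hypothetical):

# `sc` and the output path are assumptions for illustration.
pairs = sc.parallelize([(1, "a"), (2, "b")])

# Step 1: Pyrolite converts the pickled Python RDD into Java objects;
# step 2: keys and values become Writables (e.g. IntWritable/Text) on write.
pairs.saveAsSequenceFile("/tmp/sequence-file-demo")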
