From 0e128bbc25259981d558ad0807bfdec3ea9550d4 Mon Sep 17 00:00:00 2001
From: Kan Zhang
Date: Sat, 14 Jun 2014 11:55:56 -0700
Subject: [PATCH] [SPARK-2013] minor update

---
 docs/programming-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index 2beb0a949db96..ef0c0e34301f3 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -381,7 +381,7 @@ Apart from text files, Spark's Python API also supports several other data forma
 
 * `SparkContext.wholeTextFiles` lets you read a directory containing multiple small text files, and returns each of them as (filename, content) pairs. This is in contrast with `textFile`, which would return one record per line in each file.
 
-* `RDD.saveAsPickleFile` and `SparkContext.pickleFile` support saving and reading an RDD in a simple format consisting of pickled Python objects. Batching is used on pickle serialization, with default batch size 10.
+* `RDD.saveAsPickleFile` and `SparkContext.pickleFile` support saving an RDD in a simple format consisting of pickled Python objects. Batching is used on pickle serialization, with default batch size 10.
 
 * Details on reading `SequenceFile` and arbitrary Hadoop `InputFormat` are given below.
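
For reference, a minimal PySpark sketch of the round trip documented by the hunk above. It assumes an existing SparkContext named `sc`; the output path `/tmp/pickle-demo` is illustrative and not part of the patch:

    # Save an RDD as a file of pickled Python objects. Objects are
    # pickled in batches (default batch size 10) before being written.
    rdd = sc.parallelize(range(100))
    rdd.saveAsPickleFile("/tmp/pickle-demo")

    # Read the pickled RDD back with SparkContext.pickleFile.
    restored = sc.pickleFile("/tmp/pickle-demo")
    assert sorted(restored.collect()) == list(range(100))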