diff --git a/docs/quick-start.md b/docs/quick-start.md
index d34a4e60f3253..64023994771b7 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -336,7 +336,7 @@ As with the Scala example, we initialize a SparkContext, though we use the speci
 `JavaSparkContext` class to get a Java-friendly one. We also create RDDs
 (represented by `JavaRDD`) and run transformations on them. Finally, we pass functions to
 Spark by creating classes that extend `spark.api.java.function.Function`. The
-[Java programming guide](java-programming-guide.html) describes these differences in more detail.
+[Spark programming guide](programming-guide.html) describes these differences in more detail.
 
 To build the program, we also write a Maven `pom.xml` file that lists Spark as a dependency.
 Note that Spark artifacts are tagged with a Scala version.
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index 00ac1e2b875b9..3d02e010b3f3d 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -813,10 +813,8 @@ output operators are defined:
 The complete list of DStream operations is available in the API documentation. For the Scala API,
 see [DStream](api/scala/index.html#org.apache.spark.streaming.dstream.DStream)
 and [PairDStreamFunctions](api/scala/index.html#org.apache.spark.streaming.dstream.PairDStreamFunctions).
-For the Java API, see [JavaDStream](api/scala/index.html#org.apache.spark.streaming.api.java.dstream.DStream)
-and [JavaPairDStream](api/scala/index.html#org.apache.spark.streaming.api.java.JavaPairDStream).
-Specifically for the Java API, see [Spark's Java programming guide](java-programming-guide.html)
-for more information.
+For the Java API, see [JavaDStream](api/java/org/apache/spark/streaming/api/java/JavaDStream.html)
+and [JavaPairDStream](api/java/org/apache/spark/streaming/api/java/JavaPairDStream.html).
 
 ## Persistence
 Similar to RDDs, DStreams also allow developers to persist the stream's data in memory. That is,
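
For context on the first hunk, the Java API pattern the quick-start prose describes (a `JavaSparkContext`, a `JavaRDD`, and transformations passed as `Function` classes) looks roughly like the sketch below. This is a minimal illustration, not code from the patch: the class name, app name, and input path are assumptions, and the package names follow the post-1.0 `org.apache.spark.api.java` layout rather than the older `spark.api.java` prefix quoted in the context lines.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;

public class SimpleApp {
  public static void main(String[] args) {
    // JavaSparkContext is the Java-friendly wrapper around SparkContext.
    SparkConf conf = new SparkConf().setAppName("Simple Application");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Create a JavaRDD from a text file (path is illustrative).
    JavaRDD<String> lines = sc.textFile("README.md");

    // Pass a function to Spark by extending Function, as the guide describes.
    JavaRDD<Integer> lineLengths = lines.map(new Function<String, Integer>() {
      public Integer call(String s) { return s.length(); }
    });

    // Aggregate with a two-argument Function2.
    int total = lineLengths.reduce(new Function2<Integer, Integer, Integer>() {
      public Integer call(Integer a, Integer b) { return a + b; }
    });
    System.out.println("Total characters: " + total);

    sc.stop();
  }
}
```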
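Likewise, the DStream persistence that the second hunk's trailing context introduces can be sketched from the Java streaming API as follows. Again a minimal sketch under stated assumptions: the socket host, port, and one-second batch interval are illustrative, not part of the patch.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class PersistExample {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setAppName("PersistExample");
    // Batch interval of 1000 ms is an illustrative choice.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, new Duration(1000));

    // Stream text lines from a socket (host/port are assumptions).
    JavaDStream<String> lines = jssc.socketTextStream("localhost", 9999);

    // As with RDDs, persist() keeps each generated RDD of this stream in
    // memory, which helps when the same DStream feeds multiple operations.
    lines.persist();

    lines.print();
    jssc.start();
    jssc.awaitTermination();
  }
}
```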