Keep frames in JavaDoc links, and other small tweaks
mateiz committed May 28, 2014
1 parent 1bf4112 commit ef671d4
Showing 5 changed files with 75 additions and 37 deletions.
23 changes: 20 additions & 3 deletions docs/js/api-docs.js
@@ -1,10 +1,27 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

/* Dynamically injected post-processing code for the API docs */

$(document).ready(function() {
var annotations = $("dt:contains('Annotations')").next("dd").children("span.name");
addBadges(annotations, "AlphaComponent", ":: AlphaComponent ::", "<span class='alphaComponent badge'>Alpha Component</span>");
addBadges(annotations, "DeveloperApi", ":: DeveloperApi ::", "<span class='developer badge'>Developer API</span>");
addBadges(annotations, "Experimental", ":: Experimental ::", "<span class='experimental badge'>Experimental</span>");
addBadges(annotations, "AlphaComponent", ":: AlphaComponent ::", '<span class="alphaComponent badge">Alpha Component</span>');
addBadges(annotations, "DeveloperApi", ":: DeveloperApi ::", '<span class="developer badge">Developer API</span>');
addBadges(annotations, "Experimental", ":: Experimental ::", '<span class="experimental badge">Experimental</span>');
});

function addBadges(allAnnotations, name, tag, html) {
21 changes: 21 additions & 0 deletions docs/js/main.js
@@ -1,3 +1,23 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

/* Custom JavaScript code in the MarkDown docs */

// Enable language-specific code tabs
function codeTabs() {
var counter = 0;
var langImages = {
@@ -62,6 +82,7 @@ function makeCollapsable(elt, accordionClass, accordionBodyId, title) {
);
}

// Enable "view solution" sections (for exercises)
function viewSolution() {
var counter = 0
$("div.solution").each(function() {
4 changes: 2 additions & 2 deletions docs/mllib-guide.md
@@ -84,9 +84,9 @@ val vector: Vector = Vectors.dense(array) // a dense vector
<div data-lang="java" markdown="1">

We used to represent a feature vector by `double[]`, which is replaced by
-[`Vector`](api/scala/index.html#org.apache.spark.mllib.linalg.Vector) in v1.0. Algorithms that used
+[`Vector`](api/java/index.html?org/apache/spark/mllib/linalg/Vector.html) in v1.0. Algorithms that used
to accept `RDD<double[]>` now take
-`RDD<Vector>`. [`LabeledPoint`](api/scala/index.html#org.apache.spark.mllib.regression.LabeledPoint)
+`RDD<Vector>`. [`LabeledPoint`](api/java/index.html?org/apache/spark/mllib/regression/LabeledPoint.html)
is now a wrapper of `(double, Vector)` instead of `(double, double[])`. Converting `double[]` to
`Vector` is straightforward:
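
A minimal sketch of the conversion (assuming the `Vectors` factory in `org.apache.spark.mllib.linalg`, with `Vector` from the same package):

{% highlight java %}
// Hedged sketch: wrap an existing double[] in an MLlib Vector via the Vectors factory.
double[] array = {1.0, 0.0, 3.0};
Vector vector = Vectors.dense(array); // dense vector backed by the array
{% endhighlight %}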

30 changes: 15 additions & 15 deletions docs/programming-guide.md
@@ -55,7 +55,7 @@ import org.apache.spark.SparkConf
Spark {{site.SPARK_VERSION}} works with Java 6 and higher. If you are using Java 8, Spark supports
[lambda expressions](http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html)
for concisely writing functions, otherwise you can use the classes in the
-[org.apache.spark.api.java.function](api/java/org/apache/spark/api/java/function/package-summary.html) package.
+[org.apache.spark.api.java.function](api/java/index.html?org/apache/spark/api/java/function/package-summary.html) package.

To write a Spark application in Java, you need to add a dependency on Spark. Spark is available through Maven Central at:

@@ -126,8 +126,8 @@ new SparkContext(conf)

<div data-lang="java" markdown="1">

-The first thing a Spark program must do is to create a [JavaSparkContext](api/java/org/apache/spark/api/java/JavaSparkContext.html) object, which tells Spark
-how to access a cluster. To create a `SparkContext` you first need to build a [SparkConf](api/java/org/apache/spark/SparkConf.html) object
+The first thing a Spark program must do is to create a [JavaSparkContext](api/java/index.html?org/apache/spark/api/java/JavaSparkContext.html) object, which tells Spark
+how to access a cluster. To create a `SparkContext` you first need to build a [SparkConf](api/java/index.html?org/apache/spark/SparkConf.html) object
that contains information about your application.

{% highlight java %}
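// A minimal sketch, not the elided snippet from the original file: build a SparkConf,
// then a JavaSparkContext from it. The app name and master URL below are placeholders.
SparkConf conf = new SparkConf().setAppName("MyApp").setMaster("local[2]");
JavaSparkContext sc = new JavaSparkContext(conf);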
@@ -265,7 +265,7 @@ We describe operations on distributed datasets later on.

**Note:** *In this guide, we'll often use the concise Java 8 lambda syntax to specify Java functions, but
in older versions of Java you can implement the interfaces in the
-[org.apache.spark.api.java.function](api/java/org/apache/spark/api/java/function/package-summary.html) package.
+[org.apache.spark.api.java.function](api/java/index.html?org/apache/spark/api/java/function/package-summary.html) package.
We describe [passing functions to Spark](#passing-functions-to-spark) in more detail below.*
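
As a hedged illustration (not part of the original guide text), here is the same `map` call written both ways, assuming an existing `JavaRDD<String>` named `lines`:

{% highlight java %}
// Java 8 lambda syntax
JavaRDD<Integer> lengths = lines.map(s -> s.length());

// Equivalent anonymous class implementing org.apache.spark.api.java.function.Function
JavaRDD<Integer> lengthsOld = lines.map(new Function<String, Integer>() {
  public Integer call(String s) { return s.length(); }
});
{% endhighlight %}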

</div>
@@ -546,7 +546,7 @@ def doStuff(rdd: RDD[String]): RDD[String] = {

Spark's API relies heavily on passing functions in the driver program to run on the cluster.
In Java, functions are represented by classes implementing the interfaces in the
-[org.apache.spark.api.java.function](api/java/org/apache/spark/api/java/function/package-summary.html) package.
+[org.apache.spark.api.java.function](api/java/index.html?org/apache/spark/api/java/function/package-summary.html) package.
There are two ways to create such functions:

* Implement the Function interfaces in your own class, either as an anonymous inner class or a named one,
@@ -697,7 +697,7 @@ from the Scala standard library. You can simply call `new Tuple2(a, b)` to creat
its fields later with `tuple._1()` and `tuple._2()`.

RDDs of key-value pairs are represented by the
-[JavaPairRDD](api/java/org/apache/spark/api/java/JavaPairRDD.html) class. You can construct
+[JavaPairRDD](api/java/index.html?org/apache/spark/api/java/JavaPairRDD.html) class. You can construct
JavaPairRDDs from JavaRDDs using special versions of the `map` operations, like
`mapToPair` and `flatMapToPair`. The JavaPairRDD will have both standard RDD functions and special
key-value ones.
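
For example (a hedged sketch, assuming an existing `JavaRDD<String>` named `lines`, Java 8 lambdas, and `scala.Tuple2` on the classpath):

{% highlight java %}
// Build (line, 1) pairs with mapToPair, then sum the counts per key with reduceByKey.
JavaPairRDD<String, Integer> pairs = lines.mapToPair(s -> new Tuple2<String, Integer>(s, 1));
JavaPairRDD<String, Integer> counts = pairs.reduceByKey((a, b) -> a + b);
{% endhighlight %}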
@@ -749,11 +749,11 @@ We could also use `counts.sortByKey()`, for example, to sort the pairs alphabeti
The following table lists some of the common transformations supported by Spark. Refer to the
RDD API doc
([Scala](api/scala/index.html#org.apache.spark.rdd.RDD),
-[Java](api/java/org/apache/spark/api/java/JavaRDD.html),
+[Java](api/java/index.html?org/apache/spark/api/java/JavaRDD.html),
[Python](api/python/pyspark.rdd.RDD-class.html))
and pair RDD functions doc
([Scala](api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions),
-[Java](api/java/org/apache/spark/api/java/JavaPairRDD.html))
+[Java](api/java/index.html?org/apache/spark/api/java/JavaPairRDD.html))
for details.
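
For instance, two transformations from the table chained together, followed by an action (a hedged sketch, assuming an existing `JavaRDD<String>` named `lines`):

{% highlight java %}
// filter and map are lazy transformations; count is an action that triggers the job.
JavaRDD<String> errors = lines.filter(s -> s.contains("ERROR"));
JavaRDD<Integer> lengths = errors.map(s -> s.length());
long numErrors = lengths.count();
{% endhighlight %}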

<table class="table">
@@ -852,11 +852,11 @@ for details.
The following table lists some of the common actions supported by Spark. Refer to the
RDD API doc
([Scala](api/scala/index.html#org.apache.spark.rdd.RDD),
-[Java](api/java/org/apache/spark/api/java/JavaRDD.html),
+[Java](api/java/index.html?org/apache/spark/api/java/JavaRDD.html),
[Python](api/python/pyspark.rdd.RDD-class.html))
and pair RDD functions doc
([Scala](api/scala/index.html#org.apache.spark.rdd.PairRDDFunctions),
-[Java](api/java/org/apache/spark/api/java/JavaPairRDD.html))
+[Java](api/java/index.html?org/apache/spark/api/java/JavaPairRDD.html))
for details.

<table class="table">
@@ -931,7 +931,7 @@ to persist the dataset on disk, persist it in memory but as serialized Java obje
replicate it across nodes, or store it off-heap in [Tachyon](http://tachyon-project.org/).
These levels are set by passing a
`StorageLevel` object ([Scala](api/scala/index.html#org.apache.spark.storage.StorageLevel),
-[Java](api/java/org/apache/spark/storage/StorageLevel.html),
+[Java](api/java/index.html?org/apache/spark/storage/StorageLevel.html),
[Python](api/python/pyspark.storagelevel.StorageLevel-class.html))
to `persist()`. The `cache()` method is a shorthand for using the default storage level,
which is `StorageLevel.MEMORY_ONLY` (store deserialized objects in memory). The full set of
@@ -1150,7 +1150,7 @@ accum.value();
{% endhighlight %}

While this code used the built-in support for accumulators of type Integer, programmers can also
-create their own types by subclassing [AccumulatorParam](api/java/org/apache/spark/AccumulatorParam.html).
+create their own types by subclassing [AccumulatorParam](api/java/index.html?org/apache/spark/AccumulatorParam.html).
The AccumulatorParam interface has two methods: `zero` for providing a "zero value" for your data
type, and `addInPlace` for adding two values together. For example, supposing we had a `Vector` class
representing mathematical vectors, we could write:
@@ -1166,10 +1166,10 @@ class VectorAccumulatorParam implements AccumulatorParam<Vector> {
}

// Then, create an Accumulator of this type:
-Accumulator<Vector> vecAccum = sc.accumulator(new Vector(...))(new VectorAccumulatorParam());
+Accumulator<Vector> vecAccum = sc.accumulator(new Vector(...), new VectorAccumulatorParam());
{% endhighlight %}

-In Java, Spark also supports the more general [Accumulable](api/java/org/apache/spark/Accumulable.html)
+In Java, Spark also supports the more general [Accumulable](api/java/index.html?org/apache/spark/Accumulable.html)
interface to accumulate data where the resulting type is not the same as the elements added (e.g. build
a list by collecting together elements).
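
As a hedged sketch of that pattern (class and variable names here are illustrative, not from the guide; assumes `sc` is the JavaSparkContext from earlier and `java.util.List`/`ArrayList` are imported), an `AccumulableParam` that collects strings into a list might look like:

{% highlight java %}
// Accumulated type: List<String>; element type added from tasks: String.
class StringListParam implements AccumulableParam<List<String>, String> {
  public List<String> addAccumulator(List<String> list, String s) {
    list.add(s); return list;
  }
  public List<String> addInPlace(List<String> l1, List<String> l2) {
    l1.addAll(l2); return l1;
  }
  public List<String> zero(List<String> initial) {
    return new ArrayList<String>();
  }
}

Accumulable<List<String>, String> badRecords =
  sc.accumulable(new ArrayList<String>(), new StringListParam());
{% endhighlight %}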

@@ -1205,7 +1205,7 @@ class VectorAccumulatorParam(AccumulatorParam):
return v1

# Then, create an Accumulator of this type:
-vecAccum = sc.accumulator(Vector(...))(VectorAccumulatorParam())
+vecAccum = sc.accumulator(Vector(...), VectorAccumulatorParam())
{% endhighlight %}

</div>
34 changes: 17 additions & 17 deletions docs/streaming-programming-guide.md
@@ -136,7 +136,7 @@ The complete code can be found in the Spark Streaming example
<div data-lang="java" markdown="1">

First, we create a
-[JavaStreamingContext](api/java/org/apache/spark/streaming/api/java/JavaStreamingContext.html) object,
+[JavaStreamingContext](api/java/index.html?org/apache/spark/streaming/api/java/JavaStreamingContext.html) object,
which is the main entry point for all streaming
functionality. Besides Spark's configuration, we specify that any DStream would be processed
in 1 second batches.
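
A minimal sketch of that setup (the master URL and app name are placeholders, not the values used in the original example; `Duration` is `org.apache.spark.streaming.Duration`):

{% highlight java %}
SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount");
// Batch interval of 1 second
JavaStreamingContext jssc = new JavaStreamingContext(conf, new Duration(1000));
{% endhighlight %}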
@@ -215,7 +215,7 @@ jssc.awaitTermination(); // Wait for the computation to terminate
{% endhighlight %}

The complete code can be found in the Spark Streaming example
-[JavaNetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaNetworkWordCount.java).
+[JavaNetworkWordCount]({{site.SPARK_GITHUB_URL}}/blob/master/examples/src/main/java/index.html?org/apache/spark/examples/streaming/JavaNetworkWordCount.java).
<br>

</div>
@@ -813,8 +813,8 @@ output operators are defined:
The complete list of DStream operations is available in the API documentation. For the Scala API,
see [DStream](api/scala/index.html#org.apache.spark.streaming.dstream.DStream)
and [PairDStreamFunctions](api/scala/index.html#org.apache.spark.streaming.dstream.PairDStreamFunctions).
-For the Java API, see [JavaDStream](api/java/org/apache/spark/streaming/api/java/JavaDStream.html)
-and [JavaPairDStream](api/java/org/apache/spark/streaming/api/java/JavaPairDStream.html).
+For the Java API, see [JavaDStream](api/java/index.html?org/apache/spark/streaming/api/java/JavaDStream.html)
+and [JavaPairDStream](api/java/index.html?org/apache/spark/streaming/api/java/JavaPairDStream.html).

## Persistence
Similar to RDDs, DStreams also allow developers to persist the stream's data in memory. That is,
@@ -876,7 +876,7 @@ sending the data to two destinations (i.e., the earlier and upgraded application

- The existing application is shutdown gracefully (see
[`StreamingContext.stop(...)`](api/scala/index.html#org.apache.spark.streaming.StreamingContext)
-or [`JavaStreamingContext.stop(...)`](api/java/org/apache/spark/streaming/api/java/JavaStreamingContext.html)
+or [`JavaStreamingContext.stop(...)`](api/java/index.html?org/apache/spark/streaming/api/java/JavaStreamingContext.html)
for graceful shutdown options) which ensure data that have been received is completely
processed before shutdown. Then the
upgraded application can be started, which will start processing from the same point where the earlier
@@ -1311,10 +1311,10 @@ This section elaborates the steps required to migrate your existing code to 1.0.
`FlumeUtils.createStream`, etc.) now returns
[InputDStream](api/scala/index.html#org.apache.spark.streaming.dstream.InputDStream) /
[ReceiverInputDStream](api/scala/index.html#org.apache.spark.streaming.dstream.ReceiverInputDStream)
-(instead of DStream) for Scala, and [JavaInputDStream](api/java/org/apache/spark/streaming/api/java/JavaInputDStream.html) /
-[JavaPairInputDStream](api/java/org/apache/spark/streaming/api/java/JavaPairInputDStream.html) /
-[JavaReceiverInputDStream](api/java/org/apache/spark/streaming/api/java/JavaReceiverInputDStream.html) /
-[JavaPairReceiverInputDStream](api/java/org/apache/spark/streaming/api/java/JavaPairReceiverInputDStream.html)
+(instead of DStream) for Scala, and [JavaInputDStream](api/java/index.html?org/apache/spark/streaming/api/java/JavaInputDStream.html) /
+[JavaPairInputDStream](api/java/index.html?org/apache/spark/streaming/api/java/JavaPairInputDStream.html) /
+[JavaReceiverInputDStream](api/java/index.html?org/apache/spark/streaming/api/java/JavaReceiverInputDStream.html) /
+[JavaPairReceiverInputDStream](api/java/index.html?org/apache/spark/streaming/api/java/JavaPairReceiverInputDStream.html)
(instead of JavaDStream) for Java. This ensures that functionality specific to input streams can
be added to these classes in the future without breaking binary compatibility.
Note that your existing Spark Streaming applications should not require any change
@@ -1365,14 +1365,14 @@ package and renamed for better clarity.
[ZeroMQUtils](api/scala/index.html#org.apache.spark.streaming.zeromq.ZeroMQUtils$), and
[MQTTUtils](api/scala/index.html#org.apache.spark.streaming.mqtt.MQTTUtils$)
- Java docs
-* [JavaStreamingContext](api/java/org/apache/spark/streaming/api/java/JavaStreamingContext.html),
-[JavaDStream](api/java/org/apache/spark/streaming/api/java/JavaDStream.html) and
-[PairJavaDStream](api/java/org/apache/spark/streaming/api/java/PairJavaDStream.html)
-* [KafkaUtils](api/java/org/apache/spark/streaming/kafka/KafkaUtils.html),
-[FlumeUtils](api/java/org/apache/spark/streaming/flume/FlumeUtils.html),
-[TwitterUtils](api/java/org/apache/spark/streaming/twitter/TwitterUtils.html),
-[ZeroMQUtils](api/java/org/apache/spark/streaming/zeromq/ZeroMQUtils.html), and
-[MQTTUtils](api/java/org/apache/spark/streaming/mqtt/MQTTUtils.html)
+* [JavaStreamingContext](api/java/index.html?org/apache/spark/streaming/api/java/JavaStreamingContext.html),
+[JavaDStream](api/java/index.html?org/apache/spark/streaming/api/java/JavaDStream.html) and
+[PairJavaDStream](api/java/index.html?org/apache/spark/streaming/api/java/PairJavaDStream.html)
+* [KafkaUtils](api/java/index.html?org/apache/spark/streaming/kafka/KafkaUtils.html),
+[FlumeUtils](api/java/index.html?org/apache/spark/streaming/flume/FlumeUtils.html),
+[TwitterUtils](api/java/index.html?org/apache/spark/streaming/twitter/TwitterUtils.html),
+[ZeroMQUtils](api/java/index.html?org/apache/spark/streaming/zeromq/ZeroMQUtils.html), and
+[MQTTUtils](api/java/index.html?org/apache/spark/streaming/mqtt/MQTTUtils.html)

* More examples in [Scala]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/scala/org/apache/spark/examples/streaming)
and [Java]({{site.SPARK_GITHUB_URL}}/tree/master/examples/src/main/java/org/apache/spark/examples/streaming)
