Create a getting started section and add a new linear regression example
Saransh-cpp committed Jul 5, 2022
1 parent b2ee216 commit c52dc2c
Showing 9 changed files with 513 additions and 11 deletions.
3 changes: 3 additions & 0 deletions docs/Project.toml
@@ -2,8 +2,11 @@
 BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
+MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
 MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
+Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
+Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 
 [compat]
 Documenter = "0.26"
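The new entries (MLDatasets, Plots, Statistics) are documentation-build dependencies used by the new linear regression page. For readers who want to build the docs locally against this environment, a typical Documenter workflow looks roughly like the following sketch (the exact build invocation is not part of this commit):

```julia
# Run from the repository root.
using Pkg
Pkg.activate("docs")                   # use docs/Project.toml as the active environment
Pkg.develop(PackageSpec(path = "."))   # make the local Flux checkout available in it
Pkg.instantiate()                      # install BSON, MLDatasets, Plots, Statistics, ...
include("docs/make.jl")                # build the site into docs/build/
```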
11 changes: 7 additions & 4 deletions docs/make.jl
@@ -1,17 +1,20 @@
-using Documenter, Flux, NNlib, Functors, MLUtils, BSON
+using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Plots, MLDatasets, Statistics
 
 
 DocMeta.setdocmeta!(Flux, :DocTestSetup, :(using Flux); recursive = true)
 
 makedocs(
-    modules = [Flux, NNlib, Functors, MLUtils, BSON],
+    modules = [Flux, NNlib, Functors, MLUtils, BSON, Plots, MLDatasets, Statistics],
     doctest = false,
     sitename = "Flux",
     pages = [
        "Home" => "index.md",
+       "Getting Started" => [
+           "Overview" => "getting_started/overview.md",
+           "Basics" => "getting_started/basics.md",
+           "Linear Regression" => "getting_started/linear_regression.md",
+       ],
        "Building Models" => [
-          "Overview" => "models/overview.md",
-          "Basics" => "models/basics.md",
           "Recurrence" => "models/recurrence.md",
           "Model Reference" => "models/layers.md",
           "Loss Functions" => "models/losses.md",
2 changes: 1 addition & 1 deletion docs/src/{models → getting_started}/basics.md
@@ -221,4 +221,4 @@ Flux.@functor Affine
 
 This enables a useful extra set of functionality for our `Affine` layer, such as [collecting its parameters](../training/optimisers.md) or [moving it to the GPU](../gpu.md).
 
-For some more helpful tricks, including parameter freezing, please checkout the [advanced usage guide](advanced.md).
+For some more helpful tricks, including parameter freezing, please checkout the [advanced usage guide](../models/advanced.md).
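For context, the functionality this passage refers to looks roughly like the following sketch. The `Affine` definition is the one from the basics page, assumed here rather than shown in this diff:

```julia
using Flux

# The example `Affine` layer from the basics page (assumed definition).
struct Affine
    W
    b
end
Affine(in::Integer, out::Integer) = Affine(randn(out, in), randn(out))
(m::Affine)(x) = m.W * x .+ m.b

Flux.@functor Affine   # opt in to Flux's parameter handling

a = Affine(10, 5)
Flux.params(a)         # now collects a.W and a.b
# a |> gpu             # and the layer can be moved to the GPU
```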
496 changes: 496 additions & 0 deletions docs/src/getting_started/linear_regression.md

Large diffs are not rendered by default.
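Since the 496-line page itself is not rendered in this view, here is a minimal sketch of what a Flux linear regression example typically looks like; it is illustrative only, and the actual new page is more detailed than this:

```julia
using Flux

# Dummy data for y ≈ 2x + 1 with some noise.
x = reshape(collect(Float32, -3:0.1f0:3), 1, :)      # 1×N inputs
y = 2 .* x .+ 1 .+ 0.1f0 .* randn(Float32, size(x))  # 1×N targets

model = Dense(1 => 1)                      # a single weight and bias: W*x .+ b
loss(x, y) = Flux.Losses.mse(model(x), y)

opt = Descent(0.1)
for epoch in 1:100
    Flux.train!(loss, Flux.params(model), [(x, y)], opt)
end

loss(x, y)   # should now be close to the noise floor
```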

docs/src/{models → getting_started}/overview.md
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/src/gpu.md
@@ -17,7 +17,7 @@ true
 
 Support for array operations on other hardware backends, like GPUs, is provided by external packages like [CUDA](https://github.com/JuliaGPU/CUDA.jl). Flux is agnostic to array types, so we simply need to move model weights and data to the GPU and Flux will handle it.
 
-For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](models/basics.md) on an NVIDIA GPU.
+For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](getting_started/basics.md) on an NVIDIA GPU.
 
 (Note that you need to have CUDA available to use CUDA.CuArray – please see the [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) instructions for more details.)
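As a quick illustration of what running that basic example on the GPU looks like, here is a sketch assuming a working CUDA setup; it is not part of this diff:

```julia
using Flux, CUDA

# `cu` converts ordinary arrays to CuArrays; Flux is agnostic to the array type.
W = cu(rand(2, 5))
b = cu(rand(2))

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = cu(rand(5)), cu(rand(2))
loss(x, y)   # the whole computation runs on the GPU
```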
2 changes: 1 addition & 1 deletion docs/src/models/advanced.md
@@ -34,7 +34,7 @@ For an intro to Flux and automatic differentiation, see this [tutorial](https://
 
 ## Customising Parameter Collection for a Model
 
-Taking reference from our example `Affine` layer from the [basics](basics.md#Building-Layers-1).
+Taking reference from our example `Affine` layer from the [basics](../getting_started/basics.md#Building-Layers-1).
 
 By default all the fields in the `Affine` type are collected as its parameters, however, in some cases it may be desired to hold other metadata in our "layers" that may not be needed for training, and are hence supposed to be ignored while the parameters are collected. With Flux, it is possible to mark the fields of our layers that are trainable in two ways.
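The two ways this passage refers to look like the following, a sketch based on Flux's standard API (assuming the `Affine` layer from the basics page), not lines from this diff:

```julia
# Way 1: overload `trainable` so only `W` is optimised; `b` stays fixed.
Flux.trainable(a::Affine) = (a.W,)

# Way 2: tell `@functor` directly which fields are trainable.
Flux.@functor Affine (W,)
```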
2 changes: 1 addition & 1 deletion docs/src/training/optimisers.md
@@ -1,6 +1,6 @@
 # Optimisers
 
-Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
+Consider a [simple linear regression](../getting_started/linear_regression.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
 
 ```julia
 using Flux
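The diff cuts the snippet off after `using Flux`; for context, an example matching the surrounding description continues roughly like this (a reconstructed sketch, not lines from this commit):

```julia
using Flux

W = rand(2, 5)
b = rand(2)

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = rand(5), rand(2)   # dummy data
l = loss(x, y)            # the current loss

θ = Flux.params(W, b)
grads = gradient(() -> loss(x, y), θ)   # gradients for W and b
```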
6 changes: 3 additions & 3 deletions docs/src/training/training.md
@@ -40,7 +40,7 @@ more information can be found on [Custom Training Loops](../models/advanced.md).
 
 ## Loss Functions
 
-The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](../models/basics.md) will work as an objective.
+The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](../getting_started/basics.md) will work as an objective.
 In addition to custom losses, model can be trained in conjuction with
 the commonly used losses that are grouped under the `Flux.Losses` module.
 We can also define an objective in terms of some model:
@@ -64,11 +64,11 @@ At first glance it may seem strange that the model that we want to train is not
 
 ## Model parameters
 
-The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../models/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
+The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../getting_started/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
 
 Such an object contains a reference to the model's parameters, not a copy, such that after their training, the model behaves according to their updated values.
 
-Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](../models/basics.md) section. Also, for freezing model parameters, see the [Advanced Usage Guide](../models/advanced.md).
+Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](../getting_started/basics.md) section. Also, for freezing model parameters, see the [Advanced Usage Guide](../models/advanced.md).
 
 ```@docs
 Flux.params
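To make the two changed passages concrete, an objective defined in terms of a model, together with the parameter collection `Flux.train!` expects, looks roughly like this (a sketch using Flux's standard API, not part of this diff):

```julia
using Flux

model = Dense(2 => 1)

# An objective built from a common loss in the Flux.Losses module:
loss(x, y) = Flux.Losses.mse(model(x), y)

# `Flux.train!` takes the tracked parameters as its second argument;
# `Flux.params` returns references to them, not copies, so training
# updates the model in place.
ps = Flux.params(model)

x, y = rand(Float32, 2, 10), rand(Float32, 1, 10)
Flux.train!(loss, ps, [(x, y)], Descent(0.1))
```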
