[CI] Fix centos CI & website build (apache#20512)
* disable ssl verification & fix 404 links

* fix

* fix CI: merge apache#20516 & clean up

* update
barry-jin authored Aug 12, 2021
1 parent dba682a commit f70a695
Showing 13 changed files with 18 additions and 15 deletions.
3 changes: 3 additions & 0 deletions docs/python_docs/python/scripts/conf.py
@@ -69,6 +69,9 @@
autosummary_generate = True
numpydoc_show_class_members = False

# Disable SSL verification in link check.
tls_verify = False

autodoc_member_order = 'alphabetical'

autodoc_default_flags = ['members', 'show-inheritance']
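For context, `tls_verify` is a standard option read by Sphinx's linkcheck builder. A minimal sketch of how the related settings might sit together in a `conf.py`; only `tls_verify` comes from this commit, the timeout and ignore values are illustrative:

```python
# Sphinx linkcheck settings (sketch). Only tls_verify is part of this commit;
# the other values are illustrative defaults, not taken from the repository.
tls_verify = False          # skip SSL certificate verification when checking links
linkcheck_timeout = 30      # give up on a link probe after 30 seconds (illustrative)
linkcheck_ignore = [
    r'https://localhost.*',  # patterns excluded from checking (illustrative)
]
```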
@@ -18,7 +18,7 @@

# Custom Layers

While Gluon API for Apache MxNet comes with [a decent number of pre-defined layers](https://mxnet.apache.org/api/python/gluon/nn.html), at some point one may find that a new layer is needed. Adding a new layer in Gluon API is straightforward, yet there are a few things that one needs to keep in mind.
While Gluon API for Apache MxNet comes with [a decent number of pre-defined layers](https://mxnet.apache.org/versions/master/api/python/docs/api/gluon/nn/index.html), at some point one may find that a new layer is needed. Adding a new layer in Gluon API is straightforward, yet there are a few things that one needs to keep in mind.

In this article, I will cover how to create a new layer from scratch, how to use it, what are possible pitfalls and how to avoid them.
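For readers skimming the diff, a minimal sketch of what such a custom layer can look like, assuming the standard Gluon `Block` API (the layer itself is a made-up example, not taken from this tutorial):

```python
# Sketch of a custom Gluon layer: rescales its input to the [0, 1] range.
# Assumes the classic mxnet.gluon API; the layer is illustrative only.
import mxnet as mx
from mxnet.gluon import nn

class MinMaxScaler(nn.Block):
    def forward(self, x):
        return (x - x.min()) / (x.max() - x.min())

layer = MinMaxScaler()
print(layer(mx.nd.array([1, 2, 3])))  # roughly [0., 0.5, 1.]
```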

@@ -54,7 +54,7 @@ The rest of methods of the `Block` class are already implemented, and majority o

## Hybridization and the difference between Block and HybridBlock

Looking into implementation of [existing layers](https://mxnet.apache.org/api/python/gluon/nn.html), one may find that more often a block inherits from a [HybridBlock](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L428), instead of directly inheriting from `Block`.
Looking into implementation of [existing layers](https://mxnet.apache.org/versions/master/api/python/docs/api/gluon/nn/index.html), one may find that more often a block inherits from a [HybridBlock](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L428), instead of directly inheriting from `Block`.

The reason for that is that `HybridBlock` allows to write custom layers in imperative programming style, while computing in a symbolic way. It unifies the flexibility of imperative programming with the performance benefits of symbolic programming. You can learn more about the difference between symbolic and imperative programming from [this article](https://mxnet.apache.org/api/architecture/overview.html).
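A rough sketch of that difference, using the MXNet 1.x-style `hybrid_forward` signature (newer versions override `forward` directly): the block is written imperatively, and `hybridize()` switches it to cached symbolic execution.

```python
# Sketch of a HybridBlock (MXNet 1.x style). F is mx.nd when run imperatively
# and mx.sym once the block has been hybridized.
import mxnet as mx
from mxnet.gluon import nn

class DenseRelu(nn.HybridBlock):
    def __init__(self, units, **kwargs):
        super(DenseRelu, self).__init__(**kwargs)
        self.dense = nn.Dense(units)

    def hybrid_forward(self, F, x):
        return F.relu(self.dense(x))

net = DenseRelu(16)
net.initialize()
net.hybridize()                                 # cache the computational graph
out = net(mx.nd.random.uniform(shape=(2, 8)))   # same call, now runs symbolically
```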

@@ -39,7 +39,7 @@ Tip: A `BatchNorm` layer at the start of your network can have a similar effect

Warning: You should calculate the normalization means and standard deviations using the training dataset only. Any leakage of information from you testing dataset will effect the reliability of your testing metrics.

When using pre-trained models from the [Gluon Model Zoo](https://mxnet.apache.org/api/python/gluon/model_zoo.html) you'll usually see the normalization statistics used for training (i.e. statistics from step 1). You'll want to use these statistics to normalize your own input data for fine-tuning or inference with these models. Using `transforms.Normalize` is one way of applying the normalization, and this should be used in the `Dataset`.
When using pre-trained models from the [Gluon Model Zoo](https://mxnet.apache.org/versions/master/api/python/docs/api/gluon/model_zoo/index.html) you'll usually see the normalization statistics used for training (i.e. statistics from step 1). You'll want to use these statistics to normalize your own input data for fine-tuning or inference with these models. Using `transforms.Normalize` is one way of applying the normalization, and this should be used in the `Dataset`.

```{.python .input}
import mxnet as mx
@@ -404,7 +404,7 @@ rsp_retained = mx.nd.sparse.retain(rsp, mx.nd.array([0, 1]))

## Sparse Operators and Storage Type Inference

Operators that have specialized implementation for sparse arrays can be accessed in ``mx.nd.sparse``. You can read the [mxnet.ndarray.sparse API documentation](http://mxnet.apache.org/api/python/ndarray/sparse.html) to find what sparse operators are available.
Operators that have specialized implementation for sparse arrays can be accessed in ``mx.nd.sparse``. You can read the [mxnet.ndarray.sparse API documentation](https://mxnet.apache.org/versions/master/api/python/docs/api/legacy/ndarray/sparse/index.html) to find what sparse operators are available.


```{.python .input}
@@ -154,7 +154,7 @@ for param in aux_params:
net_params[param]._load_init(aux_params[param], ctx=ctx)
```

We can now cache the computational graph through [hybridization](https://mxnet.apache.org/tutorials/gluon/hybrid.html) to gain some performance
We can now cache the computational graph through [hybridization](https://mxnet.apache.org/versions/master/api/python/docs/tutorials/packages/gluon/blocks/hybridize.html) to gain some performance
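A minimal sketch of that step, assuming `net` and `ctx` are the network and context built earlier in the tutorial; `static_alloc`/`static_shape` are optional `hybridize()` flags that help when input shapes stay fixed:

```python
# Cache the computational graph; the flags are optional and only help when
# the input shape does not change between calls (sketch, names assumed).
net.hybridize(static_alloc=True, static_shape=True)
output = net(mx.nd.ones((1, 3, 224, 224), ctx=ctx))  # shape is illustrative
```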



@@ -248,6 +248,6 @@ Lucky for us, the [Caltech101 dataset](http://www.vision.caltech.edu/Image_Datas
We show that in our next tutorial:


- [Fine-tuning an ONNX Model using the modern imperative MXNet/Gluon](http://mxnet.apache.org/tutorials/onnx/fine_tuning_gluon.html)
- [Fine-tuning an ONNX Model using the modern imperative MXNet/Gluon](https://mxnet.apache.org/versions/master/api/python/docs/tutorials/packages/onnx/fine_tuning_gluon.html)

<!-- INSERT SOURCE DOWNLOAD BUTTONS -->
@@ -286,7 +286,7 @@ Here, we have created a custom operator called `MyAddOne`, and within its `forwa

As shown by the screenshot, in the **Custom Operator** domain where all the custom operator-related events fall into, we can easily visualize the execution time of each segment of `MyAddOne`. We can tell that `MyAddOne::pure_python` is executed first. We also know that `CopyCPU2CPU` and `_plus_scalr` are two "sub-operators" of `MyAddOne` and the sequence in which they are executed.

Please note that: to be able to see the previously described information, you need to set `profile_imperative` to `True` even when you are using custom operators in [symbolic mode](https://mxnet.apache.org/versions/master/tutorials/basic/symbol.html) (refer to the code snippet below, which is the symbolic-mode equivelent of the code example above). The reason is that within custom operators, pure python code and sub-operators are still called imperatively.
Please note that: to be able to see the previously described information, you need to set `profile_imperative` to `True` even when you are using custom operators in [symbolic mode](https://mxnet.apache.org/versions/master/api/python/docs/api/legacy/symbol/index.html) (refer to the code snippet below, which is the symbolic-mode equivelent of the code example above). The reason is that within custom operators, pure python code and sub-operators are still called imperatively.

```{.python .input}
# Set profile_all to True
4 changes: 2 additions & 2 deletions python/mxnet/gluon/block.py
@@ -1033,8 +1033,8 @@ def forward(self, x):
References
----------
`Hybrid - Faster training and easy deployment
<https://mxnet.io/tutorials/gluon/hybrid.html>`_
`Hybridize - A Hybrid of Imperative and Symbolic Programming
<https://mxnet.apache.org/versions/master/api/python/docs/tutorials/packages/gluon/blocks/hybridize.html>`_
"""
def __init__(self):
super(HybridBlock, self).__init__()
2 changes: 1 addition & 1 deletion python/mxnet/gluon/nn/basic_layers.py
@@ -550,7 +550,7 @@ class Embedding(HybridBlock):
AdaGrad and Adam. By default lazy updates is turned on, which may perform
differently from standard updates. For more details, please check the
Optimization API at:
https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
Parameters
----------
2 changes: 1 addition & 1 deletion python/mxnet/ndarray/numpy_extension/_op.py
@@ -1072,7 +1072,7 @@ def embedding(data, weight, input_dim=None, output_dim=None, dtype="float32", sp
"row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
and Adam. Note that by default lazy updates is turned on, which may perform differently
from standard updates. For more details, please check the Optimization API at:
https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
Parameters
----------
2 changes: 1 addition & 1 deletion python/mxnet/numpy_extension/_op.py
@@ -1001,7 +1001,7 @@ def embedding(data, weight, input_dim=None, output_dim=None, dtype="float32", sp
"row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
and Adam. Note that by default lazy updates is turned on, which may perform differently
from standard updates. For more details, please check the Optimization API at:
https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
Parameters
----------
2 changes: 1 addition & 1 deletion src/common/cuda/rtc.cc
@@ -63,7 +63,7 @@ namespace rtc {
#if defined(_WIN32) || defined(_WIN64) || defined(__WINDOWS__)
const char cuda_lib_name[] = "nvcuda.dll";
#else
const char cuda_lib_name[] = "libcuda.so";
const char cuda_lib_name[] = "libcuda.so.1";
#endif

std::mutex lock;
2 changes: 1 addition & 1 deletion src/operator/tensor/dot.cc
@@ -76,7 +76,7 @@ above patterns, ``dot`` will fallback and generate output with default storage.
"row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
and Adam. Note that by default lazy updates is turned on, which may perform differently
from standard updates. For more details, please check the Optimization API at:
https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
)doc" ADD_FILELINE)
.set_num_inputs(2)
2 changes: 1 addition & 1 deletion src/operator/tensor/indexing_op.cc
@@ -597,7 +597,7 @@ The storage type of weight can be either row_sparse or default.
"row_sparse". Only a subset of optimizers support sparse gradients, including SGD, AdaGrad
and Adam. Note that by default lazy updates is turned on, which may perform differently
from standard updates. For more details, please check the Optimization API at:
https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
)code" ADD_FILELINE)
.set_num_inputs(2)
