Merge pull request #175 from microsoft/master

merge master

SparkSnail authored May 30, 2019
2 parents bee8f84 + e267a73 commit e1a4a80
Showing 34 changed files with 426 additions and 167 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -155,7 +155,7 @@ Windows
```bash
git clone -b v0.7 https://github.com/Microsoft/nni.git
cd nni
powershell ./install.ps1
powershell .\install.ps1
```

For the system requirements of NNI, please refer to [Install NNI](docs/en_US/Installation.md)
@@ -185,7 +185,7 @@ Windows
* Run the MNIST example.

```bash
nnictl create --config nni/examples/trials/mnist/config_windows.yml
nnictl create --config nni\examples\trials\mnist\config_windows.yml
```

* Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. You can explore the experiment using the `Web UI url`.
4 changes: 2 additions & 2 deletions docs/en_US/AnnotationSpec.md
@@ -36,8 +36,8 @@ There are 10 types to express your search space as follows:

* `@nni.variable(nni.choice(option1,option2,...,optionN),name=variable)`
Which means the variable value is one of the options, which should be a list. The elements of options can themselves be stochastic expressions.
* `@nni.variable(nni.randint(upper),name=variable)`
Which means the variable value is a random integer in the range [0, upper).
* `@nni.variable(nni.randint(lower, upper),name=variable)`
Which means the variable value is a value like round(uniform(lower, upper)). For now, the type of the chosen value is float. If you want an integer value, please convert it explicitly.
* `@nni.variable(nni.uniform(low, high),name=variable)`
Which means the variable value is uniformly sampled between low and high.
* `@nni.variable(nni.quniform(low, high, q),name=variable)`
142 changes: 142 additions & 0 deletions docs/en_US/GeneralNasInterfaces.md
@@ -0,0 +1,142 @@
# General Programming Interface for Neural Architecture Search

Automatic neural architecture search is playing an increasingly important role in finding better models. Recent research has proved the feasibility of automatic NAS and has found models that beat manually designed and tuned ones. Representative works include [NASNet][2], [ENAS][1], [DARTS][3], [Network Morphism][4], and [Evolution][5], and new innovations keep emerging. However, it takes great effort to implement those algorithms, and it is hard to reuse the code base of one algorithm to implement another.

To facilitate NAS innovations (e.g., designing and implementing new NAS models, comparing different NAS models side-by-side), an easy-to-use and flexible programming interface is crucial.

## Programming interface

A new programming interface for designing and searching a model is often demanded in two scenarios. 1) When designing a neural network, the designer may have multiple choices for a layer, sub-model, or connection, and is not sure which one, or which combination, performs best. It would be appealing to have an easy way to express the candidate layers/sub-models they want to try. 2) Researchers working on automatic NAS want a unified way to express the search space of neural architectures, so that unchanged trial code can be adapted to different search algorithms.

We designed a simple and flexible programming interface based on [NNI annotation](./AnnotationSpec.md). It is illustrated through the examples below.

### Example: choose an operator for a layer

When designing the following model, there might be several choices in the fourth layer that could make this model perform well. In the script of this model, we can use an annotation for the fourth layer as shown in the figure. This annotation has five fields in total:

![](../img/example_layerchoice.png)

* __layer_choice__: It is a list of function calls; each function should be defined in the user's script or an imported library. The input arguments of each function should follow the format `def XXX(inputs, arg2, arg3, ...)`, where `inputs` is a list with two elements: the list of `fixed_inputs` and a list of the inputs chosen from `optional_inputs`. `conv` and `pool` in the figure are examples of function definitions. For the function calls in this list, there is no need to write the first argument (i.e., `inputs`). Note that only one of the function calls is chosen for this layer.
* __fixed_inputs__: It is a list of variables, where each variable could be an output tensor from a previous layer: either the `layer_output` of another `nni.mutable_layer` before this layer, or another Python variable defined before this layer. All the variables in this list will be fed into the chosen function in `layer_choice` (as the first element of the `inputs` list).
* __optional_inputs__: It is a list of variables, where each variable could be an output tensor from a previous layer: either the `layer_output` of another `nni.mutable_layer` before this layer, or another Python variable defined before this layer. Only `optional_input_size` of these variables will be fed into the chosen function in `layer_choice` (as the second element of the `inputs` list).
* __optional_input_size__: It indicates how many inputs are chosen from `optional_inputs`. It can be a number or a range; a range [1,3] means choosing 1, 2, or 3 inputs.
* __layer_output__: The name of the output(s) of this layer; in this case it represents the return value of the chosen function call in `layer_choice`. This name can be used as a variable in the following Python code or in subsequent `nni.mutable_layer`(s).

There are two ways to write the annotation for this example. For the upper one, the `inputs` of the function calls is `[[], [out3]]`. For the bottom one, `inputs` is `[[out3], []]`.
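
For illustration only (not taken from the figure), candidate functions following this calling convention might look like the sketch below, assuming TensorFlow 1.x; the names `conv` and `pool`, the channel count, and the kernel/pool sizes are placeholders:

```python
import tensorflow as tf

def conv(inputs, ch=128):
    # inputs = [fixed_inputs, chosen_optional_inputs]; merge everything that was fed in
    fixed, optional = inputs
    x = tf.concat(fixed + optional, axis=-1)
    return tf.layers.conv2d(x, filters=ch, kernel_size=3, padding='same',
                            activation=tf.nn.relu)

def pool(inputs):
    fixed, optional = inputs
    x = tf.concat(fixed + optional, axis=-1)
    return tf.layers.max_pooling2d(x, pool_size=2, strides=2)
```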

### Example: choose input connections for a layer

Designing connections of layers is critical for making a high-performance model. With our provided interface, users can annotate which connections a layer takes as inputs, choosing several from a set of candidate connections. Below is an example that chooses two inputs from three candidate inputs for `concat`. Here `concat` always takes the output of its previous layer via `fixed_inputs`.

![](../img/example_connectchoice.png)

### Example: choose both operators and connections

In this example, we choose one of the three operators and choose two connections for it. As there are multiple variables in `inputs`, we call `concat` at the beginning of the functions.

![](../img/example_combined.png)

### Example: [ENAS][1] macro search space

To illustrate the convenience of the programming interface, we use it to implement the trial code of "ENAS + macro search space". The left figure shows the macro search space in the ENAS paper.

![](../img/example_enas.png)


## Unified NAS search space specification

After finishing the trial code through the annotation above, users have implicitly specified the search space of neural architectures in the code. Based on the code, NNI will automatically generate a search space file which can be fed into tuning algorithms. This search space file follows the `json` format below.

```json
{
    "mutable_1": {
        "layer_1": {
            "layer_choice": ["conv(ch=128)", "pool", "identity"],
            "optional_inputs": ["out1", "out2", "out3"],
            "optional_input_size": 2
        },
        "layer_2": {
            ...
        }
    }
}
```

Accordingly, a specified neural architecture (generated by tuning algorithm) is expressed as follows:

```json
{
    "mutable_1": {
        "layer_1": {
            "chosen_layer": "pool",
            "chosen_inputs": ["out1", "out3"]
        },
        "layer_2": {
            ...
        }
    }
}
```

With this specification of the search space format and the architecture (choice) expression, users are free to implement various (general) tuning algorithms for neural architecture search on NNI. One piece of future work is to provide a general NAS algorithm.
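
As a minimal sketch of what such a tuning algorithm could look like (this is not an algorithm shipped with NNI), the hypothetical tuner below randomly samples an architecture in the choice format above from the search space format above; the class name and helper logic are invented for illustration:

```python
import random
from nni.tuner import Tuner

class RandomNASTuner(Tuner):
    """Illustrative only: samples a random architecture from the NAS search space."""

    def update_search_space(self, search_space):
        self.search_space = search_space

    def generate_parameters(self, parameter_id):
        choice = {}
        for mutable, layers in self.search_space.items():
            choice[mutable] = {}
            for layer, spec in layers.items():
                n = spec["optional_input_size"]  # assumes a single number, not a range
                choice[mutable][layer] = {
                    "chosen_layer": random.choice(spec["layer_choice"]),
                    "chosen_inputs": random.sample(spec["optional_inputs"], n),
                }
        return choice

    def receive_trial_result(self, parameter_id, parameters, value):
        pass  # a real tuner would use the reward to guide future choices
```

A real tuner would use trial rewards to guide future choices and would be registered in the experiment configuration as a customized tuner.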

=============================================================

## Neural architecture search on NNI

### Basic flow of experiment execution

NNI's annotation compiler transforms the annotated trial code into code that can receive an architecture choice and build the corresponding model (i.e., graph). The NAS search space can be seen as a full graph (here, a full graph means enabling all the provided operators and connections to build a graph), and the architecture chosen by the tuning algorithm is a subgraph of it. By default, the compiled trial code only builds and executes the subgraph.

![](../img/nas_on_nni.png)

The above figure shows how the trial code runs on NNI. `nnictl` processes the user's trial code to generate a search space file and compiled trial code. The former is fed to the tuner, and the latter is used to run trials.

[__TODO__] Simple example of NAS on NNI.

### Weight sharing

Sharing weights among chosen architectures (i.e., trials) could speed up model search. For example, properly inheriting the weights of completed trials could speed up the convergence of new trials. One-Shot NAS (e.g., ENAS, DARTS) is more aggressive: the training of different architectures (i.e., subgraphs) shares the same copy of the weights in the full graph.

![](../img/nas_weight_share.png)

We believe weight sharing (transferring) plays a key role in speeding up NAS, while finding efficient ways to share weights is still a hot research topic. We provide a key-value store for users to store and load weights; tuners and trials use a provided KV client library to access the storage.
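
To make the store-and-load idea concrete, here is a purely hypothetical sketch; the `WeightStore` class below is invented for illustration and is not the NNI KV client library mentioned above:

```python
# Hypothetical illustration only; this is NOT the NNI KV client API.
class WeightStore:
    """An in-process stand-in for a shared key-value weight store."""

    def __init__(self):
        self._store = {}

    def save(self, key, weights):
        self._store[key] = weights          # e.g. key = (trial_id, layer_name)

    def load(self, key, default=None):
        return self._store.get(key, default)

# A new trial could warm-start from a completed trial's weights:
store = WeightStore()
store.save(("trial_42", "layer_1"), {"w": [0.1, 0.2], "b": [0.0]})
inherited = store.load(("trial_42", "layer_1"))
```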

[__TODO__] Example of weight sharing on NNI.

### Support of One-Shot NAS

One-Shot NAS is a popular approach to find a good neural architecture within a limited time and resource budget. Basically, it builds a full graph based on the search space and uses gradient descent to eventually find the best subgraph. There are different training approaches, such as [training subgraphs (per mini-batch)][1], [training the full graph through dropout][6], and [training with architecture weights (regularization)][3]. Here we focus on the first approach, i.e., training subgraphs (ENAS).

With the same annotated trial code, users can choose One-Shot NAS as the execution mode on NNI. Specifically, the compiled trial code builds the full graph (rather than the subgraph demonstrated above); it receives a chosen architecture, trains this architecture on the full graph for a mini-batch, and then requests another chosen architecture. This is supported by [NNI multi-phase](./multiPhase.md). We support this training approach because training a subgraph is very fast, and building the graph every time a subgraph is trained induces too much overhead.
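
A minimal sketch of such a multi-phase trial loop is shown below; `build_full_graph`, `train_on_subgraph`, `NUM_MINI_BATCHES`, and `final_accuracy` are placeholders for user code, and only the `nni.*` calls are existing NNI APIs:

```python
import nni

model = build_full_graph()                 # placeholder: build the full graph once
for _ in range(NUM_MINI_BATCHES):          # placeholder constant
    arch = nni.get_next_parameter()        # ask the tuner for the next subgraph choice
    loss = train_on_subgraph(model, arch)  # placeholder: one mini-batch on that subgraph
    nni.report_intermediate_result(loss)
nni.report_final_result(final_accuracy)    # placeholder metric
```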

![](../img/one-shot_training.png)

The design of One-Shot NAS on NNI is shown in the above figure. One-Shot NAS usually has only one trial job with the full graph. NNI supports running multiple such trial jobs, each of which runs independently. As One-Shot NAS is not stable, running multiple instances helps find a better model. Moreover, trial jobs can also synchronize weights while running (i.e., there is only one copy of the weights, as in asynchronous parameter-server mode). This may speed up convergence.

[__TODO__] Example of One-Shot NAS on NNI.


## General tuning algorithms for NAS

As with hyperparameter tuning, a relatively general algorithm for NAS is required. The general programming interface makes this task easier to some extent. We have an RL-based tuner algorithm for NAS from our contributors, and we expect efforts from the community to design and implement better NAS algorithms.

[__TODO__] More tuning algorithms for NAS.

## Export best neural architecture and code

[__TODO__] After the NNI experiment is done, users could run `nnictl experiment export --code` to export the trial code with the best neural architecture.

## Conclusion and Future work

There can be different NAS algorithms and execution modes, but they can all be supported with the same programming interface, as demonstrated above.

There are many interesting research topics in this area, on both the systems side and the machine learning side.


[1]: https://arxiv.org/abs/1802.03268
[2]: https://arxiv.org/abs/1707.07012
[3]: https://arxiv.org/abs/1806.09055
[4]: https://arxiv.org/abs/1806.10282
[5]: https://arxiv.org/abs/1703.01041
[6]: http://proceedings.mlr.press/v80/bender18a/bender18a.pdf
6 changes: 3 additions & 3 deletions docs/en_US/Installation.md
@@ -15,7 +15,7 @@ Currently we support installation on Linux, Mac and Windows(local, remote and pa

Prerequisite: `python >=3.5`, `git`, `wget`
```bash
git clone -b v0.7 https://github.com/Microsoft/nni.git
git clone -b v0.8 https://github.com/Microsoft/nni.git
cd nni
./install.sh
```
@@ -48,9 +48,9 @@ Currently we support installation on Linux, Mac and Windows(local, remote and pa
you can install NNI as administrator or current user as follows:

```bash
git clone -b v0.7 https://github.com/Microsoft/nni.git
git clone -b v0.8 https://github.com/Microsoft/nni.git
cd nni
powershell ./install.ps1
powershell .\install.ps1
```

## **System requirements**
26 changes: 1 addition & 25 deletions docs/en_US/NniOnWindows.md
@@ -4,31 +4,7 @@ Currently we support local, remote and pai mode on Windows. Windows 10.1809 is w

## **Installation on Windows**

**Anaconda or Miniconda python(64-bit) is highly recommended.**

When you use PowerShell to run a script for the first time, you need to **run PowerShell as administrator** with this command:

```bash
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
```

* __Install NNI through pip__

Prerequisite: `python(64-bit) >= 3.5`

```bash
python -m pip install --upgrade nni
```

* __Install NNI through source code__

Prerequisite: `python >=3.5`, `git`, `PowerShell`

```bash
git clone -b v0.8 https://github.com/Microsoft/nni.git
cd nni
powershell -file install.ps1
```
Please refer to [Installation](Installation.md#installation-on-windows) for more details.

When these things are done, use the **config_windows.yml** configuration to start an experiment for validation.

46 changes: 6 additions & 40 deletions docs/en_US/Nnictl.md
@@ -21,7 +21,6 @@ nnictl support commands:
* [nnictl tensorboard](#tensorboard)
* [nnictl package](#package)
* [nnictl --version](#version)
* [nnictl hdfs](#hdfs)

### Manage an experiment

@@ -125,21 +124,21 @@ Debug mode will disable version check function in Trialkeeper.
nnictl stop
```

1. If an id is specified and it matches a running experiment, nnictl will stop the corresponding experiment; otherwise it will print an error message.
2. If an id is specified and it matches a running experiment, nnictl will stop the corresponding experiment; otherwise it will print an error message.

```bash
nnictl stop [experiment_id]
```

1. Users could use 'nnictl stop all' to stop all experiments.
3. Users could use 'nnictl stop all' to stop all experiments.

```bash
nnictl stop all
```

1. If the id ends with *, nnictl will stop all experiments whose ids match the pattern.
1. If the id does not exist but matches the prefix of an experiment id, nnictl will stop the matched experiment.
1. If the id does not exist but matches the prefix of multiple experiment ids, nnictl will print the information of the matched ids.
4. If the id ends with *, nnictl will stop all experiments whose ids match the pattern.
5. If the id does not exist but matches the prefix of an experiment id, nnictl will stop the matched experiment.
6. If the id does not exist but matches the prefix of multiple experiment ids, nnictl will print the information of the matched ids.

<a name="update"></a>

@@ -651,37 +650,4 @@ Debug mode will disable version check function in Trialkeeper.
```bash
nnictl --version
```
<a name="hdfs"></a>
![](https://placehold.it/15/1589F0/000000?text=+) `Manage hdfs`
* __nnictl hdfs set__
* Description
set the host and userName of hdfs
* Usage
```bash
nnictl hdfs set [OPTIONS]
```
* Options
|Name, shorthand|Required|Default|Description|
|------|------|------ |------|
|--host| True| |The host ip of hdfs, the format is xx.xx.xx.xx, for example, 10.10.10.10|
|--user_name| True| |The userName of hdfs|
* __nnictl hdfs clean__
* Description
Clean up the code files that nni automatically copied to hdfs. This command deletes all such files under the user_name.
* Usage
```bash
nnictl hdfs clean
```
2 changes: 1 addition & 1 deletion docs/en_US/QuickStart.md
@@ -154,7 +154,7 @@ Run the **config_windows.yml** file from your command line to start MNIST experi
**Note**, if you're using NNI on Windows, you need to change `python3` to `python` in the config.yml file, or use the config_windows.yml file to start the experiment.

```bash
nnictl create --config nni/examples/trials/mnist/config_windows.yml
nnictl create --config nni\examples\trials\mnist\config_windows.yml
```

Note, **nnictl** is a command line tool that can be used to control experiments, such as starting/stopping/resuming an experiment and starting/stopping NNIBoard. Click [here](Nnictl.md) for more usage of `nnictl`.
4 changes: 2 additions & 2 deletions docs/en_US/SearchSpaceSpec.md
@@ -36,9 +36,9 @@ All types of sampling strategies and their parameter are listed here:
- Anneal
- Evolution

* {"_type":"randint","_value":[upper]}
* {"_type":"randint","_value":[lower, upper]}

* Which means the variable value is a random integer in the range [0, upper). The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values, as compared with more distant integer values. This is an appropriate distribution for describing random seeds for example. If the loss function is probably more correlated for nearby integer values, then you should probably use one of the "quantized" continuous distributions, such as either quniform, qloguniform, qnormal or qlognormal. Note that if you want to change lower bound, you can use `quniform` for now.
* For now, we implement the "randint" distribution with "quniform", which means the variable value is a value like round(uniform(lower, upper)). The type of the chosen value is float. If you want an integer value, please convert it explicitly.

* {"_type":"uniform","_value":[low, high]}
* Which means the variable value is uniformly sampled between low and high.
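
Given this "randint"-as-quniform behavior, trial code that needs a true integer can cast the received value explicitly. A minimal sketch, assuming a search space entry named `num_leaves` as in the auto-gbdt example below:

```python
import nni

params = nni.get_next_parameter()
# "randint"/"quniform" values arrive as floats, so cast before using them as integers
params['num_leaves'] = int(params['num_leaves'])
```
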
Binary file added docs/img/example_combined.png
Binary file added docs/img/example_connectchoice.png
Binary file added docs/img/example_enas.png
Binary file added docs/img/example_layerchoice.png
Binary file added docs/img/nas_on_nni.png
Binary file added docs/img/nas_weight_share.png
Binary file added docs/img/one-shot_training.png
8 changes: 8 additions & 0 deletions examples/trials/NAS/README.md
@@ -0,0 +1,8 @@
**Run Neural Network Architecture Search in NNI**
===

Now we have a NAS example, [NNI-NAS-Example](https://github.com/Crysple/NNI-NAS-Example), from our contributors that runs in NNI using the NAS interface.

Thanks to our lovely contributors.

And we welcome more and more people to join us!
2 changes: 2 additions & 0 deletions examples/trials/auto-gbdt/main.py
@@ -74,6 +74,8 @@ def load_data(train_path='./data/regression.train', test_path='./data/regression
def run(lgb_train, lgb_eval, params, X_test, y_test):
print('Start training...')

params['num_leaves'] = int(params['num_leaves'])

# train
gbm = lgb.train(params,
lgb_train,
2 changes: 1 addition & 1 deletion examples/trials/auto-gbdt/search_space.json
@@ -1,5 +1,5 @@
{
"num_leaves":{"_type":"choice","_value":[31, 28, 24, 20]},
"num_leaves":{"_type":"randint","_value":[20, 31]},
"learning_rate":{"_type":"choice","_value":[0.01, 0.05, 0.1, 0.2]},
"bagging_fraction":{"_type":"uniform","_value":[0.7, 1.0]},
"bagging_freq":{"_type":"choice","_value":[1, 2, 4, 8, 10]}
3 changes: 2 additions & 1 deletion src/sdk/pynni/nni/bohb_advisor/bohb_advisor.py
@@ -31,7 +31,7 @@

from nni.protocol import CommandType, send
from nni.msg_dispatcher_base import MsgDispatcherBase
from nni.utils import OptimizeMode, extract_scalar_reward
from nni.utils import OptimizeMode, extract_scalar_reward, randint_to_quniform

from .config_generator import CG_BOHB

@@ -443,6 +443,7 @@ def handle_update_search_space(self, data):
search space of this experiment
"""
search_space = data
randint_to_quniform(search_space)
cs = CS.ConfigurationSpace()
for var in search_space:
_type = str(search_space[var]["_type"])
3 changes: 2 additions & 1 deletion src/sdk/pynni/nni/evolution_tuner/evolution_tuner.py
@@ -26,7 +26,7 @@

import numpy as np
from nni.tuner import Tuner
from nni.utils import NodeType, OptimizeMode, extract_scalar_reward, split_index
from nni.utils import NodeType, OptimizeMode, extract_scalar_reward, split_index, randint_to_quniform

import nni.parameter_expressions as parameter_expressions

@@ -175,6 +175,7 @@ def update_search_space(self, search_space):
search_space : dict
"""
self.searchspace_json = search_space
randint_to_quniform(self.searchspace_json)
self.space = json2space(self.searchspace_json)

self.random_state = np.random.RandomState()