This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Merge master to dev-enas #117

Merged 23 commits into dev-enas from master on Sep 25, 2018
8 changes: 4 additions & 4 deletions .travis.yml
@@ -4,14 +4,14 @@ language: python
python:
- "3.6"
before_install:
- wget https://nodejs.org/dist/v10.9.0/node-v10.9.0-linux-x64.tar.xz
- tar xf node-v10.9.0-linux-x64.tar.xz
- sudo mv node-v10.9.0-linux-x64 /usr/local/node
- wget https://nodejs.org/dist/v10.10.0/node-v10.10.0-linux-x64.tar.xz
- tar xf node-v10.10.0-linux-x64.tar.xz
- sudo mv node-v10.10.0-linux-x64 /usr/local/node
- export PATH=/usr/local/node/bin:$PATH
- sudo sh -c 'PATH=/usr/local/node/bin:$PATH yarn global add serve'
install:
- make
- make install
- make dev-install
- export PATH=$HOME/.nni/bin:$PATH
before_script:
- cd test/naive
4 changes: 2 additions & 2 deletions Makefile
@@ -20,7 +20,7 @@ else # is normal user
endif

## Dependency information
NODE_VERSION ?= v10.9.0
NODE_VERSION ?= v10.10.0
NODE_TARBALL ?= node-$(NODE_VERSION)-linux-x64.tar.xz
NODE_PATH ?= $(INSTALL_PREFIX)/nni/node

@@ -294,7 +294,7 @@ ifdef _ROOT
$(error You should not develop NNI as root)
endif
ifdef _MISS_DEPS
$(error Please install Node.js, Yarn, and Serve to develop NNI)
# $(error Please install Node.js, Yarn, and Serve to develop NNI)
endif
#$(_INFO) Pass! $(_END)

31 changes: 20 additions & 11 deletions README.md
@@ -26,31 +26,40 @@ The tool dispatches and runs trial jobs that generated by tuning algorithms to s
* As a researcher and data scientist, you want to implement your own AutoML algorithms and compare with other algorithms
* As a ML platform owner, you want to support AutoML in your platform

# Getting Started with NNI
# Get Started with NNI

## **Installation**
Install through python pip. (the current version only supports linux, nni on ubuntu 16.04 or newer has been well tested)
* requirements: python >= 3.5, git, wget
pip installation prerequisites:
* Linux (Ubuntu 16.04 or newer has been well tested)
* Python >= 3.5
* git, wget

```
pip3 install -v --user git+https://github.com/Microsoft/nni.git@v0.1
source ~/.bashrc
```

## **Quick start: run your first experiment at local**
It only requires 3 steps to start an experiment on NNI:
![](./docs/3_steps.jpg)


NNI provides a set of examples in the package to get you familiar with the above process. In the following example [/examples/trials/mnist], we have already set up the configuration and updated the training code for you. You can directly run the following command to start an experiment.

## **Quick start: run an experiment at local**
Requirements:
* NNI installed on your local machine
* tensorflow installed
**NOTE**: The following example is an experiment built on TensorFlow; make sure you have **TensorFlow installed** before running the following command.

Run the following command to create an experiment for [mnist]
Try it out:
```bash
nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml
nnictl create --config ~/nni/examples/trials/mnist/config.yml
```
This command will start an experiment and a WebUI. The WebUI endpoint will be shown in the output of this command (for example, `http://localhost:8080`). Open this URL in your browser. You can analyze your experiment through WebUI, or browse trials' tensorboard.

In the command output, find out the **Web UI url** and open it in your browser. You can analyze your experiment through WebUI, or browse trials' tensorboard.

To learn more about how this example was constructed and how to analyze the experiment results in the NNI Web UI, please refer to [How to write a trial run on NNI (MNIST as an example)?](docs/WriteYourTrial.md)

## **Please refer to [Get Started Tutorial](docs/GetStarted.md) for more detailed information.**
## More tutorials
* [How to write a trial running on NNI (Mnist as an example)?](docs/WriteYourTrial.md)

* [Tutorial of NNI python annotation.](tools/nni_annotation/README.md)
* [Tuners supported by NNI.](src/sdk/pynni/nni/README.md)
* [How to enable early stop (i.e. assessor) in an experiment?](docs/EnableAssessor.md)
Expand Down
2 changes: 1 addition & 1 deletion deployment/Dockerfile.build.base
@@ -40,7 +40,7 @@ RUN pip3 --no-cache-dir install \
numpy==1.14.3 scipy==1.1.0

#
#Install node 10.9.0, yarn 1.9.4, NNI v0.1
#Install node 10.10.0, yarn 1.9.4, NNI v0.1
#
RUN git clone -b v0.1 https://github.com/Microsoft/nni.git
RUN cd nni && sh install.sh
Binary file added docs/3_steps.jpg
6 changes: 4 additions & 2 deletions docs/GetStarted.md
@@ -1,14 +1,16 @@
**Getting Started with NNI**
**Get Started with NNI**
===

## **Installation**
* __Dependencies__

python >= 3.5
git
wget

Python pip should also be correctly installed. You can use `which pip` or `pip -V` to check this in Linux.

* Note: For now, we don't support virtual environment.
* Note: we don't support virtual environment in current releases.

* __Install NNI through pip__

53 changes: 53 additions & 0 deletions docs/HowToContribute.md
@@ -0,0 +1,53 @@
**How to contribute**
===
## Best practices for debugging NNI source code

To debug NNI source code, your development environment should be Ubuntu 16.04 (or above) with Python 3 and pip 3 installed; then follow the steps below.

**1. Clone the source code**

Run the command
```
git clone https://github.com/Microsoft/nni.git
```
to clone the source code

**2. Prepare the debug environment and install dependencies**

Change directory to the source code folder, then run the command
```
make install-dependencies
```
to install the dependent tools for the environment

**3. Build source code**

Run the command
```
make build
```
to build the source code

**4. Install NNI to development environment**

Run the command
```
make dev-install
```
to install the distribution content into the development environment and create the CLI scripts

**5. Check if the environment is ready**

Now, you can try to start an experiment to check whether your environment is ready.
For example, run the command
```
nnictl create --config ~/nni/examples/trials/mnist/config.yml
```
Then open the Web UI to check that everything is OK

**6. Redeploy**

After you change some code, just repeat **step 4** to rebuild your code; the change will then take effect immediately
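
Putting the six steps together, the whole debug cycle looks roughly like the sketch below; the clone location and example config path simply follow the commands shown in the steps above.
```bash
git clone https://github.com/Microsoft/nni.git                  # step 1: clone the source code
cd nni
make install-dependencies                                        # step 2: install dependent tools
make build                                                       # step 3: build the source code
make dev-install                                                 # step 4: install to the development environment
nnictl create --config ~/nni/examples/trials/mnist/config.yml   # step 5: start a test experiment
# After changing code, repeat step 4 to redeploy:
make dev-install                                                 # step 6
```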

---
Finally, we wish you a wonderful day.
3 changes: 0 additions & 3 deletions docs/ToContribute.md

This file was deleted.

98 changes: 72 additions & 26 deletions docs/WriteYourTrial.md
@@ -1,9 +1,14 @@
**Write a Trial which can Run on NNI**
**Write a Trial Run on NNI**
===
There would be only a few changes on your existing trial(model) code to make the code runnable on NNI. We provide two approaches for you to modify your code: `Python annotation` and `NNI APIs for trial`

## NNI APIs
We also support NNI APIs for trial code. By using this approach, you should first prepare a search space file. An example is shown below:
A **Trial** in NNI is an individual attempt at applying a set of parameters to a model.

To define an NNI trial, you first need to define the set of parameters and then update the model. NNI provides two approaches for defining a trial: `NNI API` and `NNI Python annotation`.

## NNI API
>Step 1 - Prepare a SearchSpace parameters file.

An example is shown below:
```
{
"dropout_rate":{"_type":"uniform","_value":[0.1,0.5]},
@@ -12,32 +17,71 @@ We also support NNI APIs for trial code. By using this approach, you should firs
"learning_rate":{"_type":"uniform","_value":[0.0001, 0.1]}
}
```
You can refer to [here](SearchSpaceSpec.md) for the tutorial of search space.
Refer to [SearchSpaceSpec.md](SearchSpaceSpec.md) to learn more about search space.

Then, include `import nni` in your trial code to use NNI APIs. Using the line:
```
RECEIVED_PARAMS = nni.get_parameters()
```
to get hyper-parameters' values assigned by tuner. `RECEIVED_PARAMS` is an object, for example:
```
{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
```
>Step 2 - Update model codes
~~~~
2.1 Declare NNI API
Include `import nni` in your trial code to use NNI APIs.

2.2 Get predefined parameters
Use the following code snippet:

RECEIVED_PARAMS = nni.get_parameters()

to get hyper-parameters' values assigned by tuner. `RECEIVED_PARAMS` is an object, for example:

{"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}

2.3 Report NNI results
Use the API:

On the other hand, you can use the API: `nni.report_intermediate_result(accuracy)` to send `accuracy` to assessor. And use `nni.report_final_result(accuracy)` to send `accuracy` to tuner. Here `accuracy` could be any python data type, but **NOTE that if you use built-in tuner/assessor, `accuracy` should be a numerical variable(e.g. float, int)**.
`nni.report_intermediate_result(accuracy)`

to send `accuracy` to assessor.

Use the API:

The assessor will decide which trial should early stop based on the history performance of trial(intermediate result of one trial).
The tuner will generate next parameters/architecture based on the explore history(final result of all trials).
`nni.report_final_result(accuracy)`

to send `accuracy` to tuner.
~~~~

**NOTE**:
~~~~
accuracy - The `accuracy` could be any python object, but if you use NNI built-in tuner/assessor, `accuracy` should be a numerical variable (e.g. float, int).
assessor - The assessor will decide which trial should stop early based on the trial's performance history (the intermediate results of one trial).
tuner - The tuner will generate the next parameters/architecture based on the exploration history (the final results of all trials).
~~~~
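
Putting steps 2.1-2.3 together, a minimal trial sketch might look like the following. This is only an illustration: `train_one_epoch` is a hypothetical stand-in for your real training loop, and the parameter names simply follow the search space example from step 1.
```python
import nni


def train_one_epoch(params):
    # Hypothetical stand-in for real model training; a genuine trial would
    # build and fit a model here using the received hyper-parameters.
    return 1.0 - params["dropout_rate"] * 0.1


if __name__ == '__main__':
    # Step 2.2: hyper-parameter values assigned by the tuner, for example
    # {"conv_size": 2, "hidden_size": 124, "learning_rate": 0.0307, "dropout_rate": 0.2029}
    received_params = nni.get_parameters()

    accuracy = 0.0
    for epoch in range(10):
        accuracy = train_one_epoch(received_params)
        # Step 2.3: intermediate results are sent to the assessor
        nni.report_intermediate_result(accuracy)

    # Step 2.3: the final result is sent to the tuner
    nni.report_final_result(accuracy)
```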

>Step 3 - Enable NNI API

To enable NNI API mode, you need to set useAnnotation to *false* and provide the path to the SearchSpace file (the one you just defined in step 1):

In the yaml configure file, you need two lines to enable NNI APIs:
```
useAnnotation: false
searchSpacePath: /path/to/your/search_space.json
```

You can refer to [here](../examples/trials/README.md) for more information about how to write trial code using NNI APIs.
You can refer to [here](ExperimentConfig.md) for more information about how to set up experiment configurations.

You can refer to [here](../examples/trials/README.md) for more information about how to write trial code using NNI APIs.

## NNI Python Annotation
An alternative way to write a trial is to use NNI's syntax for Python. As simple as any annotation, NNI annotation works like comments in your code. You don't have to make structural or any other big changes to your existing code. With a few lines of NNI annotation, you will be able to:
* annotate the variables you want to tune
* specify the range in which you want to tune the variables
* annotate which variable you want to report as an intermediate result to the `assessor`
* annotate which variable you want to report as the final result (e.g. model accuracy) to the `tuner`.

Again, taking MNIST as an example, it only requires 2 steps to write a trial with NNI Annotation.

>Step 1 - Update codes with annotations

Please refer to the following TensorFlow code snippet for NNI Annotation; the 4 highlighted lines are annotations that help you to: (1) tune batch\_size and (2) dropout\_rate, (3) report test\_acc every 100 steps, and (4) finally report test\_acc as the final result.

>What is noteworthy: since these newly added lines are annotations, they do not actually change your previous code logic; you can still run your code as usual in environments without NNI installed.

## NNI Annotation
We designed a new syntax for users to annotate the variables they want to tune and in what range they want to tune the variables. Also, they can annotate which variable they want to report as intermediate result to `assessor`, and which variable to report as the final result (e.g. model accuracy) to `tuner`. A really appealing feature of our NNI annotation is that it exists as comments in your code, which means you can run your code as before without NNI. Let's look at an example, below is a piece of tensorflow code:
```diff
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
@@ -64,14 +108,16 @@ with tf.Session() as sess:
+ """@nni.report_final_result(test_acc)"""
```

Let's say you want to tune batch\_size and dropout\_rate, and report test\_acc every 100 steps, at last report test\_acc as final result. With our NNI annotation, your code would look like below:
>NOTE
>>`@nni.variable` will take effect on its following line
>>
>>`@nni.report_intermediate_result`/`@nni.report_final_result` will send the data to assessor/tuner at that line.
>>
>>Please refer to [Annotation README](../tools/annotation/README.md) for more information about annotation syntax and its usage.


Simply adding four lines would make your code runnable on NNI. You can still run your code independently. `@nni.variable` works on its next line assignment, and `@nni.report_intermediate_result`/`@nni.report_final_result` would send the data to assessor/tuner at that line. Please refer to [here](../tools/annotation/README.md) for more annotation syntax and more powerful usage. In the yaml configure file, you need one line to enable NNI annotation:
>Step 2 - Enable NNI Annotation
In the yaml configure file, you need to set *useAnnotation* to true to enable NNI annotation:
```
useAnnotation: true
```

For users to correctly leverage NNI annotation, we briefly introduce how NNI annotation works here: NNI precompiles users' trial code to find all the annotations, each of which is one line starting with `"""@nni`. NNI then replaces each annotation with a corresponding NNI API call at the location of the annotation.
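
As a rough illustration, the snippet below shows what such annotation lines look like in plain Python; it runs unchanged without NNI because the annotations are just string literals. The `nni.choice` search-space expression and the concrete values are assumptions for illustration; see the Annotation README linked earlier for the authoritative syntax.
```python
"""@nni.variable(nni.choice(32, 64, 128), name=batch_size)"""
batch_size = 128   # the annotation above takes effect on this assignment

test_acc = 0.9     # placeholder for a metric computed by your model

"""@nni.report_intermediate_result(test_acc)"""   # replaced by an API call that reports to the assessor
"""@nni.report_final_result(test_acc)"""          # replaced by an API call that reports to the tuner
```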

**Note: in your trial code, you can use either NNI APIs or NNI annotation, but not both simultaneously.**