
Add release note and update version numbers for v1.5 #2300

Merged
merged 5 commits on Apr 16, 2020
4 changes: 2 additions & 2 deletions README.md
@@ -25,7 +25,7 @@ The tool manages automated machine learning (AutoML) experiments, **dispatches a
* Researchers and data scientists who want to easily **implement and experiment with new AutoML algorithms**, be it a hyperparameter tuning algorithm, a neural architecture search algorithm, or a model compression algorithm.
* ML Platform owners who want to **support AutoML in their platform**.

### **NNI v1.4 has been released! &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**
### **NNI v1.5 has been released! &nbsp;<a href="#nni-released-reminder"><img width="48" src="docs/img/release_icon.png"></a>**

## **NNI capabilities in a glance**

@@ -236,7 +236,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples via cloning the source code.

```bash
git clone -b v1.4 https://github.com/Microsoft/nni.git
git clone -b v1.5 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
42 changes: 42 additions & 0 deletions docs/en_US/Release.md
@@ -1,5 +1,47 @@
# ChangeLog

## Release 1.5 - 4/13/2020

### New Features and Documentation

#### Hyper-Parameter Optimizing

* New tuner: [Population Based Training (PBT)](https://github.com/microsoft/nni/blob/master/docs/en_US/Tuner/PBTTuner.md)

* Trials can now report infinity and NaN as results (a sketch follows this list)
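
A rough sketch of what this looks like from a trial script is given below. It is illustrative only: the training step is a placeholder and the `learning_rate` search-space key is assumed, but the reporting calls are the standard NNI trial API.

```python
import nni

def train_one_epoch(lr, epoch):
    # Placeholder for a real training step; a diverging run could yield inf or NaN.
    return 1.0 / (lr * (epoch + 1))

def main():
    params = nni.get_next_parameter()       # hyperparameters chosen by the tuner
    lr = params.get("learning_rate", 0.01)  # hypothetical search-space key

    loss = float("inf")
    for epoch in range(3):
        loss = train_one_epoch(lr, epoch)
        # Since v1.5, non-finite values such as float("inf") or float("nan")
        # can be reported without breaking the experiment.
        nni.report_intermediate_result(loss)

    nni.report_final_result(loss)

if __name__ == "__main__":
    main()
```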

#### Neural Architecture Search

* New NAS algorithm: [TextNAS](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/TextNAS.md)
* ENAS and DARTS now support [visualization](https://github.com/microsoft/nni/blob/master/docs/en_US/NAS/Visualization.md) through the web UI.

#### Model Compression

* New Pruner: [GradientRankFilterPruner](https://github.com/microsoft/nni/blob/master/docs/en_US/Compressor/Pruner.md#gradientrankfilterpruner)
* Compressors will validate configuration by default
* Refactor: the optimizer is now passed as an input argument to pruners, making it easier to support DataParallel and to run iterative pruning more efficiently. This is a breaking change for users of iterative pruning algorithms (see the sketch after this list).
* Model compression examples are refactored and improved
* Added documentation for [implementing compressing algorithm](https://github.com/microsoft/nni/blob/master/docs/en_US/Compressor/Framework.md)
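
To make the refactor concrete, the sketch below shows roughly how a pruner is constructed with the optimizer passed in. The import path, class name, and configuration keys are assumptions for illustration only; consult the pruner documentation linked above for the authoritative interface.

```python
# Illustrative sketch only: import path and pruner signature are assumed,
# not copied from the NNI v1.5 documentation.
import torch
import torch.nn as nn
from nni.compression.torch import AGPPruner  # assumed v1.5 import path

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Compressors validate this configuration by default in v1.5
# (assumed keys for an iterative/AGP-style schedule).
config_list = [{
    'initial_sparsity': 0.0,
    'final_sparsity': 0.5,
    'start_epoch': 0,
    'end_epoch': 10,
    'frequency': 1,
    'op_types': ['default'],
}]

# Passing the optimizer lets the pruner hook into training steps, which is
# what enables DataParallel support and more efficient iterative pruning.
pruner = AGPPruner(model, config_list, optimizer)
model = pruner.compress()

# Training then proceeds as usual; the iterative pruner updates its masks as
# epochs advance (e.g. by notifying it of each new epoch in the training loop).
```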

#### Training Service

* Kubeflow now supports PyTorchJob CRD v1 (thanks to external contributor @jiapinai)
* Experimental [DLTS](https://github.com/microsoft/nni/blob/master/docs/en_US/TrainingService/DLTSMode.md) support

#### Overall Documentation Improvement

* Documentation has been significantly improved in grammar, spelling, and wording (thanks to external contributor @AHartNtkn)

### Fixed Bugs

* ENAS cannot have more than one LSTM layer (thanks to external contributor @marsggbo)
* NNI manager's timers never unsubscribe (thanks to external contributor @guilhermehn)
* NNI manager may exhaust heap memory (thanks to external contributor @Sundrops)
* Batch tuner does not support customized trials (#2075)
* Experiment cannot be killed if it failed on start (#2080)
* Non-number type metrics break web UI (#2278)
* A bug in the lottery ticket pruner
* Other minor glitches

## Release 1.4 - 2/19/2020

### Major Features
4 changes: 2 additions & 2 deletions docs/en_US/Tutorial/InstallationLinux.md
@@ -19,7 +19,7 @@ Installation on Linux and macOS follows the same instructions, given below.
Prerequisites: `python 64-bit >=3.5`, `git`, `wget`

```bash
git clone -b v1.4 https://github.com/Microsoft/nni.git
git clone -b v1.5 https://github.com/Microsoft/nni.git
cd nni
./install.sh
```
@@ -35,7 +35,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples via cloning the source code.

```bash
git clone -b v1.4 https://github.com/Microsoft/nni.git
git clone -b v1.5 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
6 changes: 3 additions & 3 deletions docs/en_US/Tutorial/InstallationWin.md
@@ -19,7 +19,7 @@ Anaconda or Miniconda is highly recommended to manage multiple Python environments
Prerequisites: `python 64-bit >=3.5`, `git`, `PowerShell`.

```bash
git clone -b v1.4 https://github.com/Microsoft/nni.git
git clone -b v1.5 https://github.com/Microsoft/nni.git
cd nni
powershell -ExecutionPolicy Bypass -file install.ps1
```
@@ -31,7 +31,7 @@ The following example is built on TensorFlow 1.x. Make sure **TensorFlow 1.x is
* Download the examples via cloning the source code.

```bash
git clone -b v1.4 https://github.com/Microsoft/nni.git
git clone -b v1.5 https://github.com/Microsoft/nni.git
```

* Run the MNIST example.
@@ -136,4 +136,4 @@ Note:
* [How to run an experiment on multiple machines?](../TrainingService/RemoteMachineMode.md)
* [How to run an experiment on OpenPAI?](../TrainingService/PaiMode.md)
* [How to run an experiment on Kubernetes through Kubeflow?](../TrainingService/KubeflowMode.md)
* [How to run an experiment on Kubernetes through FrameworkController?](../TrainingService/FrameworkControllerMode.md)
* [How to run an experiment on Kubernetes through FrameworkController?](../TrainingService/FrameworkControllerMode.md)
2 changes: 1 addition & 1 deletion docs/en_US/conf.py
@@ -28,7 +28,7 @@
# The short X.Y version
version = ''
# The full version, including alpha/beta/rc tags
release = 'v1.4'
release = 'v1.5'

# -- General configuration ---------------------------------------------------
