diff --git a/README.md b/README.md
index 64c49ff943..0c47f769fb 100644
--- a/README.md
+++ b/README.md
@@ -88,16 +88,17 @@ You can use these commands to get more information about the experiment
 
 ## **Documentation**
-* [Overview](docs/Overview.md)
-* [Get started](docs/GetStarted.md)
+* [NNI overview](docs/Overview.md)
+* [Quick start](docs/GetStarted.md)
+
 ## **How to**
-* [Installation](docs/InstallNNI_Ubuntu.md)
+* [Install NNI](docs/InstallNNI_Ubuntu.md)
 * [Use command line tool nnictl](docs/NNICTLDOC.md)
 * [Use NNIBoard](docs/WebUI.md)
 * [How to define search space](docs/SearchSpaceSpec.md)
+* [How to define a trial](docs/howto_1_WriteTrial.md)
 * [Config an experiment](docs/ExperimentConfig.md)
-* [Use annotation](docs/AnnotationSpec.md)
-* [Debug](docs/HowToDebug.md)
+* [How to use annotation](docs/howto_1_WriteTrial.md#nni-python-annotation)
 
 ## **Tutorials**
 * [Run an experiment on local (with multiple GPUs)?](docs/tutorial_1_CR_exp_local_api.md)
 * [Run an experiment on multiple machines?](docs/tutorial_2_RemoteMachineMode.md)
diff --git a/docs/GetStarted.md b/docs/GetStarted.md
index 096804da2a..6e124c574f 100644
--- a/docs/GetStarted.md
+++ b/docs/GetStarted.md
@@ -34,7 +34,7 @@ An experiment is to run multiple trial jobs, each trial job tries a configuratio
 
     python3 ~/nni/examples/trials/mnist-annotation/mnist.py
 
-This command will be filled in the yaml configure file below. Please refer to [here]() for how to write your own trial.
+This command will be filled in the yaml configure file below. Please refer to [here](howto_1_WriteTrial.md) for how to write your own trial.
 
 **Prepare tuner**: NNI supports several popular automl algorithms, including Random Search, Tree of Parzen Estimators (TPE), Evolution algorithm etc. Users can write their own tuner (refer to [here](howto_2_CustomizedTuner.md), but for simplicity, here we choose a tuner provided by NNI as below:
 
@@ -43,7 +43,7 @@ This command will be filled in the yaml configure file below. Please refer to [h
     classArgs:
       optimize_mode: maximize
 
-*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments pass to the tuner (the spec of builtin tuners can be found [here]()), *optimization_mode* is to indicate whether you want to maximize or minimize your trial's result.
+*builtinTunerName* is used to specify a tuner in NNI, *classArgs* are the arguments pass to the tuner, *optimization_mode* is to indicate whether you want to maximize or minimize your trial's result.
 
 **Prepare configure file**: Since you have already known which trial code you are going to run and which tuner you are going to use, it is time to prepare the yaml configure file. NNI provides a demo configure file for each trial example, `cat ~/nni/examples/trials/mnist-annotation/config.yml` to see it. Its content is basically shown below:
 
@@ -86,7 +86,8 @@ You can refer to [here](NNICTLDOC.md) for more usage guide of *nnictl* command l
 ## View experiment results
 The experiment has been running now, NNI provides WebUI for you to view experiment progress, to control your experiment, and some other appealing features. The WebUI is opened by default by `nnictl create`.
 
-## Further reading
+## Read more
+* [Tuners supported in the latest NNI release](../src/sdk/pynni/nni/README.md)
 * [Overview](Overview.md)
 * [Installation](InstallNNI_Ubuntu.md)
 * [Use command line tool nnictl](NNICTLDOC.md)
diff --git a/docs/Overview.md b/docs/Overview.md
index d072af179b..f93764266f 100644
--- a/docs/Overview.md
+++ b/docs/Overview.md
@@ -2,6 +2,13 @@
 NNI (Neural Network Intelligence) is a toolkit to help users run automated machine learning experiments. For each experiment, user only need to define a search space and update a few lines of code, and then leverage NNI build-in algorithms and training services to search the best hyper parameters and/or neural architecture.
 
+>Step 1: [Define search space](SearchSpaceSpec.md)
+
+>Step 2: [Update model codes](howto_1_WriteTrial.md)
+
+>Step 3: [Define Experiment](ExperimentConfig.md)
+
+
 <p align="center">
 <img src="..." alt="drawing"/>
 </p>
@@ -15,11 +22,6 @@ After user submits the experiment through a command line tool [nnictl](../tools/
 
 User can use the nnictl and/or a visualized Web UI nniboard to monitor and debug a given experiment.
 
-<p align="center">
-<img src="..." alt="drawing"/>
-</p>
-
-
 NNI provides a set of examples in the package to get you familiar with the above process. In the following example [/examples/trials/mnist], we had already set up the configuration and updated the training codes for you. You can directly run the following command to start an experiment.
 
 ## Key Concepts
@@ -35,28 +37,13 @@ NNI provides a set of examples in the above
 ### **Assessor**
 **Assessor** in NNI is an implementation of Assessor API for optimizing the execution of experiment.
-
 ## Learn More
 * [Get started](GetStarted.md)
-### **How to**
-* [Installation](InstallNNI_Ubuntu.md)
+* [Install NNI](InstallNNI_Ubuntu.md)
 * [Use command line tool nnictl](NNICTLDOC.md)
 * [Use NNIBoard](WebUI.md)
-* [Define search space](InstallNNI_Ubuntu.md)
-* [Use NNI sdk] - *coming soon*
-* [Config an experiment](SearchSpaceSpec.md)
-* [Use annotation](AnnotationSpec.md)
-* [Debug](HowToDebug.md)
+* [Use annotation](howto_1_WriteTrial.md#nni-python-annotation)
 ### **Tutorials**
 * [How to run an experiment on local (with multiple GPUs)?](tutorial_1_CR_exp_local_api.md)
 * [How to run an experiment on multiple machines?](tutorial_2_RemoteMachineMode.md)
-* [How to run an experiment on OpenPAI?](PAIMode.md)
-* [Try different tuners and assessors] - *coming soon*
-* [How to run an experiment on K8S services?] - *coming soon*
-* [Implement a customized tuner] - *coming soon*
-* [Implement a customized assessor] - *coming soon*
-* [Implement a custmoized weight sharing algorithm] - *coming soon*
-* [How to integrate NNI with your own custmoized training service] - *coming soon*
-### **Best practice**
-* [Compare different AutoML algorithms] - *coming soon*
-* [Serve NNI as a capability of a ML Platform] - *coming soon*
+* [How to run an experiment on OpenPAI?](PAIMode.md)
\ No newline at end of file
diff --git a/src/webui/src/components/Tensor.tsx b/src/webui/src/components/Tensor.tsx
deleted file mode 100644
index 20d18b96e7..0000000000
--- a/src/webui/src/components/Tensor.tsx
+++ /dev/null
@@ -1,94 +0,0 @@
-import * as React from 'react';
-import axios from 'axios';
-import { message } from 'antd';
-import { MANAGER_IP } from '../static/const';
-import '../static/style/tensor.scss';
-
-interface TensorState {
-    urlTensor: string;
-    idTensor: string;
-}
-
-message.config({
-    top: 250,
-    duration: 2,
-});
-
-class Tensor extends React.Component<{}, TensorState> {
-
-    public _isMounted = false;
-
-    constructor(props: {}) {
-        super(props);
-        this.state = {
-            urlTensor: '',
-            idTensor: ''
-        };
-    }
-
-    geturl(): void {
-        Object.keys(this.props).forEach(item => {
-            if (item === 'location') {
-                let tensorId = this.props[item].state;
-                if (tensorId !== undefined && this._isMounted) {
-                    this.setState({ idTensor: tensorId }, () => {
-                        axios(`${MANAGER_IP}/tensorboard`, {
-                            method: 'POST',
-                            headers: {
-                                'Content-Type': 'application/json;charset=utf-8'
-                            },
-                            params: {
-                                job_ids: tensorId
-                            }
-                        }).then(res => {
-                            if (res.status === 200) {
-                                setTimeout(
-                                    () => {
-                                        const url = new URL(res.data.endPoint);
-                                        if (url.hostname === 'localhost') {
-                                            url.hostname = window.location.hostname;
-                                        }
-                                        this.setState(
-                                            { urlTensor: url.href },
-                                            () => message.success('Successful send'));
-                                    },
-                                    1000);
-                            } else {
-                                message.error('fail to link to tensorboard');
-                            }
-                        });
-                    });
-                } else {
-                    message.warning('Please link to Trial Status page to select a trial!');
-                }
-            }
-        });
-    }
-
-    componentDidMount() {
-        this._isMounted = true;
-        this.geturl();
-    }
-
-    componentWillUnmount() {
false; - } - - render() { - const { urlTensor } = this.state; - return ( -
-
TensorBoard
-
-
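
The GetStarted.md hunks above quote only the tuner fragment of the experiment configuration and the trial command. For orientation, a minimal sketch of the config.yml they refer to is shown below. Only the tuner block (classArgs / optimize_mode) and the trial command appear verbatim in this patch; every other field name and value (authorName, trialConcurrency, maxTrialNum, useAnnotation, codeDir, gpuNum) and the TPE tuner choice are illustrative assumptions based on the experiment config spec referenced in ExperimentConfig.md, not part of this change.

    # Minimal sketch of ~/nni/examples/trials/mnist-annotation/config.yml.
    # Only the tuner block and the trial command below are taken verbatim from
    # the diff above; the remaining fields are illustrative assumptions.
    authorName: default
    experimentName: mnist_annotation_example
    trialConcurrency: 1
    maxTrialNum: 10
    useAnnotation: true            # the mnist-annotation example uses NNI python annotation
    tuner:
      builtinTunerName: TPE        # one of the built-in automl algorithms listed in GetStarted.md
      classArgs:
        optimize_mode: maximize    # maximize the trial's reported result, as described above
    trial:
      command: python3 ~/nni/examples/trials/mnist-annotation/mnist.py
      codeDir: ~/nni/examples/trials/mnist-annotation
      gpuNum: 0

Starting an experiment from such a file (for example with `nnictl create --config config.yml`) opens the WebUI described in the GetStarted.md hunk above.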