diff --git a/README.md b/README.md index 10c3391fe7..6ecca474c3 100644 --- a/README.md +++ b/README.md @@ -197,7 +197,7 @@ Within the following table, we summarized the current NNI capabilities, we are g diff --git a/docs/en_US/NAS/Benchmarks.md b/docs/en_US/NAS/Benchmarks.md index a5de456001..ffe9460edc 100644 --- a/docs/en_US/NAS/Benchmarks.md +++ b/docs/en_US/NAS/Benchmarks.md @@ -16,13 +16,13 @@ To avoid storage and legal issues, we do not provide any prepared databases. We strongly recommend users to use docker to run the generation scripts, to ease the burden of installing multiple dependencies. Please follow the following steps. -1. Clone NNI repo. Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.6`. +**Step 1.** Clone NNI repo. Replace `${NNI_VERSION}` with a released version name or branch name, e.g., `v1.6`. ```bash git clone -b ${NNI_VERSION} https://github.com/microsoft/nni ``` -2. Run docker. +**Step 2.** Run docker. For NAS-Bench-101, diff --git a/docs/en_US/TrainingService/HowToImplementTrainingService.md b/docs/en_US/TrainingService/HowToImplementTrainingService.md index 123401b6bf..923b59684a 100644 --- a/docs/en_US/TrainingService/HowToImplementTrainingService.md +++ b/docs/en_US/TrainingService/HowToImplementTrainingService.md @@ -1,10 +1,11 @@ -**How to Implement TrainingService in NNI** -=== +# How to Implement Training Service in NNI ## Overview + TrainingService is a module related to platform management and job schedule in NNI. TrainingService is designed to be easily implemented, we define an abstract class TrainingService as the parent class of all kinds of TrainingService, users just need to inherit the parent class and complete their own child class if they want to implement customized TrainingService. ## System architecture + ![](../../img/NNIDesign.jpg) The brief system architecture of NNI is shown in the picture. NNIManager is the core management module of system, in charge of calling TrainingService to manage trial jobs and the communication between different modules. Dispatcher is a message processing center responsible for message dispatch. TrainingService is a module to manage trial jobs, it communicates with nniManager module, and has different instance according to different training platform. For the time being, NNI supports [local platfrom](LocalMode.md), [remote platfrom](RemoteMachineMode.md), [PAI platfrom](PaiMode.md), [kubeflow platform](KubeflowMode.md) and [FrameworkController platfrom](FrameworkControllerMode.md). diff --git a/docs/en_US/TrainingService/Overview.md b/docs/en_US/TrainingService/Overview.md new file mode 100644 index 0000000000..77e46fafdf --- /dev/null +++ b/docs/en_US/TrainingService/Overview.md @@ -0,0 +1,50 @@ +# Training Service + +## What is Training Service? + +NNI training service is designed to allow users to focus on AutoML itself, agnostic to the underlying computing infrastructure where the trials are actually run. When migrating from one cluster to another (e.g., local machine to Kubeflow), users only need to tweak several configurations, and the experiment can be easily scaled. + +Users can use training service provided by NNI, to run trial jobs on [local machine](./LocalMode.md), [remote machines](./RemoteMachineMode.md), and on clusters like [PAI](./PaiMode.md), [Kubeflow](./KubeflowMode.md) and [FrameworkController](./FrameworkControllerMode.md). These are called *built-in training services*. 
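+
+For example, the built-in training service to use is selected with the `trainingServicePlatform` field of the experiment configuration. Below is a minimal sketch of such a configuration; the field names follow the v1 experiment config and the values (experiment name, tuner, trial command) are illustrative only — the full specification is in the configuration [reference](../Tutorial/ExperimentConfig):
+
+```yaml
+authorName: default
+experimentName: example_mnist
+trialConcurrency: 1
+maxExecDuration: 1h
+maxTrialNum: 10
+trainingServicePlatform: local   # or: remote, pai, kubeflow, frameworkcontroller
+searchSpacePath: search_space.json
+tuner:
+  builtinTunerName: TPE
+trial:
+  command: python3 mnist.py
+  codeDir: .
+  gpuNum: 0
+```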
+
+If the computing resource you want to use is not listed above, NNI provides an interface that allows users to build their own training service easily. Please refer to "[how to implement training service](./HowToImplementTrainingService)" for details.
+
+## How to use Training Service?
+
+Training service needs to be chosen and configured properly in the experiment configuration YAML file. Users can refer to the document of each training service for how to write the configuration. Also, the [reference](../Tutorial/ExperimentConfig) provides more details on the specification of the experiment configuration file.
+
+Next, users should prepare the code directory, which is specified as `codeDir` in the config file. Please note that in non-local mode, the code directory will be uploaded to the remote machine or cluster before the experiment starts. Therefore, we limit the number of files to 2000 and the total size to 300MB. If the code directory contains too many files, users can choose which files and subfolders should be excluded by adding a `.nniignore` file that works like a `.gitignore` file. For more details on how to write this file, see the [git documentation](https://git-scm.com/docs/gitignore#_pattern_format).
+
+In case users intend to use large files in their experiment (like large-scale datasets) and they are not using local mode, they can either: 1) download the data before each trial launches by adding the download command to the trial command; or 2) use a shared storage that is accessible to worker nodes. Usually, training platforms are equipped with shared storage, and NNI allows users to use it easily. Refer to the docs of each built-in training service for details.
+
+## Built-in Training Services
+
+|TrainingService|Brief Introduction|
+|---|---|
+|[__Local__](./LocalMode.html)|NNI supports running an experiment on the local machine, called local mode. In local mode, NNI runs the trial jobs and the nniManager process on the same machine, and supports GPU scheduling for trial jobs.|
+|[__Remote__](./RemoteMachineMode.html)|NNI supports running an experiment on multiple machines through an SSH channel, called remote mode. NNI assumes that you have access to those machines and have already set up the environment for running deep learning training code. NNI will submit the trial jobs to the remote machines, and schedule a suitable machine with enough GPU resources if specified.|
+|[__PAI__](./PaiMode.html)|NNI supports running an experiment on [OpenPAI](https://github.com/Microsoft/pai) (aka PAI), called PAI mode. Before starting to use NNI PAI mode, you should have an account to access an [OpenPAI](https://github.com/Microsoft/pai) cluster. See [here](https://github.com/Microsoft/pai#how-to-deploy) if you don't have an OpenPAI account and want to deploy an OpenPAI cluster. In PAI mode, your trial program will run in PAI's container created by Docker.|
+|[__Kubeflow__](./KubeflowMode.html)|NNI supports running an experiment on [Kubeflow](https://github.com/kubeflow/kubeflow), called kubeflow mode. Before starting to use NNI kubeflow mode, you should have a Kubernetes cluster, either on-premises or [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/), and an Ubuntu machine on which [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) is set up to connect to your Kubernetes cluster. If you are not familiar with Kubernetes, [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/) is a good start. In kubeflow mode, your trial program will run as a Kubeflow job in the Kubernetes cluster.|
+|[__FrameworkController__](./FrameworkControllerMode.html)|NNI supports running an experiment using [FrameworkController](https://github.com/Microsoft/frameworkcontroller), called frameworkcontroller mode. FrameworkController is built to orchestrate all kinds of applications on Kubernetes, so you don't need to install Kubeflow with a framework-specific operator such as tf-operator or pytorch-operator. You can use FrameworkController as the training service to run an NNI experiment.|
+|[__DLTS__](./DLTSMode.html)|NNI supports running an experiment using [DLTS](https://github.com/microsoft/DLWorkspace.git), an open-source toolkit developed by Microsoft that allows AI scientists to spin up an AI cluster in a turn-key fashion.|
+
+## What does Training Service do?
+
+According to the architecture shown in [Overview](../Overview), training service (platform) is responsible for three things: 1) initiating a new trial; 2) collecting metrics and communicating with the NNI core (NNI manager); 3) monitoring trial job status. To demonstrate in detail how training service works, we show its workflow from the very beginning to the moment when the first trial succeeds.
+
+Step 1. **Validate config and prepare the training platform.** Training service first checks whether the training platform the user specifies is valid (e.g., whether there is anything wrong with authentication). After that, training service starts to prepare for the experiment by making the code directory (`codeDir`) accessible to the training platform.
+
+```eval_rst
+.. Note:: Different training services have different ways to handle ``codeDir``. For example, local training service directly runs trials in ``codeDir``. Remote training service packs ``codeDir`` into a zip and uploads it to each machine. K8S-based training services copy ``codeDir`` onto a shared storage, which is either provided by the training platform itself, or configured by users in the config file.
+```
+
+Step 2. **Submit the first trial.** To initiate a trial, usually (in non-reuse mode), NNI copies a few more files (including the parameters, a launch script, etc.) onto the training platform. After that, NNI launches the trial through a subprocess, SSH, a RESTful API, etc.
+
+```eval_rst
+.. Warning:: The working directory of the trial command has exactly the same content as ``codeDir``, but can have a different path (even on different machines). Local mode is the only training service that shares one ``codeDir`` across all trials. Other training services copy ``codeDir`` from the shared copy prepared in step 1, and each trial has an independent working directory. We strongly advise users not to rely on this sharing behavior in local mode, as it will make your experiments difficult to scale to other training services.
+```
+
+Step 3. **Collect metrics.** NNI then monitors the status of the trial, updates the recorded status (e.g., from `WAITING` to `RUNNING`, from `RUNNING` to `SUCCEEDED`), and collects the metrics. Currently, most training services are implemented in an "active" way, i.e., the training service calls the RESTful API on the NNI manager to update the metrics. Note that this usually requires the machine that runs the NNI manager to be at least accessible from the worker nodes.
diff --git a/docs/en_US/TrainingService/SupportTrainingService.md b/docs/en_US/TrainingService/SupportTrainingService.md
deleted file mode 100644
index ca2b9283fc..0000000000
--- a/docs/en_US/TrainingService/SupportTrainingService.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# TrainingService
-
-NNI TrainingService provides the training platform for running NNI trial jobs. NNI supports [local](./LocalMode.md), [remote](./RemoteMachineMode.md), [pai](./PaiMode.md), [kubeflow](./KubeflowMode.md) and [frameworkcontroller](./FrameworkControllerMode.md) built-in training services.
-NNI not only provides few built-in training service options, but also provides a method for customers to build their own training service easily.
-
-## Built-in TrainingService
-
-|TrainingService|Brief Introduction|
-|---|---|
-|[__Local__](./LocalMode.md)|NNI supports running an experiment on local machine, called local mode.
Local mode means that NNI will run the trial jobs and nniManager process in same machine, and support gpu schedule function for trial jobs.| -|[__Remote__](./RemoteMachineMode.md)|NNI supports running an experiment on multiple machines through SSH channel, called remote mode. NNI assumes that you have access to those machines, and already setup the environment for running deep learning training code. NNI will submit the trial jobs in remote machine, and schedule suitable machine with enough gpu resource if specified.| -|[__Pai__](./PaiMode.md)|NNI supports running an experiment on [OpenPAI](https://github.com/Microsoft/pai) (aka pai), called pai mode. Before starting to use NNI pai mode, you should have an account to access an [OpenPAI](https://github.com/Microsoft/pai) cluster. See [here](https://github.com/Microsoft/pai#how-to-deploy) if you don't have any OpenPAI account and want to deploy an OpenPAI cluster. In pai mode, your trial program will run in pai's container created by Docker.| -|[__Kubeflow__](./KubeflowMode.md)|NNI supports running experiment on [Kubeflow](https://github.com/kubeflow/kubeflow), called kubeflow mode. Before starting to use NNI kubeflow mode, you should have a Kubernetes cluster, either on-premises or [Azure Kubernetes Service(AKS)](https://azure.microsoft.com/en-us/services/kubernetes-service/), a Ubuntu machine on which [kubeconfig](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/) is setup to connect to your Kubernetes cluster. If you are not familiar with Kubernetes, [here](https://kubernetes.io/docs/tutorials/kubernetes-basics/) is a good start. In kubeflow mode, your trial program will run as Kubeflow job in Kubernetes cluster.| -|[__FrameworkController__](./FrameworkControllerMode.md)|NNI supports running experiment using [FrameworkController](https://github.com/Microsoft/frameworkcontroller), called frameworkcontroller mode. FrameworkController is built to orchestrate all kinds of applications on Kubernetes, you don't need to install Kubeflow for specific deep learning framework like tf-operator or pytorch-operator. Now you can use FrameworkController as the training service to run NNI experiment.| - -## TrainingService Implementation - -TrainingService is designed to be easily implemented, we define an abstract class TrainingService as the parent class of all kinds of TrainingService, users just need to inherit the parent class and complete their own child class if they want to implement customized TrainingService. 
-The abstract function in TrainingService is shown below: - -```javascript -abstract class TrainingService { - public abstract listTrialJobs(): Promise; - public abstract getTrialJob(trialJobId: string): Promise; - public abstract addTrialJobMetricListener(listener: (metric: TrialJobMetric) => void): void; - public abstract removeTrialJobMetricListener(listener: (metric: TrialJobMetric) => void): void; - public abstract submitTrialJob(form: JobApplicationForm): Promise; - public abstract updateTrialJob(trialJobId: string, form: JobApplicationForm): Promise; - public abstract get isMultiPhaseJobSupported(): boolean; - public abstract cancelTrialJob(trialJobId: string, isEarlyStopped?: boolean): Promise; - public abstract setClusterMetadata(key: string, value: string): Promise; - public abstract getClusterMetadata(key: string): Promise; - public abstract cleanUp(): Promise; - public abstract run(): Promise; -} -``` - -The parent class of TrainingService has a few abstract functions, users need to inherit the parent class and implement all of these abstract functions. -For more information about how to write your own TrainingService, please [refer](https://github.com/microsoft/nni/blob/master/docs/en_US/TrainingService/HowToImplementTrainingService.md). diff --git a/docs/en_US/Tuner/PBTTuner.md b/docs/en_US/Tuner/PBTTuner.md index 6d321ff2ab..1554039a33 100644 --- a/docs/en_US/Tuner/PBTTuner.md +++ b/docs/en_US/Tuner/PBTTuner.md @@ -11,7 +11,7 @@ PBTTuner initializes a population with several trials (i.e., `population_size`). ### Provide checkpoint directory -Since some trials need to load other trial's checkpoint, users should provide a directory (i.e., `all_checkpoint_dir`) which is accessible by every trial. It is easy for local mode, users could directly use the default directory or specify any directory on the local machine. For other training services, users should follow [the document of those training services](../TrainingService/SupportTrainingService.md) to provide a directory in a shared storage, such as NFS, Azure storage. +Since some trials need to load other trial's checkpoint, users should provide a directory (i.e., `all_checkpoint_dir`) which is accessible by every trial. It is easy for local mode, users could directly use the default directory or specify any directory on the local machine. For other training services, users should follow [the document of those training services](../TrainingService/Overview.md) to provide a directory in a shared storage, such as NFS, Azure storage. ### Modify your trial code diff --git a/docs/en_US/Tutorial/ExperimentConfig.md b/docs/en_US/Tutorial/ExperimentConfig.md index bfe7baf3f0..c3b1cb247d 100644 --- a/docs/en_US/Tutorial/ExperimentConfig.md +++ b/docs/en_US/Tutorial/ExperimentConfig.md @@ -228,7 +228,7 @@ Note: The maxExecDuration spec set the time of an experiment, not a trial job. I ### versionCheck -Optional. Bool. Default: false. +Optional. Bool. Default: true. NNI will check the version of nniManager process and the version of trialKeeper in remote, pai and kubernetes platform. If you want to disable version check, you could set versionCheck be false. diff --git a/docs/en_US/Tutorial/QuickStart.md b/docs/en_US/Tutorial/QuickStart.md index 81c5b9e05d..5b8e2f50be 100644 --- a/docs/en_US/Tutorial/QuickStart.md +++ b/docs/en_US/Tutorial/QuickStart.md @@ -4,23 +4,29 @@ We currently support Linux, macOS, and Windows. Ubuntu 16.04 or higher, macOS 10.14.1, and Windows 10.1809 are tested and supported. 
Simply run the following `pip install` in an environment that has `python >= 3.5`. -**Linux and macOS** +### Linux and macOS ```bash - python3 -m pip install --upgrade nni +python3 -m pip install --upgrade nni ``` -**Windows** +### Windows ```bash - python -m pip install --upgrade nni +python -m pip install --upgrade nni ``` -Note: +```eval_rst +.. Note:: For Linux and macOS, ``--user`` can be added if you want to install NNI in your home directory; this does not require any special privileges. +``` -* For Linux and macOS, `--user` can be added if you want to install NNI in your home directory; this does not require any special privileges. -* If there is an error like `Segmentation fault`, please refer to the [FAQ](FAQ.md). -* For the `system requirements` of NNI, please refer to [Install NNI on Linux&Mac](InstallationLinux.md) or [Windows](InstallationWin.md). +```eval_rst +.. Note:: If there is an error like ``Segmentation fault``, please refer to the :doc:`FAQ `. +``` + +```eval_rst +.. Note:: For the system requirements of NNI, please refer to :doc:`Install NNI on Linux & Mac ` or :doc:`Windows `. +``` ## "Hello World" example on MNIST @@ -33,7 +39,12 @@ def run_trial(params): # Input data mnist = input_data.read_data_sets(params['data_dir'], one_hot=True) # Build network - mnist_network = MnistNetwork(channel_1_num=params['channel_1_num'], channel_2_num=params['channel_2_num'], conv_size=params['conv_size'], hidden_size=params['hidden_size'], pool_size=params['pool_size'], learning_rate=params['learning_rate']) + mnist_network = MnistNetwork(channel_1_num=params['channel_1_num'], + channel_2_num=params['channel_2_num'], + conv_size=params['conv_size'], + hidden_size=params['hidden_size'], + pool_size=params['pool_size'], + learning_rate=params['learning_rate']) mnist_network.build_network() test_acc = 0.0 @@ -44,11 +55,20 @@ def run_trial(params): test_acc = mnist_network.evaluate(mnist) if __name__ == '__main__': - params = {'data_dir': '/tmp/tensorflow/mnist/input_data', 'dropout_rate': 0.5, 'channel_1_num': 32, 'channel_2_num': 64, 'conv_size': 5, 'pool_size': 2, 'hidden_size': 1024, 'learning_rate': 1e-4, 'batch_num': 2000, 'batch_size': 32} + params = {'data_dir': '/tmp/tensorflow/mnist/input_data', + 'dropout_rate': 0.5, + 'channel_1_num': 32, + 'channel_2_num': 64, + 'conv_size': 5, + 'pool_size': 2, + 'hidden_size': 1024, + 'learning_rate': 1e-4, + 'batch_num': 2000, + 'batch_size': 32} run_trial(params) ``` -Note: If you want to see the full implementation, please refer to [examples/trials/mnist-tfv1/mnist_before.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/mnist_before.py). +If you want to see the full implementation, please refer to [examples/trials/mnist-tfv1/mnist_before.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/mnist_before.py). The above code can only try one set of parameters at a time; if we want to tune learning rate, we need to manually modify the hyperparameter and start the trial again and again. @@ -69,9 +89,9 @@ output: one optimal hyperparameter configuration If you want to use NNI to automatically train your model and find the optimal hyper-parameters, you need to do three changes based on your code: -**Three steps to start an experiment** +### Three steps to start an experiment -**Step 1**: Give a `Search Space` file in JSON, including the `name` and the `distribution` (discrete-valued or continuous-valued) of all the hyperparameters you need to search. 
+**Step 1**: Write a `Search Space` file in JSON, including the `name` and the `distribution` (discrete-valued or continuous-valued) of all the hyperparameters you need to search. ```diff - params = {'data_dir': '/tmp/tensorflow/mnist/input_data', 'dropout_rate': 0.5, 'channel_1_num': 32, 'channel_2_num': 64, @@ -85,7 +105,7 @@ If you want to use NNI to automatically train your model and find the optimal hy + } ``` -*Implemented code directory: [search_space.json](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/search_space.json)* +*Example: [search_space.json](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/search_space.json)* **Step 2**: Modify your `Trial` file to get the hyperparameter set from NNI and report the final result to NNI. @@ -110,7 +130,7 @@ If you want to use NNI to automatically train your model and find the optimal hy run_trial(params) ``` -*Implemented code directory: [mnist.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/mnist.py)* +*Example: [mnist.py](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/mnist.py)* **Step 3**: Define a `config` file in YAML which declares the `path` to the search space and trial files. It also gives other information such as the tuning algorithm, max trial number, and max duration arguments. @@ -133,31 +153,37 @@ trial: gpuNum: 0 ``` -Note, **for Windows, you need to change the trial command from `python3` to `python`**. +```eval_rst +.. Note:: If you are planning to use remote machines or clusters as your :doc:`training service <../TrainingService/Overview>`, to avoid too much pressure on network, we limit the number of files to 2000 and total size to 300MB. If your codeDir contains too many files, you can choose which files and subfolders should be excluded by adding a ``.nniignore`` file that works like a ``.gitignore`` file. For more details on how to write this file, see the `git documentation `_. +``` -*Implemented code directory: [config.yml](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/config.yml)* +*Example: [config.yml](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/config.yml) [.nniignore](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1/.nniignore)* -All the cod above is already prepared and stored in [examples/trials/mnist-tfv1/](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1). +All the code above is already prepared and stored in [examples/trials/mnist-tfv1/](https://github.com/Microsoft/nni/tree/master/examples/trials/mnist-tfv1). -**Linux and macOS** +#### Linux and macOS Run the **config.yml** file from your command line to start an MNIST experiment. ```bash - nnictl create --config nni/examples/trials/mnist-tfv1/config.yml +nnictl create --config nni/examples/trials/mnist-tfv1/config.yml ``` -**Windows** +#### Windows Run the **config_windows.yml** file from your command line to start an MNIST experiment. -Note: if you're using NNI on Windows, you need to change `python3` to `python` in the config.yml file or use the config_windows.yml file to start the experiment. - ```bash - nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml +nnictl create --config nni\examples\trials\mnist-tfv1\config_windows.yml +``` + +```eval_rst +.. 
Note:: If you're using NNI on Windows, you probably need to change ``python3`` to ``python`` in the config.yml file or use the config_windows.yml file to start the experiment. ``` -Note: `nnictl` is a command line tool that can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc. Click [here](Nnictl.md) for more usage of `nnictl` +```eval_rst +.. Note:: ``nnictl`` is a command line tool that can be used to control experiments, such as start/stop/resume an experiment, start/stop NNIBoard, etc. Click :doc:`here ` for more usage of ``nnictl``. +``` Wait for the message `INFO: Successfully started experiment!` in the command line. This message indicates that your experiment has been successfully started. And this is what we expect to get: diff --git a/docs/en_US/_templates/index.html b/docs/en_US/_templates/index.html index 9ca37b459c..5cc8257298 100644 --- a/docs/en_US/_templates/index.html +++ b/docs/en_US/_templates/index.html @@ -219,7 +219,7 @@

NNI capabilities in a glance
diff --git a/docs/en_US/conf.py b/docs/en_US/conf.py index 16808e2143..41ac9c939c 100644 --- a/docs/en_US/conf.py +++ b/docs/en_US/conf.py @@ -46,6 +46,7 @@ 'sphinxarg.ext', 'sphinx.ext.napoleon', 'sphinx.ext.viewcode', + 'sphinx.ext.intersphinx', 'nbsphinx', ] diff --git a/docs/en_US/training_services.rst b/docs/en_US/training_services.rst index fd63f9bcaf..435abc0c26 100644 --- a/docs/en_US/training_services.rst +++ b/docs/en_US/training_services.rst @@ -2,7 +2,7 @@ Introduction to NNI Training Services ===================================== .. toctree:: - Overview <./TrainingService/SupportTrainingService> + Overview <./TrainingService/Overview> Local<./TrainingService/LocalMode> Remote<./TrainingService/RemoteMachineMode> OpenPAI<./TrainingService/PaiMode> diff --git a/docs/static/css/custom.css b/docs/static/css/custom.css index ba10407098..e1a667bf0a 100644 --- a/docs/static/css/custom.css +++ b/docs/static/css/custom.css @@ -113,4 +113,8 @@ td.framework{ .or{ vertical-align: middle; -} \ No newline at end of file +} + +.wy-plain-list-disc li, .rst-content .section ul li, .rst-content .toctree-wrapper ul li, article ul li { + margin-bottom: 0px; +} diff --git a/examples/trials/mnist-tfv1/.nniignore b/examples/trials/mnist-tfv1/.nniignore new file mode 100644 index 0000000000..04886e1d80 --- /dev/null +++ b/examples/trials/mnist-tfv1/.nniignore @@ -0,0 +1,7 @@ +# Exclude the following directories when uploading codeDir. +data +logs +checkpoints + +# They can also be files +outputs.log