This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

update doc of community sharing #2640

Merged · 9 commits · Jul 16, 2020
2 changes: 1 addition & 1 deletion docs/en_US/CommunitySharings/RecommendersSvd.md
@@ -1,4 +1,4 @@
-# Automatically tuning SVD on NNI
+# Automatically tuning SVD (NNI in Recommenders)

In this tutorial, we first introduce the GitHub repo [Recommenders](https://github.com/Microsoft/Recommenders). It is a repository that provides examples and best practices for building recommendation systems, offered as Jupyter notebooks. It has various models that are popular and widely deployed in recommendation systems. To provide a complete end-to-end experience, they present each example in five key tasks, as shown below:

10 changes: 0 additions & 10 deletions docs/en_US/CommunitySharings/TuningSystems.md

This file was deleted.

13 changes: 13 additions & 0 deletions docs/en_US/CommunitySharings/automodel.rst
@@ -0,0 +1,13 @@
######################
Automatic Model Tuning
######################

NNI can be applied to various model tuning tasks. Some state-of-the-art model search algorithms, such as EfficientNet, can be easily built on NNI. Popular models, e.g., recommendation models, can be tuned with NNI. The following are some use cases that illustrate how to leverage NNI in your model tuning tasks and how to build your own pipeline with NNI.

.. toctree::
:maxdepth: 1

Tuning SVD automatically <RecommendersSvd>
EfficientNet on NNI <../TrialExample/EfficientNet>
Automatic Model Architecture Search for Reading Comprehension <../TrialExample/SquadEvolutionExamples>
Parallelizing Optimization for TPE <ParallelizingTpeSearch>
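The pipeline mentioned in the introduction above boils down to a small trial script that asks NNI for a parameter set and reports a metric back. Below is a minimal, hedged sketch of such a trial; the parameter names (`lr`, `hidden_size`) and the `train_and_evaluate` helper are hypothetical placeholders rather than code from the examples listed in the toctree.

```python
# Minimal sketch of an NNI trial for hyper-parameter tuning (illustrative only).
# The parameter names and train_and_evaluate() are hypothetical placeholders.
import nni


def train_and_evaluate(lr, hidden_size):
    # Placeholder for a real training loop; should return a validation metric.
    return 0.0


if __name__ == '__main__':
    params = nni.get_next_parameter()   # configuration proposed by the NNI tuner
    metric = train_and_evaluate(params['lr'], params['hidden_size'])
    nni.report_final_result(metric)     # feed the result back to guide later trials
```

The ranges the tuner samples from are declared separately in the experiment's search space; the linked tutorials show complete, runnable setups.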
12 changes: 12 additions & 0 deletions docs/en_US/CommunitySharings/autosys.rst
@@ -0,0 +1,12 @@
#######################
Automatic System Tuning
#######################

The performance of systems, such as databases or tensor operator implementations, often needs to be tuned to adapt to specific hardware configurations, targeted workloads, etc. Manually tuning a system is complicated and often requires a detailed understanding of the hardware and workload. NNI can make such tasks much easier and help system owners find the best configuration for the system automatically. The detailed design philosophy of automatic system tuning can be found in `this paper <https://dl.acm.org/doi/10.1145/3352020.3352031>`__. The following are some typical cases in which NNI can help.

.. toctree::
:maxdepth: 1

Tuning SPTAG (Space Partition Tree And Graph) automatically <SptagAutoTune>
Tuning the performance of RocksDB <../TrialExample/RocksdbExamples>
Tuning Tensor Operators automatically <../TrialExample/OpEvoExamples>
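The system-tuning cases listed above share a common pattern: expose the system's knobs as an NNI search space, benchmark each proposed configuration, and report the measured metric. A rough sketch of that pattern is shown below; the knob names and the `benchmark` helper are hypothetical and not taken from the linked examples.

```python
# Sketch of a system-tuning trial (illustrative only): NNI proposes knob values,
# the trial applies them, runs the workload, and reports the measured throughput.
# The knob names and benchmark() are hypothetical placeholders.
import nni


def benchmark(write_buffer_size, compaction_threads):
    # Placeholder: configure the target system, run the workload,
    # and return a throughput figure such as operations per second.
    return 0.0


if __name__ == '__main__':
    knobs = nni.get_next_parameter()
    throughput = benchmark(knobs['write_buffer_size'], knobs['compaction_threads'])
    nni.report_final_result(throughput)
```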
20 changes: 9 additions & 11 deletions docs/en_US/CommunitySharings/community_sharings.rst
@@ -1,16 +1,14 @@
-######################
-Community Sharings
-######################
+#######################
+Use Cases and Solutions
+#######################

-In addtion to the official tutorilas and examples, we encourage community contributors to share their AutoML practices especially the NNI usage practices from their experience.
+Different from the tutorials and examples in the rest of the documentation, which show the usage of a single feature, this part mainly introduces end-to-end scenarios and use cases to help users further understand how NNI can help them. NNI can be widely adopted in various scenarios. We also encourage community contributors to share their AutoML practices, especially NNI usage practices from their experience.

.. toctree::
:maxdepth: 2

-NNI in Recommenders <RecommendersSvd>
-Automatically tuning SPTAG with NNI <SptagAutoTune>
-Neural Architecture Search Comparison <NasComparison>
-Hyper-parameter Tuning Algorithm Comparison <HpoComparison>
-Parallelizing Optimization for TPE <ParallelizingTpeSearch>
-Automatically tune systems with NNI <TuningSystems>
-NNI review article from Zhihu: - By Garvin Li <NNI_AutoFeatureEng>
+Automatic Model Tuning (HPO/NAS) <automodel>
+Automatic System Tuning (AutoSys) <autosys>
+Model Compression <model_compression>
+Feature Engineering <feature_engineering>
+Performance Measurement, Comparison and Analysis <perf_compare>
10 changes: 10 additions & 0 deletions docs/en_US/CommunitySharings/feature_engineering.rst
@@ -0,0 +1,10 @@
###################
Feature Engineering
###################

The following is an article, shared by a community contributor, about how NNI helps with automatic feature engineering. More use cases and solutions will be added in the future.

.. toctree::
:maxdepth: 1

NNI review article from Zhihu (by Garvin Li) <NNI_AutoFeatureEng>
10 changes: 10 additions & 0 deletions docs/en_US/CommunitySharings/model_compression.rst
@@ -0,0 +1,10 @@
#################
Model Compression
#################

The following example shows how to apply knowledge distillation with NNI model compression. More use cases and solutions will be added in the future.

.. toctree::
:maxdepth: 1

Knowledge distillation with NNI model compression <../TrialExample/KDExample>
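For readers new to the technique behind the linked example: knowledge distillation trains a compact student model against a larger teacher by blending a softened-target term with the ordinary hard-label loss. The sketch below shows that generic loss in PyTorch; it is background only and does not use NNI's compression API.

```python
# Generic knowledge-distillation loss (background sketch, not NNI-specific).
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.9):
    # Softened distributions from student and teacher at the given temperature.
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # T^2 rescaling keeps soft-target gradients comparable to the hard-label term.
    kd_term = F.kl_div(soft_student, soft_teacher, reduction='batchmean') * temperature ** 2
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```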
11 changes: 11 additions & 0 deletions docs/en_US/CommunitySharings/perf_compare.rst
@@ -0,0 +1,11 @@
################################################
Performance Measurement, Comparison and Analysis
################################################

Performance comparison and analysis can help users choose a proper algorithm (e.g., a tuner or a NAS algorithm) for their scenario. The following are some measurement and comparison data for users' reference.

.. toctree::
:maxdepth: 1

Neural Architecture Search Comparison <NasComparison>
Hyper-parameter Tuning Algorithm Comparison <HpoComparison>
2 changes: 1 addition & 1 deletion docs/en_US/contents.rst
@@ -16,7 +16,7 @@ Neural Network Intelligence
Model Compression <model_compression>
Feature Engineering <feature_engineering>
References <reference>
-Community Sharings <CommunitySharings/community_sharings>
+Use Cases and Solutions <CommunitySharings/community_sharings>
FAQ <Tutorial/FAQ>
How to Contribute <contribution>
Changelog <Release>
7 changes: 1 addition & 6 deletions docs/en_US/examples.rst
@@ -8,9 +8,4 @@ Examples
MNIST<./TrialExample/MnistExamples>
Cifar10<./TrialExample/Cifar10Examples>
Scikit-learn<./TrialExample/SklearnExamples>
-EvolutionSQuAD<./TrialExample/SquadEvolutionExamples>
-GBDT<./TrialExample/GbdtExample>
-RocksDB <./TrialExample/RocksdbExamples>
-OpEvo <./TrialExample/OpEvoExamples>
-KDExample <./TrialExample/KDExample>
-EfficientNet <./TrialExample/EfficientNet>
+GBDT<./TrialExample/GbdtExample>