
Commit 9f44d54
use doc in master branch instead of release branch (#2826)
suiguoxin authored Aug 26, 2020
1 parent beeea32 commit 9f44d54
Showing 2 changed files with 8 additions and 8 deletions.
14 changes: 7 additions & 7 deletions docs/en_US/CommunitySharings/ModelCompressionComparison.md
@@ -9,7 +9,7 @@ In addition, we provide friendly instructions on the re-implementation of these

The experiments are performed with the following pruners/datasets/models:

-* Models: [VGG16, ResNet18, ResNet50](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/models/cifar10)
+* Models: [VGG16, ResNet18, ResNet50](https://github.com/microsoft/nni/tree/master/examples/model_compress/models/cifar10)

* Datasets: CIFAR-10

@@ -23,7 +23,7 @@ The experiments are performed with the following pruners/datasets/models:

For the pruners with scheduling, `L1Filter Pruner` is used as the base algorithm. That is, once the sparsity distribution is decided by the scheduling algorithm, `L1Filter Pruner` performs the actual pruning.
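
For illustration, a minimal sketch of this setup, assuming the NNI v1.x API (the model, `evaluator`, and config values below are placeholders, not the experiment's exact settings):

``` python
# A sketch assuming the NNI v1.x API; model, evaluator, and config are placeholders.
from torchvision.models import vgg16
from nni.compression.torch import SimulatedAnnealingPruner

model = vgg16()

def evaluator(model):
    # Placeholder: a real implementation would return validation accuracy on CIFAR-10.
    return 0.0

config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
# The scheduler searches for per-layer sparsities; L1Filter performs the real pruning.
pruner = SimulatedAnnealingPruner(model, config_list, evaluator=evaluator, base_algo='l1')
model = pruner.compress()
```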

-- All the pruners listed above are implemented in [nni](https://github.com/microsoft/nni/tree/v1.8/docs/en_US/Compressor/Overview.md).
+- All the pruners listed above are implemented in [nni](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/Overview.md).

## Experiment Result

@@ -60,14 +60,14 @@ From the experiment result, we get the following conclusions:

* The experiment results are all collected with the default configuration of the pruners in nni, i.e., when we call a pruner class we do not override any of its default arguments.
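
A minimal sketch of such a call, assuming the NNI v1.x API (the model and sparsity value are placeholders):

``` python
# A minimal sketch assuming the NNI v1.x API; model and sparsity are placeholders.
from torchvision.models import resnet18
from nni.compression.torch import L1FilterPruner

model = resnet18()
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]
pruner = L1FilterPruner(model, config_list)  # no default arguments overridden
model = pruner.compress()                    # applies masks, returns the masked model
```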

-* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/v1.8/docs/en_US/Compressor/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/v1.8/docs/en_US/Compressor/ModelSpeedup.md).
+* Both FLOPs and the number of parameters are counted with [Model FLOPs/Parameters Counter](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/CompressionUtils.md#model-flopsparameters-counter) after [model speed up](https://github.com/microsoft/nni/tree/master/docs/en_US/Compressor/ModelSpeedup.md).
This avoids potential issues of counting them on masked models.
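
A sketch of this counting step, assuming the NNI v1.x API (`model` is a pruned model as above; the mask file path and input shape are assumptions):

``` python
# A sketch assuming the NNI v1.x API; 'mask.pth' and the input shape are assumptions.
import torch
from nni.compression.torch import ModelSpeedup
from nni.compression.torch.utils.counter import count_flops_params

dummy_input = torch.randn(1, 3, 32, 32)  # CIFAR-10-shaped input
ModelSpeedup(model, dummy_input, 'mask.pth').speedup_model()  # shrink masked layers first
flops, params = count_flops_params(model, (1, 3, 32, 32))     # count on the slim model
print(f'FLOPs: {flops}, #Params: {params}')
```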

-* The experiment code can be found [here](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/auto_pruners_torch.py).
+* The experiment code can be found [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/auto_pruners_torch.py).

### Experiment Result Rendering

-* If you follow the practice in the [example](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/auto_pruners_torch.py), for every single pruning experiment, the experiment result will be saved in JSON format as follows:
+* If you follow the practice in the [example](https://github.com/microsoft/nni/tree/master/examples/model_compress/auto_pruners_torch.py), for every single pruning experiment, the experiment result will be saved in JSON format as follows:
``` json
{
"performance": {"original": 0.9298, "pruned": 0.1, "speedup": 0.1, "finetuned": 0.7746},
@@ -76,8 +76,8 @@
}
```
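
Such files can then be aggregated for custom plotting; a hypothetical sketch (the directory layout and file names are assumptions):

``` python
# A hypothetical sketch; the 'results/*.json' layout is an assumption.
import glob
import json

finetuned = {}
for path in glob.glob('results/*.json'):
    with open(path) as f:
        record = json.load(f)
    finetuned[path] = record['performance']['finetuned']  # fine-tuned accuracy

for name, acc in sorted(finetuned.items(), key=lambda kv: kv[1], reverse=True):
    print(f'{name}: finetuned accuracy = {acc:.4f}')
```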

-* The experiment results are saved [here](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/comparison_of_pruners).
-You can refer to [analyze](https://github.com/microsoft/nni/tree/v1.8/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.
+* The experiment results are saved [here](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners).
+You can refer to [analyze](https://github.com/microsoft/nni/tree/master/examples/model_compress/comparison_of_pruners/analyze.py) to plot new performance comparison figures.

## Contribution

Expand Down
2 changes: 1 addition & 1 deletion docs/en_US/Compressor/Overview.md
@@ -42,7 +42,7 @@ Pruning algorithms compress the original network by removing redundant weights o
| [SimulatedAnnealing Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#simulatedannealing-pruner) | Automatic pruning with a guided heuristic search method, Simulated Annealing algorithm [Reference Paper](https://arxiv.org/abs/1907.03141) |
| [AutoCompress Pruner](https://nni.readthedocs.io/en/latest/Compressor/Pruner.html#autocompress-pruner) | Automatic pruning by iteratively calling SimulatedAnnealing Pruner and ADMM Pruner [Reference Paper](https://arxiv.org/abs/1907.03141) |

-You can refer to this [benchmark](https://github.com/microsoft/nni/tree/v1.8/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.
+You can refer to this [benchmark](https://github.com/microsoft/nni/tree/master/docs/en_US/CommunitySharings/ModelCompressionComparison.md) for the performance of these pruners on some benchmark problems.

### Quantization Algorithms

