Resolve comments in PR 1571 (microsoft#1590)
* Resolve comments in PR 1571

* try to pass ut

* fix typo

* format doc-string

* use tensorflow.compat.v1

* Revert "use tensorflow.compat.v1"

This reverts commit 97a4ed9.
liuzhe-lz authored Oct 14, 2019
1 parent ca2253c commit d6b61e2
Showing 13 changed files with 90 additions and 125 deletions.
docs/en_US/Compressor/Overview.md (6 changes: 3 additions & 3 deletions)

@@ -7,7 +7,7 @@ We have provided two naive compression algorithms and four popular ones for users
|Name|Brief Introduction of Algorithm|
|---|---|
| [Level Pruner](./Pruner.md#level-pruner) | Pruning the specified ratio on each weight based on absolute values of weights |
-| [AGP Pruner](./Pruner.md#agp-pruner) | To prune, or not to prune: exploring the efficacy of pruning for model compression. [Reference Paper](https://arxiv.org/abs/1710.01878)|
+| [AGP Pruner](./Pruner.md#agp-pruner) | Automated gradual pruning (To prune, or not to prune: exploring the efficacy of pruning for model compression) [Reference Paper](https://arxiv.org/abs/1710.01878)|
| [Sensitivity Pruner](./Pruner.md#sensitivity-pruner) | Learning both Weights and Connections for Efficient Neural Networks. [Reference Paper](https://arxiv.org/abs/1506.02626)|
| [Naive Quantizer](./Quantizer.md#naive-quantizer) | Quantize weights to default 8 bits |
| [QAT Quantizer](./Quantizer.md#qat-quantizer) | Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference. [Reference Paper](http://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf)|
@@ -72,7 +72,7 @@ It means following the algorithm's default setting for compressed operations with

### Other APIs

-Some compression algorithms use epochs to control the progress of compression, and some algorithms need to do something after every minibatch. Therefore, we provide another two APIs for users to invoke. One is `update_epoch`, you can use it as follows:
+Some compression algorithms use epochs to control the progress of compression (e.g. [AGP](./Pruner.md#agp-pruner)), and some algorithms need to do something after every minibatch. Therefore, we provide another two APIs for users to invoke. One is `update_epoch`, which you can use as follows:

Tensorflow code
```python
pruner.update_epoch(epoch, sess)
```

@@ -138,7 +138,7 @@ Some algorithms may want global information for generating masks, for example, a

The interface for customizing a quantization algorithm is similar to that of pruning algorithms. The only difference is that `calc_mask` is replaced with `quantize_weight`. `quantize_weight` directly returns the quantized weights rather than a mask, because for quantization the quantized weights cannot be obtained by applying a mask.

-```
+```python
# This is writing a Quantizer in tensorflow.
# For writing a Quantizer in PyTorch, you can simply replace
# nni.compression.tensorflow.Quantizer with
# nni.compression.torch.Quantizer
class YourQuantizer(nni.compression.tensorflow.Quantizer):
    def quantize_weight(self, weight, config, **kwargs):
        # sketch: return the quantized weights rather than a mask
        ...
```
docs/en_US/Compressor/Pruner.md (2 changes: 1 addition & 1 deletion)

@@ -38,7 +38,7 @@ In [To prune, or not to prune: exploring the efficacy of pruning for model compression]
>The binary weight masks are updated every ∆t steps as the network is trained to gradually increase the sparsity of the network while allowing the network training steps to recover from any pruning-induced loss in accuracy. In our experience, varying the pruning frequency ∆t between 100 and 1000 training steps had a negligible impact on the final model quality. Once the model achieves the target sparsity s_f, the weight masks are no longer updated. The intuition behind this sparsity function in equation
### Usage
-You can prune all weight from %0 to 80% sparsity in 10 epoch with the code below.
+You can prune all weights from 0% to 80% sparsity in 10 epochs with the code below.

First, you should import the pruner and add a mask to the model.
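A minimal sketch of that usage, assuming the TensorFlow `AGP_Pruner` is exported from `nni.compression.tensorflow` and using the config keys documented in `builtin_pruners.py` below (all values are hypothetical):

```python
from nni.compression.tensorflow import AGP_Pruner

# one configuration entry; keys follow the AGP_Pruner docstring
config_list = [{
    'initial_sparsity': 0.0,  # start from a dense network
    'final_sparsity': 0.8,    # end at 80% sparsity
    'start_epoch': 0,
    'end_epoch': 10,
    'frequency': 1            # update the mask every epoch
}]
pruner = AGP_Pruner(config_list)
pruner.compress_default_graph()  # insert masking ops into the default graph
```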

examples/model_compress/main_tf_pruner.py (4 changes: 3 additions & 1 deletion)

@@ -127,4 +127,6 @@ def main():
})
print('final result is', test_acc)

-main()
+
+if __name__ == '__main__':
+    main()
examples/model_compress/main_tf_quantizer.py (4 changes: 3 additions & 1 deletion)

@@ -114,4 +114,6 @@ def main():
})
print('final result is', test_acc)

-main()
+
+if __name__ == '__main__':
+    main()
examples/model_compress/main_torch_pruner.py (6 changes: 3 additions & 3 deletions)

@@ -89,7 +89,7 @@ def main():
test(model, device, test_loader)

pruner.update_epoch(epoch)

-
-
-main()
+
+if __name__ == '__main__':
+    main()
examples/model_compress/main_torch_quantizer.py (5 changes: 2 additions & 3 deletions)

@@ -81,7 +81,6 @@ def main():
train(model, device, train_loader, optimizer)
test(model, device, test_loader)

-
-
-main()
+if __name__ == '__main__':
+    main()
src/sdk/pynni/nni/compression/tensorflow/builtin_pruners.py (26 changes: 12 additions & 14 deletions)

@@ -10,8 +10,8 @@
class LevelPruner(Pruner):
def __init__(self, config_list):
"""
-Configure Args:
-sparsity
+config_list: supported keys:
+- sparsity
"""
super().__init__(config_list)

@@ -21,8 +21,7 @@ def calc_mask(self, weight, config, **kwargs):


class AGP_Pruner(Pruner):
"""
An automated gradual pruning algorithm that prunes the smallest magnitude
"""An automated gradual pruning algorithm that prunes the smallest magnitude
weights to achieve a preset level of network sparsity.
Michael Zhu and Suyog Gupta, "To prune, or not to prune: exploring the
@@ -32,12 +31,12 @@ class AGP_Pruner(Pruner):
"""
def __init__(self, config_list):
"""
-Configure Args
-initial_sparsity:
-final_sparsity: you should make sure initial_sparsity <= final_sparsity
-start_epoch: start epoch numer begin update mask
-end_epoch: end epoch number stop update mask
-frequency: if you want update every 2 epoch, you can set it 2
+config_list: supported keys:
+- initial_sparsity
+- final_sparsity: make sure initial_sparsity <= final_sparsity
+- start_epoch: first epoch number to begin updating the mask
+- end_epoch: last epoch number to stop updating the mask
+- frequency: to update the mask every 2 epochs, set it to 2
"""
super().__init__(config_list)
self.now_epoch = tf.Variable(0)
@@ -77,17 +76,16 @@ def update_epoch(self, epoch, sess):


class SensitivityPruner(Pruner):
"""
Use algorithm from "Learning both Weights and Connections for Efficient Neural Networks"
"""Use algorithm from "Learning both Weights and Connections for Efficient Neural Networks"
https://arxiv.org/pdf/1506.02626v3.pdf
I.e.: "The pruning threshold is chosen as a quality parameter multiplied
by the standard deviation of a layer's weights."
"""
def __init__(self, config_list):
"""
-Configure Args:
-sparsity: chosen pruning sparsity
+config_list: supported keys:
+- sparsity: chosen pruning sparsity
"""
super().__init__(config_list)
self.layer_mask = {}
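For illustration, a sketch of driving the `update_epoch(epoch, sess)` hook above from a TF 1.x training loop (`pruner` is an `AGP_Pruner` as sketched earlier; `train_one_epoch` is a hypothetical helper):

```python
import tensorflow as tf

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(10):
        train_one_epoch(sess)             # hypothetical training helper
        pruner.update_epoch(epoch, sess)  # let the pruner recompute its mask
```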
src/sdk/pynni/nni/compression/tensorflow/builtin_quantizers.py (17 changes: 7 additions & 10 deletions)

@@ -8,8 +8,7 @@


class NaiveQuantizer(Quantizer):
"""
quantize weight to 8 bits
"""quantize weight to 8 bits
"""
def __init__(self, config_list):
super().__init__(config_list)
@@ -24,15 +23,14 @@ def quantize_weight(self, weight, config, op_name, **kwargs):


class QAT_Quantizer(Quantizer):
"""
Quantizer using the DoReFa scheme, as defined in:
"""Quantizer using the DoReFa scheme, as defined in:
Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
http://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf
"""
def __init__(self, config_list):
"""
-Configure Args:
-q_bits
+config_list: supported keys:
+- q_bits
"""
super().__init__(config_list)

@@ -50,15 +48,14 @@ def quantize_weight(self, weight, config, **kwargs):


class DoReFaQuantizer(Quantizer):
"""
Quantizer using the DoReFa scheme, as defined in:
"""Quantizer using the DoReFa scheme, as defined in:
Zhou et al., DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
(https://arxiv.org/abs/1606.06160)
"""
def __init__(self, config_list):
"""
-Configure Args:
-q_bits
+config_list: supported keys:
+- q_bits
"""
super().__init__(config_list)

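The quantizers are configured the same way as the pruners; a sketch using the single documented key `q_bits` (the import path and value are assumptions):

```python
from nni.compression.tensorflow import QAT_Quantizer

quantizer = QAT_Quantizer([{'q_bits': 8}])  # quantize weights to 8 bits
quantizer.compress_default_graph()          # rewrite weights in the default graph
```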
src/sdk/pynni/nni/compression/tensorflow/compressor.py (52 changes: 22 additions & 30 deletions)

@@ -13,20 +13,21 @@ def __init__(self, op):


class Compressor:
"""
Abstract base TensorFlow compressor
"""
"""Abstract base TensorFlow compressor"""

def __init__(self, config_list):
self._bound_model = None
self._config_list = config_list

+def __call__(self, model):
+    """Compress given graph with algorithm implemented by subclass.
+    The graph will be edited and returned.
+    """
+    self.compress(model)
+    return model

def compress(self, model):
"""
Compress given graph with algorithm implemented by subclass.
"""Compress given graph with algorithm implemented by subclass.
This will edit the graph.
"""
assert self._bound_model is None, "Each NNI compressor instance can only compress one model"
@@ -39,30 +40,26 @@ def compress(self, model):
self._instrument_layer(layer, config)

def compress_default_graph(self):
"""
Compress the default graph with algorithm implemented by subclass.
This will edit the graph.
"""Compress the default graph with algorithm implemented by subclass.
This will edit the default graph.
"""
self.compress(tf.get_default_graph())
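Taken together, `__call__`, `compress`, and `compress_default_graph` are three entry points to the same operation; a usage sketch (`LevelPruner`, the sparsity value, and `my_graph` are stand-ins; note that each compressor instance may compress only one model):

```python
from nni.compression.tensorflow import LevelPruner

pruner = LevelPruner([{'sparsity': 0.5}])
graph = pruner(my_graph)           # __call__: compress the given graph and return it
# pruner.compress(my_graph)        # same effect, edits the graph in place
# pruner.compress_default_graph()  # operate on tf.get_default_graph() instead
```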


def bind_model(self, model):
"""
This method is called when a model is bound to the compressor.
Users can optionally overload this method to do model-specific initialization.
"""This method is called when a model is bound to the compressor.
Compressors can optionally overload this method to do model-specific initialization.
It is guaranteed that only one model will be bound to each compressor instance.
"""
pass

def update_epoch(self, epoch, sess):
"""
if user want to update mask every epoch, user can override this method
"""If user want to update mask every epoch, user can override this method
"""
pass

def step(self, sess):
"""
if user want to update mask every step, user can override this method
"""If user want to update mask every step, user can override this method
"""
pass

@@ -87,29 +84,25 @@ def _select_config(self, layer):


class Pruner(Compressor):
"""
Abstract base TensorFlow pruner
"""
"""Abstract base TensorFlow pruner"""

def __init__(self, config_list):
super().__init__(config_list)

def calc_mask(self, weight, config, op, op_type, op_name):
"""
Pruners should overload this method to provide mask for weight tensors.
"""Pruners should overload this method to provide mask for weight tensors.
The mask must have the same shape and type comparing to the weight.
It will be applied with `multiply()` operation.
This method works as a subgraph which will be inserted into the bound model.
"""
raise NotImplementedError("Pruners must overload calc_mask()")
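To make the contract concrete, a sketch of one possible overload using a magnitude threshold (not taken from this commit; `tf.contrib.distributions.percentile` assumes TF 1.x):

```python
import tensorflow as tf

class ThresholdPruner(Pruner):  # hypothetical subclass
    def calc_mask(self, weight, config, **kwargs):
        # keep the largest-magnitude weights; the mask matches the weight's
        # shape and dtype and is applied with multiply()
        percent = config['sparsity'] * 100  # e.g. sparsity 0.5 -> 50th percentile
        threshold = tf.contrib.distributions.percentile(tf.abs(weight), percent)
        return tf.cast(tf.abs(weight) > threshold, weight.dtype)
```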

def _instrument_layer(self, layer, config):
"""
it seems the graph editor can only swap edges of nodes or remove all edges from a node
it cannot remove one edge from a node, nor can it assign a new edge to a node
we assume there is a proxy operation between the weight and the Conv2D layer
this is true as long as the weight is `tf.Value`
not sure what will happen if the weight is calculated from other operations
"""
# it seems the graph editor can only swap edges of nodes or remove all edges from a node
# it cannot remove one edge from a node, nor can it assign a new edge to a node
# we assume there is a proxy operation between the weight and the Conv2D layer
# this is true as long as the weight is `tf.Value`
# not sure what will happen if the weight is calculated from other operations
weight_index = _detect_weight_index(layer)
if weight_index is None:
_logger.warning('Failed to detect weight for layer {}'.format(layer.name))
@@ -122,9 +115,8 @@ def _instrument_layer(self, layer, config):


class Quantizer(Compressor):
"""
Abstract base TensorFlow quantizer
"""
"""Abstract base TensorFlow quantizer"""

def __init__(self, config_list):
super().__init__(config_list)

src/sdk/pynni/nni/compression/torch/builtin_pruners.py (37 changes: 12 additions & 25 deletions)

@@ -12,19 +12,8 @@ class LevelPruner(Pruner):
"""
def __init__(self, config_list):
"""
-we suggest user to use json configure list, like [{},{}...], to set configure
-format :
-[
-{
-'sparsity': 0,
-'support_type': 'default'
-},
-{
-'sparsity': 50,
-'support_op': conv1
-}
-]
-if you want input multiple configure from file, you'd better use load_configure_file(path) to load
+config_list: supported keys:
+- sparsity
"""
super().__init__(config_list)

@@ -38,8 +27,7 @@ def calc_mask(self, weight, config, **kwargs):


class AGP_Pruner(Pruner):
"""
An automated gradual pruning algorithm that prunes the smallest magnitude
"""An automated gradual pruning algorithm that prunes the smallest magnitude
weights to achieve a preset level of network sparsity.
Michael Zhu and Suyog Gupta, "To prune, or not to prune: exploring the
@@ -49,12 +37,12 @@ class AGP_Pruner(Pruner):
"""
def __init__(self, config_list):
"""
-Configure Args
-initial_sparsity
-final_sparsity: you should make sure initial_sparsity <= final_sparsity
-start_epoch: start epoch numer begin update mask
-end_epoch: end epoch number stop update mask, you should make sure start_epoch <= end_epoch
-frequency: if you want update every 2 epoch, you can set it 2
+config_list: supported keys:
+- initial_sparsity
+- final_sparsity: make sure initial_sparsity <= final_sparsity
+- start_epoch: first epoch number to begin updating the mask
+- end_epoch: last epoch number to stop updating the mask; make sure start_epoch <= end_epoch
+- frequency: to update the mask every 2 epochs, set it to 2
"""
super().__init__(config_list)
self.mask_list = {}
@@ -99,17 +87,16 @@ def update_epoch(self, epoch):


class SensitivityPruner(Pruner):
"""
Use algorithm from "Learning both Weights and Connections for Efficient Neural Networks"
"""Use algorithm from "Learning both Weights and Connections for Efficient Neural Networks"
https://arxiv.org/pdf/1506.02626v3.pdf
I.e.: "The pruning threshold is chosen as a quality parameter multiplied
by the standard deviation of a layer's weights."
"""
def __init__(self, config_list):
"""
-configure Args:
-sparsity: chosen pruning sparsity
+config_list: supported keys:
+- sparsity: chosen pruning sparsity
"""
super().__init__(config_list)
self.mask_list = {}
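The torch variants mirror the TensorFlow ones but take no session; a sketch of the documented keys together with the `update_epoch(epoch)` hook (`model` and `train_one_epoch` are hypothetical):

```python
from nni.compression.torch import AGP_Pruner

pruner = AGP_Pruner([{
    'initial_sparsity': 0.0,
    'final_sparsity': 0.8,
    'start_epoch': 0,
    'end_epoch': 10,
    'frequency': 1
}])
model = pruner(model)           # apply masks to the torch.nn.Module
for epoch in range(10):
    train_one_epoch(model)      # hypothetical training helper
    pruner.update_epoch(epoch)  # recompute masks on the epoch boundary
```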
(3 more changed files not shown)
