
Sync model compression doc and implementation #1575

Merged
liuzhe-lz merged 4 commits into microsoft:dev-mc on Sep 29, 2019

Conversation

liuzhe-lz (Contributor) commented on Sep 27, 2019

TODO: It seems PyTorch does not use the term "op". Maybe we should use "layer" instead.

## LevelPruner

This is a basic pruner: you can set a target sparsity level (expressed as a fraction; 0.6 means we will prune 60% of the weights).

We first sort the weights in the specified layer by their absolute values, then mask to zero the smallest-magnitude weights until the desired sparsity level is reached.
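
The masking step described above can be sketched in a few lines of NumPy. This is an illustrative toy, using a hypothetical helper name `level_prune_mask`, not the NNI implementation:

```python
import numpy as np

def level_prune_mask(weight, sparsity):
    """Return a 0/1 mask that zeroes the smallest-magnitude weights.

    Illustrative only. `sparsity` is the fraction of weights to prune
    (0.6 -> prune 60%).
    """
    k = int(weight.size * sparsity)            # number of weights to prune
    if k == 0:
        return np.ones_like(weight)
    # k-th smallest absolute value across the flattened tensor
    threshold = np.sort(np.abs(weight), axis=None)[k - 1]
    # Weights at or below the threshold are masked; ties may prune slightly more than k.
    return (np.abs(weight) > threshold).astype(weight.dtype)

w = np.random.randn(4, 4)
mask = level_prune_mask(w, 0.6)   # roughly 60% of entries become zero
pruned_w = w * mask
```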

### Usage

Tensorflow code
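
A minimal usage sketch for the TensorFlow example that belongs here; the import path, constructor signature, and the way the pruner is applied are assumptions based on the NNI compression API around this refactor, not confirmed from this page:

```python
import tensorflow as tf

# Assumed import path after the compression SDK refactor (#1562); verify against
# the actual examples in the repository.
from nni.compression.tensorflow import LevelPruner

# Prune 60% of the weights in the layers matched by this configuration.
config_list = [{'sparsity': 0.6, 'op_types': ['default']}]

pruner = LevelPruner(config_list)        # assumed constructor signature
pruner(tf.get_default_graph())           # assumed way to apply the pruner to the graph
```
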
Review comment (Contributor): how about also adding the import line here?

liuzhe-lz merged commit 24d364e into microsoft:dev-mc on Sep 29, 2019
liuzhe-lz added a commit that referenced this pull request Oct 9, 2019
* [Proposal] demo compressor (#1402)

model compression

* update doc for model compression (#1509)

* Update Overview.md

* Change Doc (#1510)

* refactor compression sdk (#1562)

* refactor compression sdk

* bugfix

* bugfix

* update ut

* Sync model compression doc and implementation (#1575)

* update doc

* formatting

* bugfix

* add import to examples