
can't speed up the model #2232

Closed
fishhead2zju opened this issue Mar 25, 2020 · 4 comments · Fixed by #2241


@fishhead2zju

fishhead2zju commented Mar 25, 2020

Hi, it's my first time using the NNI tools. It's a really fancy project. However, I ran into some problems.
I followed the instructions and pruned the model successfully; the file size of the checkpoint decreased (104M -> 34M).
But when I tried to speed up the model, it failed. The error message:
```
[03/25/2020, 05:34:15 PM] INFO (nni.compression.speedup.torch.compressor) start to speed up the model
[03/25/2020, 05:34:15 PM] INFO (nni.compression.speedup.torch.compressor) infer module masks...
infer mask of module de_pred.0.conv with op_type Conv2d
Traceback (most recent call last):
  File "val_c.py", line 119, in <module>
    m_speedup.speedup_model()
  File "/root/anaconda3/envs/cc/lib/python3.6/site-packages/nni/compression/speedup/torch/compressor.py", line 548, in speedup_model
    self.infer_modules_masks()
  File "/root/anaconda3/envs/cc/lib/python3.6/site-packages/nni/compression/speedup/torch/compressor.py", line 511, in infer_modules_masks
    self.infer_module_mask(module_name, mask=mask)
  File "/root/anaconda3/envs/cc/lib/python3.6/site-packages/nni/compression/speedup/torch/compressor.py", line 467, in infer_module_mask
    m_type = self.name_to_gnode[module_name].op_type
KeyError: 'de_pred.1.conv'
```

My torch version is 1.3.1.

Also, I tried converting the torch model to TensorRT, but I found the pruned model is slower than the original model. Is that expected?

Here is my code:

```python
import torch
from PIL import Image
# NNI v1.x import paths, matching the module path in the traceback above
from nni.compression.torch import apply_compression_results
from nni.compression.speedup.torch import ModelSpeedup

# `model`, `transform`, and `img_paths` are defined earlier in the script
checkpoint = torch.load('ck.pth.tar')
masks_file = './mask.pth'
model.load_state_dict(checkpoint)
apply_compression_results(model, masks_file)

# dummy input used by ModelSpeedup for shape inference
img = transform(Image.open(img_paths[i]).convert('RGB')).cuda()
img = img.unsqueeze(0)

m_speedup = ModelSpeedup(model, img, masks_file)
m_speedup.speedup_model()
```

QuanluZhang self-assigned this Mar 25, 2020
@QuanluZhang
Contributor

@fishhead2zju thanks for reporting this issue.

First, you said the checkpoint decreased from 104M to 34M, but the pruners only find masks; the size of the checkpointed model should not change. Could you double-check this number?

Second, could you share your code with us, or share a code snippet that is executable and reproduces the above-mentioned error?
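
Regarding the first point, a quick way to see why masking alone should not change the checkpoint size: a mask-based pruner zeroes weight entries in place but keeps tensor shapes (and therefore the serialized size) intact. A toy sketch using a plain Conv2d, not NNI's actual pruner API:

```python
import torch
import torch.nn as nn

# stand-in for one conv layer of a pruned model (illustrative only)
conv = nn.Conv2d(16, 32, kernel_size=3)
print(conv.weight.shape)  # torch.Size([32, 16, 3, 3])

# a mask-based pruner zeroes entries in place; nothing is removed
mask = (torch.rand_like(conv.weight) > 0.5).float()
conv.weight.data.mul_(mask)

print(conv.weight.shape)                         # unchanged shape
print((conv.weight == 0).float().mean().item())  # ~0.5 sparsity
```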

@fishhead2zju
Author

fishhead2zju commented Mar 26, 2020


> @fishhead2zju thanks for reporting this issue.
>
> First, you said the checkpoint decreased from 104M to 34M, but the pruners only find masks; the size of the checkpointed model should not change. Could you double-check this number?
>
> Second, could you share your code with us, or share a code snippet that is executable and reproduces the above-mentioned error?

Thanks for your quick reply.
1. Sorry, I made a mistake. I checked the number and found that the checkpoint size does not change.
2. Here is the code: test.zip

In test.zip, you can run `python val_github.py` to reproduce the error.
The pruned.pth and mask.pth files are generated by the line below:

```python
pruner.export_model('pruned.pth', mask_path='mask.pth')
```
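
For reference, the surrounding NNI v1.x pruning flow that produces these two files typically looks like the sketch below; the pruner class, sparsity value, and fine-tuning step are illustrative placeholders rather than the exact script from test.zip:

```python
from nni.compression.torch import LevelPruner

# illustrative config: element-wise (fine-grained) pruning at 80% sparsity
config_list = [{'sparsity': 0.8, 'op_types': ['default']}]
pruner = LevelPruner(model, config_list)
model = pruner.compress()

# ... fine-tune the masked model here ...

# writes the masked weights plus the masks consumed by ModelSpeedup
pruner.export_model('pruned.pth', mask_path='mask.pth')
```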

@QuanluZhang
Contributor

@fishhead2zju the error is caused by a bug in ModelSpeedup; we will fix it very soon.

BTW, the current ModelSpeedup cannot speed up a model with fine-grained masks.
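
For context: fine-grained masks (e.g. from LevelPruner) zero out individual weight entries, so no complete filter can be dropped from a layer; ModelSpeedup needs structured, channel-level masks to actually shrink tensors. A minimal sketch of a structured setup that speedup can act on; the pruner choice and sparsity are illustrative, not from this issue:

```python
from nni.compression.torch import L1FilterPruner

# structured pruning: entire conv filters are masked together, so
# ModelSpeedup can later remove them and shrink the layers
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]  # illustrative sparsity
pruner = L1FilterPruner(model, config_list)  # `model` is your torch model
model = pruner.compress()
pruner.export_model('pruned.pth', mask_path='mask.pth')
```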

QuanluZhang linked a pull request Mar 26, 2020 that will close this issue
@fishhead2zju
Author

> @fishhead2zju the error is caused by a bug in ModelSpeedup; we will fix it very soon.
>
> BTW, the current ModelSpeedup cannot speed up a model with fine-grained masks.

Thank you very much!!!
