Hi, it's my first time using the NNI tools. It's a really nice project. However, I ran into some problems.
I followed the instructions and pruned the model successfully; the file size of the checkpoint decreased (104 MB -> 34 MB).
But when I tried to speed up the model, it failed. The error message:
[03/25/2020, 05:34:15 PM] INFO (nni.compression.speedup.torch.compressor) start to speed up the model
[03/25/2020, 05:34:15 PM] INFO (nni.compression.speedup.torch.compressor) infer module masks...
infer mask of module de_pred.0.conv with op_type Conv2d
Traceback (most recent call last):
File "val_c.py", line 119, in
m_speedup.speedup_model()
File "/root/anaconda3/envs/cc/lib/python3.6/site-packages/nni/compression/speedup/torch/compressor.py", line 548, in speedup_model
self.infer_modules_masks()
File "/root/anaconda3/envs/cc/lib/python3.6/site-packages/nni/compression/speedup/torch/compressor.py", line 511, in infer_modules_masks
self.infer_module_mask(module_name, mask=mask)
File "/root/anaconda3/envs/cc/lib/python3.6/site-packages/nni/compression/speedup/torch/compressor.py", line 467, in infer_module_mask
m_type = self.name_to_gnode[module_name].op_type
KeyError: 'de_pred.1.conv'
My torch version is 1.3.1.
Also, I tried to convert the torch model to TensorRT, but I found the pruned model is slower than the original model. Is that expected?
Here is my code:
checkpoint = torch.load('ck.pth.tar')
masks_file = './mask.pth'
model.load_state_dict(checkpoint)
apply_compression_results(model, masks_file)
img = Image.open(img_paths[i])
img = transform(Image.open(img_paths[i]).convert('RGB')).cuda()
img = img.unsqueeze(0)
m_speedup = ModelSpeedup(model, img, masks_file)
m_speedup.speedup_model()
First, you said the checkpoint decreases from 104 MB to 34 MB, but the pruners only find masks, so the size of the checkpointed model should not change. Could you double-check this number?
Second, could you share your code with us, or share a code snippet that is executable and reproduces the above error?
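As a rough illustration of the point above, here is a minimal sketch of how one could check that masking alone does not shrink the model, and that only ModelSpeedup rebuilds the layers. The import path is assumed from the traceback in this issue, and model, dummy_input, and 'mask.pth' stand in for your own objects; this is not code from the issue.
import torch
from nni.compression.speedup.torch import ModelSpeedup  # path assumed from the traceback

def param_bytes(m):
    # Total size of all parameters in bytes; masked weights are zeroed, not removed.
    return sum(p.numel() * p.element_size() for p in m.parameters())

print('before speedup:', param_bytes(model))   # same as the unpruned model

m_speedup = ModelSpeedup(model, dummy_input, 'mask.pth')
m_speedup.speedup_model()

print('after speedup:', param_bytes(model))    # smaller only after layers are replaced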
Thanks for your quick reply.
1. Sorry, I made a mistake. I checked the number and found that the checkpoint size does not change.
2. Here is the code: test.zip. In test.zip, you can run 'python val_github.py' and you will get the error.
The pruned.pth and mask.pth are generated by the code below:
pruner.export_model('pruned.pth', mask_path='mask.pth')
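For reference, here is a small, self-contained sketch of the pruning/export step that typically produces pruned.pth and mask.pth. The pruner choice, the config list, and the import path are assumptions based on the NNI 1.x API of that time; the actual script in test.zip may differ.
import torch
import torch.nn as nn
from nni.compression.torch import L1FilterPruner  # assumed NNI 1.x import path

# Toy model used only for illustration.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]  # example config, not from the issue
pruner = L1FilterPruner(model, config_list)
pruner.compress()
# (normally the masked model would be fine-tuned here)
pruner.export_model(model_path='pruned.pth', mask_path='mask.pth')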