This repository has been archived by the owner on Sep 18, 2024. It is now read-only.
Describe the bug:
I am attempting to run this pruned mirror detection model (https://github.com/memgonzales/mirror-segmentation), but when I call speedup_model(), I get the following error:
```
  File "C:\Users\pillai.k\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\pillai.k\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\pillai.k\PMDLite\mirror-segmentation-main\prune.py", line 178, in <module>
    main()
  File "C:\Users\pillai.k\PMDLite\mirror-segmentation-main\prune.py", line 127, in main
    ModelSpeedup(net, dummy, masks).speedup_model()
  File "C:\Users\pillai.k\MirrorEnv\lib\site-packages\nni\compression\speedup\model_speedup.py", line 429, in speedup_model
    self.fix_mask_conflict()
  File "C:\Users\pillai.k\MirrorEnv\lib\site-packages\nni\compression\speedup\model_speedup.py", line 243, in fix_mask_conflict
    fix_channel_mask_conflict(self.graph_module, self.masks)
  File "C:\Users\pillai.k\MirrorEnv\lib\site-packages\nni\compression\speedup\mask_conflict.py", line 229, in fix_channel_mask_conflict
    prune_axis = detect_mask_prune_dim(graph_module, masks)
  File "C:\Users\pillai.k\MirrorEnv\lib\site-packages\nni\compression\speedup\mask_conflict.py", line 400, in detect_mask_prune_dim
    sub_module = graph_module.get_submodule(layer_name)
  File "C:\Users\pillai.k\MirrorEnv\lib\site-packages\torch\nn\modules\module.py", line 686, in get_submodule
    raise AttributeError(mod._get_name() + " has no "
AttributeError: BFE_Module has no attribute `cbam`
```
This error occurs for the mask element `edge_extract.cbam.ChannelGate.mlp.1`. Given below is the code I added to `main()` in `prune.py`:
```python
def main():
    net = model.to(device)

    # Load model weights and biases. Change the device ordinal as needed.
    old_state_dict = torch.load(pruned_weights_path, map_location=device)
    new_state_dict = {}
    for key in old_state_dict.keys():
        new_key = key.replace('module.', '_nni_wrapper.')
        if '_mask' in key:
            tmp = new_key.split('.')
            tmp.insert(-1, '_nni_wrapper')
            new_key = '.'.join(tmp)
        new_state_dict[new_key] = old_state_dict[key]
    net.load_state_dict(new_state_dict)

    pruner.unwrap_model()

    fp = open('model_desc.txt', 'w')
    print(net, file=fp)
    fp.close()

    from nni.compression.speedup import ModelSpeedup
    ModelSpeedup(net, dummy, masks).speedup_model()

    # ...rest of main()
```
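For reference, this is what the key-renaming loop above produces on sample checkpoint keys. The key names here are made up for illustration, and treating `module.` as a `DataParallel` prefix is my assumption:

```python
# Standalone copy of the renaming logic from main() above.
def rename(key: str) -> str:
    # Swap the (assumed) DataParallel prefix for NNI's wrapper prefix.
    new_key = key.replace('module.', '_nni_wrapper.')
    # Mask tensors additionally get a '_nni_wrapper' segment before the leaf name.
    if '_mask' in key:
        tmp = new_key.split('.')
        tmp.insert(-1, '_nni_wrapper')
        new_key = '.'.join(tmp)
    return new_key

print(rename('module.conv1.weight'))
# -> _nni_wrapper.conv1.weight
print(rename('module.conv1.weight_mask'))
# -> _nni_wrapper.conv1._nni_wrapper.weight_mask
```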
How to reproduce:
Set the path to the pruned weights file and run prune.py.
krteyu changed the title from "Model speedup fails with torch._assert TypeError" to "Model speedup fails due to Attribute Error" on Jun 19, 2024.
This problem appears to be due to `cbam` being instantiated in `edge_extract` but never used in the forward function. Commenting out `cbam` appears to fix this issue.
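For illustration of why a registered-but-uncalled submodule trips up the speedup pass: such a module never appears in a torch.fx trace, which is the kind of graph the speedup machinery walks. A minimal toy module (hypothetical, not the PMD model) shows the mismatch:

```python
import torch.nn as nn
import torch.fx as fx

class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.used = nn.Conv2d(3, 8, 3, padding=1)
        # Like cbam: registered in __init__ but never called in forward().
        self.unused = nn.Conv2d(8, 8, 3, padding=1)

    def forward(self, x):
        return self.used(x)  # self.unused is never invoked

traced = fx.symbolic_trace(Toy())
# Collect the submodules that actually appear in the traced graph.
called = {n.target for n in traced.graph.nodes if n.op == "call_module"}
print(called)  # -> {'used'}; 'unused' is absent, so masks keyed on it dangle
```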
However, I am now getting an `assert len(set(num_channels_list)) == 1` AssertionError similar to #4160, which is also open. I am not able to find the model dependency causing this issue; any help would be appreciated.
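In case it helps narrow the search: channel-mask conflicts of this kind typically arise where the outputs of different conv branches are added or concatenated, since all branches feeding a merge must then keep the same channels. A rough sketch (assuming the model symbolically traces; `TwoBranch` is a made-up example, not the PMD model) that lists such merge points:

```python
import operator
import torch
import torch.nn as nn
import torch.fx as fx

def merge_points(model: nn.Module):
    """Return (merge_node_name, producer_node_names) for add/cat nodes."""
    traced = fx.symbolic_trace(model)
    merges = []
    for node in traced.graph.nodes:
        if node.op == "call_function" and node.target in (torch.add, operator.add, torch.cat):
            producers = [a.name for a in node.all_input_nodes]
            merges.append((node.name, producers))
    return merges

class TwoBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Conv2d(3, 8, 1)
        self.b = nn.Conv2d(3, 8, 1)

    def forward(self, x):
        # Masks of self.a and self.b must prune the same output channels.
        return self.a(x) + self.b(x)

print(merge_points(TwoBranch()))  # -> [('add', ['a', 'b'])]
```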
Reproduce the problem:
Model weights can be downloaded at https://drive.google.com/file/d/18zsqjK1aHVC4D8Ky530C--fwxdQylQ37/view?usp=sharing
Clone the above-mentioned GitHub repository and add the code shown above to the main function of prune.py.