
Cut the model to save time #13

Open
DeepKnowledge1 opened this issue Mar 12, 2021 · 5 comments

Comments

@DeepKnowledge1

ResNet has 4 residual stages (layer1–layer4). If you only want features from the first 3 stages (layer1–layer3), you can cut the model after layer3, which saves time.

I tried it in my case with only 2 stages, so the new model contains just layer1 and layer2.

Since you no longer compute the whole forward pass through all the layers, inference time decreases.

@DeepKnowledge1 DeepKnowledge1 changed the title Cut the model Cut the model to save time Mar 12, 2021
@xfby2016

Could you share your code? Thank you!

@DeepKnowledge1
Author

@xfby2016, I hope you can get the idea. You can also see this blog.

```python
from collections import OrderedDict

import torch.nn as nn
from torchvision.models import resnet18


def CutModel():
    pretrained_model = resnet18(pretrained=True, progress=True)
    NetFeatureSize = OrderedDict([('layer1', [64]), ('layer2', [128]),
                                  ('layer3', [256]), ('layer4', [512])])
    Feature_layers = 'layer1,layer2'
    layer_ids = Feature_layers.split(',')
    output_layer = max(layer_ids)  # deepest requested stage ('layer2' here)

    # Count the modules that come before the deepest requested stage.
    layers = list(pretrained_model._modules.keys())
    layer_count = 0
    for l in layers:
        if l != output_layer:
            layer_count += 1
        else:
            break

    # Drop every module after the deepest requested stage
    # (layer3, layer4, avgpool, fc in this example).
    for i in range(1, len(layers) - layer_count):
        pretrained_model._modules.pop(layers[-i])
    newmodel = nn.Sequential(pretrained_model._modules)

    return newmodel, NetFeatureSize
```

```python
model, NetFeatureSize = CutModel()
Feature_layers = 'layer1,layer2'  # same stages as in CutModel
layer_ids = Feature_layers.split(',')
layers = []
train_outputs = OrderedDict()
test_outputs = OrderedDict()

# Collect the model's intermediate outputs via forward hooks.
outputs = []
def hook(module, input, output):
    outputs.append(output)

for layer in layer_ids:
    layers.append(layer)
    train_outputs[layer] = []
    test_outputs[layer] = []
    if layer == 'layer1':
        model.layer1[-1].register_forward_hook(hook)
    elif layer == 'layer2':
        model.layer2[-1].register_forward_hook(hook)
    elif layer == 'layer3':
        model.layer3[-1].register_forward_hook(hook)
    elif layer == 'layer4':
        model.layer4[-1].register_forward_hook(hook)
```
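A self-contained toy version of the hook mechanism (using a hypothetical two-stage stand-in model instead of ResNet) shows what the hooks actually collect during a forward pass:

```python
import torch
import torch.nn as nn

# Stand-in for the truncated network: two named stages, one hook each.
model = nn.Sequential()
model.add_module('layer1', nn.Conv2d(3, 8, 3, padding=1))
model.add_module('layer2', nn.Conv2d(8, 16, 3, padding=1))

outputs = []
def hook(module, input, output):
    outputs.append(output)

model.layer1.register_forward_hook(hook)
model.layer2.register_forward_hook(hook)

with torch.no_grad():
    _ = model(torch.randn(2, 3, 8, 8))

# One entry per hooked stage, in forward order.
print([tuple(o.shape) for o in outputs])
```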

I will upload the whole thing when it is finished; there are many changes and improvements.
I tried to keep it simple, I hope this helps you.

@QQR1

QQR1 commented Apr 19, 2021

Thanks for sharing! Do you have any ideas for accelerating the Mahalanobis distance calculation?
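(Not from this thread's code, but one common speed-up is to invert the covariance once and evaluate all positions with a batched einsum, instead of calling a per-sample distance function in a Python loop. A NumPy sketch with made-up dimensions:)

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 1000                     # feature dim, number of positions
x = rng.normal(size=(n, d))        # features to score
mean = rng.normal(size=d)          # per-position mean (shared here for brevity)
cov = np.cov(rng.normal(size=(500, d)), rowvar=False) + 0.01 * np.eye(d)

cov_inv = np.linalg.inv(cov)       # invert once, reuse for every position
delta = x - mean                   # (n, d)
# Batched (x - mu)^T Sigma^{-1} (x - mu) for all n positions, no Python loop.
dist = np.sqrt(np.einsum('nd,de,ne->n', delta, cov_inv, delta))
print(dist.shape)                  # one distance per position
```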

@DeepKnowledge1
Author

@QQR1, please have a look at #8 (comment).

@Yangly0

Yangly0 commented Jul 16, 2021

A nice idea, thanks for your code!
