This repo contains a PyTorch implementation of LRP (Layer-wise Relevance Propagation). A nice implementation, albeit in Keras, can be found here: https://github.com/sebastian-lapuschkin/lrp_toolbox

This implementation uses custom autograd functions so that the backward pass propagates relevance instead of gradients. This makes it possible to propagate relevance through skip connections (as opposed to the sequential approach in another branch, where I use hooks).
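As an illustration, here is a minimal sketch of such a custom autograd function for a linear layer, assuming the epsilon-rule (the class name `LinearEpsilonLRP` and the stabilizer constant are illustrative choices, not code from this repo): the forward pass computes the layer as usual, while the backward pass receives relevance in place of `grad_output` and redistributes it to the input.

```python
import torch

class LinearEpsilonLRP(torch.autograd.Function):
    """Forward: ordinary linear layer. Backward: epsilon-rule LRP,
    so the tensor flowing backward is relevance, not a gradient."""

    @staticmethod
    def forward(ctx, input, weight, bias):
        output = input @ weight.t() + bias
        ctx.save_for_backward(input, weight, output)
        return output

    @staticmethod
    def backward(ctx, relevance_output):
        input, weight, output = ctx.saved_tensors
        eps = 1e-6
        z = output + eps * output.sign()   # stabilized pre-activations z_j
        s = relevance_output / z           # s_j = R_j / z_j
        c = s @ weight                     # c_i = sum_j w_ji * s_j
        relevance_input = input * c        # R_i = x_i * c_i
        # Parameters receive no relevance, hence None for weight and bias.
        return relevance_input, None, None
```

With this in place, calling `output.backward(relevance_at_output)` leaves the input relevance in `input.grad`, so relevance propagation rides on PyTorch's ordinary backward machinery rather than on a hand-rolled sequential pass.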

Personally, I was curious to implement LRP for ResNet. I found another PyTorch LRP implementation here: https://github.com/moboehle/Pytorch-LRP. But in principle it works the same way as my hook-based branch: sequential propagation through the network. The question is whether that is valid for skip connections; I guess not. The current implementation can be used for ResNet with a little hack: the identity summation operation must be substituted with a custom autograd function, as sketched below.
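A minimal sketch of that substitution, under my assumption that the intended rule splits relevance between the two branches in proportion to their contributions (the class name `SumLRP` is hypothetical):

```python
import torch

class SumLRP(torch.autograd.Function):
    """Replacement for `out = residual + identity` in a ResNet block:
    backward splits the incoming relevance proportionally between the
    branches instead of copying the gradient to both inputs."""

    @staticmethod
    def forward(ctx, a, b):
        ctx.save_for_backward(a, b)
        return a + b

    @staticmethod
    def backward(ctx, relevance):
        a, b = ctx.saved_tensors
        eps = 1e-6
        z = a + b
        z = z + eps * z.sign()      # stabilize near-zero sums
        ra = relevance * a / z      # branch a's share of the relevance
        rb = relevance * b / z      # branch b's share
        return ra, rb
```

Inside a residual block one would then write `out = SumLRP.apply(residual, identity)` instead of `out = residual + identity`, so relevance is divided between the two paths rather than duplicated.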

I overload each layer's forward pass, substituting the default autograd function with a custom one that accepts as arguments the default function, the input, the default function's arguments, and an LRP_rule function.
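A hedged sketch of what such a wrapper might look like (the names `LRPPass`, `wrap_module`, and `epsilon_rule`, as well as the exact argument order, are my own illustrative choices, not the repo's actual API):

```python
import torch

class LRPPass(torch.autograd.Function):
    """Runs the layer's default forward function unchanged, but routes
    the backward pass through a user-supplied LRP rule."""

    @staticmethod
    def forward(ctx, input, default_fn, default_args, lrp_rule):
        ctx.save_for_backward(input)
        ctx.default_fn, ctx.default_args, ctx.lrp_rule = default_fn, default_args, lrp_rule
        return default_fn(input, *default_args)

    @staticmethod
    def backward(ctx, relevance):
        (input,) = ctx.saved_tensors
        relevance_input = ctx.lrp_rule(ctx.default_fn, input, ctx.default_args, relevance)
        # Only the input receives relevance; the non-tensor arguments get None.
        return relevance_input, None, None, None

def epsilon_rule(default_fn, input, default_args, relevance):
    """Example LRP rule: epsilon-rule via the gradient trick."""
    eps = 1e-6
    with torch.enable_grad():                  # backward() runs with grad disabled
        x = input.detach().requires_grad_(True)
        z = default_fn(x, *default_args)
        z = z + eps * z.sign()
        (c,) = torch.autograd.grad(z, x, grad_outputs=relevance / z)
    return x.detach() * c

def wrap_module(module, lrp_rule=epsilon_rule):
    """Overload a module's forward pass so it goes through LRPPass."""
    default_fn = module.forward                # keep the original forward
    module.forward = lambda x: LRPPass.apply(x, default_fn, (), lrp_rule)
```

Applying `wrap_module` to every layer, together with the `SumLRP` substitution at the residual additions, would then let a single `backward` call with the output relevance propagate relevance through the whole ResNet.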