77 commits
55f0821
test_pattern.py tests don't fail anymore
leanderweber Feb 18, 2020
eb65cc2
test_base.py passes. Some xfails.
leanderweber Feb 25, 2020
d5e65b9
LRP-methods seem to work
leanderweber Mar 10, 2020
462eab9
fixed tensor equality in misc
leanderweber Mar 10, 2020
dde765c
added new base (core functionalities)
Jul 2, 2020
95a396c
just a save, not a working version
Jul 11, 2020
b0065a0
Some LRP-Rules working for tf2.0. Changed LRPSequentialPreset* (depre…
Jul 14, 2020
77f4f65
Updated Deprecation Warnings
Jul 14, 2020
c009022
small fixes
Jul 14, 2020
da90e87
cleaned up a bit, checked for efficiency
leanderweber Jul 20, 2020
64d5190
tested base; found and fixed some bugs; rewrote small_tests.py more c…
leanderweber Jul 30, 2020
57ad069
updated smalltests
leanderweber Aug 4, 2020
5eca9eb
test push, please ignore (:
rachtibat Aug 4, 2020
f2f554c
Added LeNet and weights for "plot_test.py"
rachtibat Aug 4, 2020
622f964
Finished and Debugged LRP-Rules. Added LRP-Gamma.
leanderweber Aug 4, 2020
3870534
cleaned up code a bit
leanderweber Aug 7, 2020
828e793
tf 2.3 hotfix
leanderweber Aug 21, 2020
5a7df1e
small fixes
leanderweber Aug 21, 2020
e8ba36c
relative imports
leanderweber Aug 26, 2020
d7ad079
added documentation and ideas for network canonization
leanderweber Aug 27, 2020
ed2c4b4
excluded small bug, fix later
leanderweber Aug 27, 2020
fdc2905
Change default analysis method to "max_activation". If you forget to …
rachtibat Sep 3, 2020
79ff128
addressed comments regarding python six and .gitignore
leanderweber Sep 8, 2020
a910be9
Merge branch 'updates_towards_tf2.0' of https://github.com/albermax/i…
leanderweber Sep 8, 2020
fe601c8
updated most Gradient-Based analyzers - careful: code is not tested y…
leanderweber Sep 8, 2020
5c67fd5
slight bugfix for GradientMethods
leanderweber Sep 10, 2020
9dcd748
corrected neuron_selection, included redundant apply method into wrap…
leanderweber Sep 10, 2020
cdfaac7
Add _head_mapping functionality. But is still a "passive feature". In…
rachtibat Sep 10, 2020
c6cbdee
fixed small bug of stop_mapping_at_layers
leanderweber Sep 11, 2020
70b05b6
Merge branch 'updates_towards_tf2.0' of https://github.com/albermax/i…
leanderweber Sep 11, 2020
966ee99
_head_mapping functionality activated: new argument in apply function…
rachtibat Sep 11, 2020
9368553
small fix in description
rachtibat Sep 11, 2020
514e9ad
small bug fix
rachtibat Sep 11, 2020
de964ef
BUG FIX: stop_mapping_at_layers
rachtibat Sep 14, 2020
1263680
new Feature: get intermediate explanations without doing analysis again.
rachtibat Sep 14, 2020
eece4b1
new Feature: get intermediate explanations without doing analysis again.
rachtibat Sep 14, 2020
15c35c1
small fixes
leanderweber Sep 15, 2020
d4cfd4c
Merge branch 'updates_towards_tf2.0' of https://github.com/albermax/i…
leanderweber Sep 15, 2020
e63bde8
some comments
leanderweber Sep 15, 2020
f1a162b
Merge branch 'updates_towards_tf2.0' of https://github.com/albermax/i…
leanderweber Sep 15, 2020
582daf9
integrated get_intermediate function.
rachtibat Sep 16, 2020
b5a0546
Updated SmoothGrad [untested]
leanderweber Sep 17, 2020
d2dc092
Merge branch 'updates_towards_tf2.0' of https://github.com/albermax/i…
leanderweber Sep 17, 2020
5f06fac
Did some first refactoring for readability/understandability. More fo…
leanderweber Sep 18, 2020
3364f3d
mini bug fix
rachtibat Sep 18, 2020
8ca3f55
Restructured reverse_model functionality into new class ReverseModel
leanderweber Sep 18, 2020
bbfafbc
Merge branch 'updates_towards_tf2.0' of https://github.com/albermax/i…
leanderweber Sep 18, 2020
91402b2
Fixed small bug with stop_mapping_at_layers=None
leanderweber Sep 18, 2020
d289a6f
check if model is resnet like for stop_mapping feature.
rachtibat Sep 20, 2020
0211187
Merge remote-tracking branch 'origin/updates_towards_tf2.0' into upda…
rachtibat Sep 20, 2020
01eec3c
check if model is resnet like for stop_mapping feature.
rachtibat Sep 20, 2020
bbe8b86
small bug fix
rachtibat Sep 20, 2020
8f0170b
small fixes. implemented Integrated Gradients [Untested]
leanderweber Sep 20, 2020
2f2fb6a
Merge branch 'updates_towards_tf2.0' of https://github.com/albermax/i…
leanderweber Sep 20, 2020
db5e909
Fixed bug with callback logic for layers with multiple inputs when st…
leanderweber Sep 20, 2020
c853fcf
gradient_based methods run without errors on the test cases. Qualitat…
leanderweber Sep 20, 2020
4f270f8
small bugfixes. Checked visual results of gradient based methods.
leanderweber Sep 21, 2020
d24fce0
Added initialization value parameter for layer input. Allowed r_init …
leanderweber Sep 21, 2020
8b0b68b
Added A Configuration to apply a specified LRP-rule to all layers unt…
leanderweber Sep 21, 2020
d76d0c8
Tested LRP-Rule-Until_Index Configuration. Small Bugfixes.
leanderweber Sep 21, 2020
7e2eb8c
Tested f_init parameter. Small Bugfixes.
leanderweber Sep 21, 2020
c25f09a
Small fix
leanderweber Sep 21, 2020
a43ae36
stop_mapping: innvestigate stops at layer -> more performance
rachtibat Sep 22, 2020
400aa83
new feature: no_forward_pass in analyze method.
rachtibat Sep 22, 2020
77ad231
small bug fix
rachtibat Sep 23, 2020
4f15e20
small bug fix
rachtibat Sep 23, 2020
478ed08
small bug fix
rachtibat Sep 23, 2020
9bf42f6
important bug fix. False gradient of r_init
rachtibat Sep 28, 2020
b759768
new feature: get_hook_activations. Function only for advanced users!
rachtibat Oct 5, 2020
63a1a5d
get_hook_activations expanded
rachtibat Oct 6, 2020
4f558f8
AvgPooling LRPFlat Rule bug solved
rachtibat Oct 8, 2020
eb566bb
Utilized @tf.function to improve performance on repeated calls. Refac…
leanderweber Oct 8, 2020
4d2b10d
Merged
leanderweber Oct 8, 2020
c75d5aa
Small comment change
leanderweber Oct 8, 2020
85e52d3
Re-named some variables. Fixed no_forward_pass.
leanderweber Oct 12, 2020
28011ab
fixed small mistake
leanderweber Oct 15, 2020
d143eb7
fixed small mistake
leanderweber Oct 15, 2020
3 changes: 2 additions & 1 deletion .gitignore
@@ -120,4 +120,5 @@ nosetests.cfg
*.png

# ignore nbconvert script output
examples/nbconvert_tmp
examples/nbconvert_tmp

Binary file added LeNetWeights.h5
Binary file not shown.
87 changes: 47 additions & 40 deletions innvestigate/analyzer/__init__.py
@@ -8,19 +8,18 @@
###############################################################################

from .base import NotAnalyzeableModelException
from .deeplift import DeepLIFTWrapper
from .gradient_based import BaselineGradient
from .gradient_based import Gradient
from .gradient_based import InputTimesGradient
from .gradient_based import GuidedBackprop
from .gradient_based import Deconvnet
from .gradient_based import IntegratedGradients
from .gradient_based import SmoothGrad
from .misc import Input
from .misc import Random
from .pattern_based import PatternNet
from .pattern_based import PatternAttribution
from .relevance_based.relevance_analyzer import BaselineLRPZ
from .base import ReverseAnalyzerBase
# from .deeplift import DeepLIFTWrapper
#from .gradient_based import Gradient
#from .gradient_based import InputTimesGradient
#from .gradient_based import GuidedBackprop
#from .gradient_based import Deconvnet
#from .gradient_based import IntegratedGradients
#from .gradient_based import SmoothGrad
# from .misc import Input
# from .misc import Random
# from .pattern_based import PatternNet
# from .pattern_based import PatternAttribution
from .relevance_based.relevance_analyzer import LRP
from .relevance_based.relevance_analyzer import LRPZ
from .relevance_based.relevance_analyzer import LRPZIgnoreBias
@@ -31,6 +30,7 @@
from .relevance_based.relevance_analyzer import LRPWSquare
from .relevance_based.relevance_analyzer import LRPFlat
from .relevance_based.relevance_analyzer import LRPAlphaBeta
from .relevance_based.relevance_analyzer import LRPGamma
from .relevance_based.relevance_analyzer import LRPAlpha2Beta1
from .relevance_based.relevance_analyzer import LRPAlpha2Beta1IgnoreBias
from .relevance_based.relevance_analyzer import LRPAlpha1Beta0
@@ -39,8 +39,13 @@
from .relevance_based.relevance_analyzer import LRPSequentialPresetB
from .relevance_based.relevance_analyzer import LRPSequentialPresetAFlat
from .relevance_based.relevance_analyzer import LRPSequentialPresetBFlat
from .deeptaylor import DeepTaylor
from .deeptaylor import BoundedDeepTaylor
from .relevance_based.relevance_analyzer import LRPSequentialCompositeA
from .relevance_based.relevance_analyzer import LRPSequentialCompositeB
from .relevance_based.relevance_analyzer import LRPSequentialCompositeAFlat
from .relevance_based.relevance_analyzer import LRPSequentialCompositeBFlat
from .relevance_based.relevance_analyzer import LRPRuleUntilIndex
# from .deeptaylor import DeepTaylor
# from .deeptaylor import BoundedDeepTaylor
from .wrapper import WrapperBase
from .wrapper import AugmentReduceBase
from .wrapper import GaussianSmoother
@@ -49,11 +54,11 @@

# Disable pyflaks warnings:
assert NotAnalyzeableModelException
assert BaselineLRPZ
assert WrapperBase
assert AugmentReduceBase
assert GaussianSmoother
assert PathIntegrator
#assert BaselineLRPZ
# assert WrapperBase
# assert AugmentReduceBase
# assert GaussianSmoother
# assert PathIntegrator


###############################################################################
@@ -63,17 +68,17 @@

analyzers = {
# Utility.
"input": Input,
"random": Random,

# Gradient based
"gradient": Gradient,
"gradient.baseline": BaselineGradient,
"input_t_gradient": InputTimesGradient,
"deconvnet": Deconvnet,
"guided_backprop": GuidedBackprop,
"integrated_gradients": IntegratedGradients,
"smoothgrad": SmoothGrad,
# "input": Input,
# "random": Random,
#
# # Gradient based
#"gradient": Gradient,
# "gradient.baseline": BaselineGradient,
#"input_t_gradient": InputTimesGradient,
#"deconvnet": Deconvnet,
#"guided_backprop": GuidedBackprop,
#"integrated_gradients": IntegratedGradients,
#"smoothgrad": SmoothGrad,

# Relevance based
"lrp": LRP,
@@ -100,16 +105,18 @@
"lrp.sequential_preset_a_flat": LRPSequentialPresetAFlat,
"lrp.sequential_preset_b_flat": LRPSequentialPresetBFlat,

# Deep Taylor
"deep_taylor": DeepTaylor,
"deep_taylor.bounded": BoundedDeepTaylor,

# DeepLIFT
"deep_lift.wrapper": DeepLIFTWrapper,
"lrp.rule_until_index": LRPRuleUntilIndex,

# Pattern based
"pattern.net": PatternNet,
"pattern.attribution": PatternAttribution,
# Deep Taylor
#"deep_taylor": DeepTaylor,
#"deep_taylor.bounded": BoundedDeepTaylor,

# # DeepLIFT
# "deep_lift.wrapper": DeepLIFTWrapper,
#
# # Pattern based
# "pattern.net": PatternNet,
# "pattern.attribution": PatternAttribution,
}
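The `analyzers` dictionary above is a name-to-class registry: string keys let users select an explanation method without importing analyzer classes directly. The following is a minimal, self-contained sketch of that pattern with placeholder classes; the class names and the `create_analyzer` helper here are simplified stand-ins for illustration, not the actual innvestigate API.

```python
# Simplified sketch of the registry pattern used by the analyzers dict:
# names map to classes, and a factory looks a name up and instantiates it.
# LRP/LRPZ below are placeholders, not the real innvestigate analyzers.

class LRP:
    """Placeholder for a relevance-based analyzer."""
    def __init__(self, model, **kwargs):
        self.model = model
        self.kwargs = kwargs

class LRPZ(LRP):
    """Placeholder for the LRP-Z variant."""

# Registry: string keys select a method by name.
analyzers = {
    "lrp": LRP,
    "lrp.z": LRPZ,
}

def create_analyzer(name, model, **kwargs):
    """Look up an analyzer class by name and instantiate it for `model`."""
    try:
        analyzer_class = analyzers[name]
    except KeyError:
        raise KeyError(
            f"No analyzer named '{name}'. Known names: {sorted(analyzers)}"
        )
    return analyzer_class(model, **kwargs)

analyzer = create_analyzer("lrp.z", model="dummy_model")
```

One consequence of this design, visible in the diff above, is that commenting out an import only requires also removing (or commenting out) the corresponding registry entry; unknown names then fail fast at lookup time rather than at import time.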

