Application of composite LRP #255
You can use the pre-implemented LRPPreset* analyzers.
Thank you for answering, @sebastian-lapuschkin! I just have one doubt about using the LRPPreset analyzers. If I only want to see the positive contributions of my input variables, I should use LRPPresetA, right? Also, is it OK to use it with SELU activations? I saw in relevance_analyzer.py that it is not advised...
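For intuition on the positive-contributions question: preset A applies an α₁β₀-style rule in the lower layers, which redistributes relevance along the positive contributions only. Here is a minimal numpy sketch of that rule for a single dense layer, with hand-picked toy values (not the iNNvestigate implementation, just an illustration):

```python
import numpy as np

def lrp_alpha1beta0_dense(x, W, b, R_out):
    """Redistribute output relevance R_out onto the inputs of one dense
    layer using the alpha=1, beta=0 rule: only the positive parts of the
    contributions z_ij = x_i * w_ij (and of the bias) are considered."""
    z = x[:, None] * W                               # contributions x_i * w_ij
    z_pos = np.maximum(z, 0.0)                       # keep positive parts only
    denom = z_pos.sum(axis=0) + np.maximum(b, 0.0) + 1e-12
    return (z_pos * (R_out / denom)).sum(axis=1)

# toy example: 2 inputs, 2 outputs, one negative weight
x = np.array([1.0, 2.0])
W = np.array([[1.0, -1.0],
              [0.5,  1.0]])
b = np.zeros(2)
R_out = np.array([1.0, 1.0])
R_in = lrp_alpha1beta0_dense(x, W, b, R_out)
# with non-negative inputs the redistributed relevance is non-negative,
# and the total relevance is conserved
```

Because the negative contribution `x[0] * W[0, 1]` is clipped, no negative relevance flows back, which is why preset A is the natural choice when you only want positive contributions.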
When I use LRPPresetA or LRPPresetB without a specific neuron selection it works fine, but when I specify a neuron (I have 2 in the output) the relevance scores are negative (e.g. -3.1056861e-06). Is there any reason for this to happen?
Negative relevance scores for both presets are not an indicator that things are not working fine, cf. this paper (alt link) for example, where in the examples in Fig. 1 and the appendix blue regions also have attributed negative relevance (read, in the heatmap w.r.t. class tiger cat: "from the model's point of view, Bernese mountain dog facial features are not tiger cat features, i.e. they provide evidence to the model for deciding against class tiger cat").

Especially if the output on the non-dominant logits is negative (which is your case, and is likely if the model has decided otherwise), negative relevance reveals that the model does not decide for your class of choice, represented by your selected output neuron, because "all that stuff does not look like the neuron's target class".

If this does not illuminate your situation sufficiently, please provide some more info regarding the decomposed model output (i.e. the output neuron activation) and the resulting heatmap in input space (or whatever feature space you are analyzing).

Best
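The conservation property behind this explanation can be checked numerically: when you decompose a negative logit, the input relevances must sum to (roughly) that negative value. A minimal numpy sketch of the LRP-ε rule on a single linear layer with hypothetical toy weights:

```python
import numpy as np

def lrp_epsilon_dense(x, W, b, R_out, eps=1e-6):
    """LRP-epsilon for one dense layer: relevance flows back in
    proportion to the contributions z_ij = x_i * w_ij."""
    z = x[:, None] * W
    denom = z.sum(axis=0) + b
    denom = denom + eps * np.sign(denom)     # epsilon stabilizer
    return (z * (R_out / denom)).sum(axis=1)

# toy model with 2 inputs and 2 output neurons
x = np.array([1.0, 2.0])
W = np.array([[2.0, -1.0],
              [1.0, -0.5]])
b = np.zeros(2)
logits = x @ W + b                           # neuron 1 has a negative logit

# decompose the *non-dominant* output neuron 1 only
R_out = np.array([0.0, logits[1]])
R_in = lrp_epsilon_dense(x, W, b, R_out)
# conservation: total input relevance equals the explained (negative) logit,
# so every input receives negative relevance here
```

This mirrors the situation in the question: selecting the neuron the model decided against starts the decomposition from a negative output, and the per-input scores inherit that sign.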
Closing this issue as the missing example is tracked in #261. |
Hello everyone.
I was trying to implement the composite LRP like the one presented in G. Montavon, A. Binder, S. Lapuschkin, W. Samek, K.-R. Müller: "Layer-wise Relevance Propagation: An Overview", in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Springer LNCS, vol. 11700, 2019, but without success...
Does anyone know how I can implement this?
Here is my model:
`
Thank you!
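A composite LRP backward pass of the kind described in that chapter, i.e. LRP-γ near the input, LRP-ε in the middle layers, and LRP-0 at the top, can be sketched layer by layer in plain numpy. This is a minimal illustration with hypothetical toy weights, not the iNNvestigate implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def lrp_dense(a, W, b, R_out, rule, eps=1e-6, gamma=0.25):
    """One LRP backward step through a dense layer.
    rule: 'zero' -> LRP-0, 'eps' -> LRP-epsilon, 'gamma' -> LRP-gamma."""
    if rule == "gamma":
        W = W + gamma * np.maximum(W, 0.0)   # LRP-gamma favors positive weights
        b = b + gamma * np.maximum(b, 0.0)
    z = a[:, None] * W                       # contributions z_ij = a_i * w_ij
    denom = z.sum(axis=0) + b
    if rule != "zero":
        denom = denom + eps * np.sign(denom) # epsilon stabilizer
    return (z * (R_out / denom)).sum(axis=1)

# toy 3-layer ReLU network (hypothetical weights, biases set to zero)
Ws = [np.array([[1.0, -1.0, 0.5], [0.5, 1.0, -1.0], [-0.5, 0.5, 1.0]]),
      np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [1.0, 0.0, -1.0]]),
      np.array([[1.0, -1.0], [1.0, 0.5], [-0.5, 1.0]])]
bs = [np.zeros(3), np.zeros(3), np.zeros(2)]

# forward pass, storing the activations each LRP step needs
a = np.array([1.0, 1.0, 2.0])
acts = [a]
for W, b in zip(Ws, bs):
    a = relu(a @ W + b)
    acts.append(a)

# composite rule assignment, one rule per layer from input to output
rules = ["gamma", "eps", "zero"]

# start from the selected output neuron, then walk back top-down
R = np.zeros(2)
R[0] = acts[-1][0]                           # explain output neuron 0
for W, b, a, rule in zip(Ws[::-1], bs[::-1], acts[-2::-1], rules[::-1]):
    R = lrp_dense(a, W, b, R, rule)
# R now holds the input relevances; their sum matches the explained output
```

The key design point is that each layer only needs its stored input activation and a rule name, so swapping the per-layer rules (e.g. all-ε versus the composite) is a one-line change to `rules`.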