Problem with op.grad in OpFromGraph - Guided Backpropagation #119

Open
HATEM-ELAZAB opened this issue Mar 4, 2022 · 0 comments
HATEM-ELAZAB commented Mar 4, 2022

I have a model for which I need to compute the gradients of the output w.r.t. the model's input, but I want to apply custom gradients for some of the layers in my model. So I tried the code explained in this link and added the following two classes:

  • The helper class that allows us to replace a nonlinearity with an Op that has the same output, but a custom gradient
import theano


class ModifiedBackprop(object):

    def __init__(self, nonlinearity):
        self.nonlinearity = nonlinearity
        self.ops = {}  # memoizes an OpFromGraph instance per tensor type

    def __call__(self, x):
        # OpFromGraph is opaque to Theano optimizations, so we need to move
        # things to GPU ourselves if needed.
        if theano.sandbox.cuda.cuda_enabled:
            maybe_to_gpu = theano.sandbox.cuda.as_cuda_ndarray_variable
        else:
            maybe_to_gpu = lambda x: x
        # We move the input to GPU if needed.
        x = maybe_to_gpu(x)
        # We note the tensor type of the input variable to the nonlinearity
        # (mainly dimensionality and dtype); we need to create a fitting Op.
        tensor_type = x.type
        # If we did not create a suitable Op yet, this is the time to do so.
        if tensor_type not in self.ops:
            # For the graph, we create an input variable of the correct type:
            inp = tensor_type()
            # We pass it through the nonlinearity (and move to GPU if needed).
            outp = maybe_to_gpu(self.nonlinearity(inp))
            # Then we fix the forward expression...
            op = theano.OpFromGraph([inp], [outp])
            # ...and replace the gradient with our own (defined in a subclass).
            op.grad = self.grad
            # Finally, we memoize the new Op
            self.ops[tensor_type] = op
        # And apply the memoized Op to the input we got.
        return self.ops[tensor_type](x)
  • The subclass that does guided backpropagation through a nonlinearity:
class GuidedBackprop(ModifiedBackprop):
    def grad(self, inputs, out_grads):
        (inp,) = inputs
        (grd,) = out_grads
        dtype = inp.dtype
        return (grd * (inp > 0).astype(dtype) * (grd > 0).astype(dtype),)
  • Then I used them in my code as follows:
import numpy as np
import theano
import theano.tensor as T
import lasagne as nn

model_in = T.tensor3()
# model_in = net['input'].input_var
nn.layers.set_all_param_values(net['l_out'], model['param_values'])

relu = nn.nonlinearities.rectify 
relu_layers = [layer for layer in nn.layers.get_all_layers(net['l_out'])
               if getattr(layer, 'nonlinearity', None) is relu]
modded_relu = GuidedBackprop(relu)

for layer in relu_layers:
    layer.nonlinearity = modded_relu   

prop = nn.layers.get_output(
    net['l_out'], model_in, deterministic=True)

for sample in range(ini, batch_len):                                
    model_out = prop[sample, 'z']   # get prop for label 'z'
    gradients = theano.gradient.jacobian(model_out, wrt=model_in) 
    # gradients = theano.grad(model_out, wrt=model_in) 
    get_gradients = theano.function(inputs=[model_in],
                                        outputs=gradients)
    grads = get_gradients(X_batch) # gradient dimension: X_batch == model_in(64, 20, 32) 
    grads = np.array(grads)
    grads = grads[sample]

Now when I run the code, it works without any error, and the shape of the output is also correct. But that is because it executes the default theano.grad behavior and not the gradient that is supposed to override it. In other words, the grad() function in the GuidedBackprop class is never invoked.
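To make the symptom easier to see in isolation, this is the kind of stripped-down check I would expect to distinguish the two behaviors. It is an untested sketch: it assumes the two classes above are in scope and a CPU-only Theano setup, and names like loss_modded and sample_values are just illustrative.

import numpy as np
import theano
import theano.tensor as T
import lasagne as nn

x = T.vector('x')
modded_relu = GuidedBackprop(nn.nonlinearities.rectify)

# Negate the sum so that the upstream gradient is negative everywhere;
# guided backpropagation should then zero out the whole gradient,
# while the plain ReLU gradient should not.
loss_modded = -modded_relu(x).sum()
loss_plain = -nn.nonlinearities.rectify(x).sum()

grad_modded = theano.function([x], theano.grad(loss_modded, x))
grad_plain = theano.function([x], theano.grad(loss_plain, x))

sample_values = np.array([-1.0, 2.0], dtype=theano.config.floatX)
print(grad_modded(sample_values))  # expected [0., 0.] if the override is active
print(grad_plain(sample_values))   # expected [0., -1.]

In my case both calls return the same values, which is what makes me think the override is silently ignored.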

  1. I can't understand what the issue is.
  2. Is there a solution?
  3. What is the difference between "op.grad" and "grad_overrides"? Do they have the same effect on the computation, or is there a difference?
  4. If this is an unresolved issue, what is the easiest way to override the gradient for only some of the layers in a model? (A rough sketch of what I have in mind follows below.)
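For question 3, the grad_overrides route I have in mind looks roughly like this. It is an untested sketch: it assumes OpFromGraph accepts a grad_overrides callable with an (inputs, output_grads) signature returning the input gradients, as described in the Theano/Aesara docs, and guided_relu_grad / guided_relu_op are just names I made up here.

import theano
import theano.tensor as T
import lasagne as nn

def guided_relu_grad(inputs, output_grads):
    (inp,) = inputs
    (grd,) = output_grads
    dtype = inp.dtype
    # Only let the gradient through where both the input and the
    # incoming gradient are positive (guided backpropagation).
    return [grd * (inp > 0).astype(dtype) * (grd > 0).astype(dtype)]

inp = T.tensor3()
outp = nn.nonlinearities.rectify(inp)
guided_relu_op = theano.OpFromGraph([inp], [outp],
                                    grad_overrides=guided_relu_grad)

If this is the supported route, the resulting Op could then be assigned to layer.nonlinearity for the selected ReLU layers instead of the GuidedBackprop instance, presumably still building one such Op per input tensor type, as ModifiedBackprop memoizes above.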
HATEM-ELAZAB changed the title from "Guided Backpropagation: op.grad doesn't work" to "Problem with op.grad in OpFromGraph - Guided Backpropagation" on Mar 4, 2022