
saliency maps for all layers #65

Open · ckchenkuan opened this issue May 17, 2016 · 3 comments
Hi, I am currently modifying the saliency map code from

https://github.com/Lasagne/Recipes/blob/master/examples/Saliency%20Maps%20and%20Guided%20Backpropagation.ipynb

so that I can plot the saliencies for all layers, sampling every 8th filter, in the VGG net. Here is what I did, but the function compilation stage is prohibitively slow. It seems to me that the gradient loop does not properly exploit the stacked structure of the VGG net and has to traverse the graph every single time.

I am just wondering, is there a better way to do it? Thanks!

import time

import lasagne
import theano


def compile_saliency_function1(net, layernamelist, layershapelist, scalefactor):
    inp = net['input'].input_var
    # one get_output() call for all requested layers
    outp = lasagne.layers.get_output([net[layername] for layername in layernamelist],
                                     deterministic=True)
    saliencyfnlist = []
    for layeri in range(len(layernamelist)):
        # sample every `scalefactor`'th filter of this layer
        filtercount = int(layershapelist[layeri] / scalefactor)
        filterindices = [ii * scalefactor for ii in range(filtercount)]
        layeroutp = outp[layeri]
        saliencylayerlist = []
        for filterindex in filterindices:
            # gradient of one filter's summed activation wrt. the input
            max_outpi = layeroutp[0, filterindex]
            saliencylayerlist.append(theano.grad(max_outpi.sum(), wrt=inp))
        print(len(saliencylayerlist))
        # compile a separate function per layer (this is the slow part)
        layerfn = theano.function([inp], saliencylayerlist)
        saliencyfnlist.append(layerfn)
    return saliencyfnlist

starttime = time.time()
saliencyfns = compile_saliency_function1(net, ['conv5_1', 'conv5_2', 'conv5_3'],
                                         [512, 512, 512], 8)
print('fn time', time.time() - starttime)
f0k (Member) commented May 17, 2016

I am just wondering, is there a better way to do it?

You're already obtaining all the output expressions at once (in a single get_output() call), which is a good start. But you compile a separate function for every layer. Try to compile a single function for all saliency maps.
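For illustration, a minimal untested sketch of that change, keeping the per-filter gradients from your code but compiling a single function (it assumes `inp`, `outp`, `layernamelist` and the per-layer `filterindices` from your function):

# Untested sketch: collect the gradient expressions for every layer and
# every selected filter first, then compile one function for all of them.
all_saliencies = []
for layeri in range(len(layernamelist)):
    for filterindex in filterindices:  # recomputed per layer, as in your code
        all_saliencies.append(
            theano.grad(outp[layeri][0, filterindex].sum(), wrt=inp))
fn = theano.function([inp], all_saliencies)  # compiled once for everything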

It seems to me that the gradient loop does not properly exploit the stacked structure of the VGG net and has to traverse the graph every single time.

It should be able to share the computation for the forward pass, but the way you defined the expression to take the gradient of, it has to do a separate backward pass for each filter in every layer. You can try to formulate your expression such that it performs a single batched backward pass. You'll need to replicate your input image into a batch of multiple images and take the gradient with respect to that replicated batch, so you'll get a batch of different saliency maps. Form a single cost that sums the activations you want to maximize: the first filter of the first layer for the first input example, the second filter of the first layer for the second input example, and so on; then take the gradient of that wrt. the input batch (see the sketch below). This way your compiled function can do everything in a single backward pass, and should be faster to compile and execute. Hope this makes sense?
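A rough, untested sketch of this idea for a single layer (the names `layer_name` and `filter_indices` are placeholders, not from the original code):

import lasagne
import theano
import theano.tensor as T

# Untested sketch for one layer; `net`, `layer_name` and `filter_indices`
# are placeholder names standing in for the variables discussed above.
inp = net['input'].input_var
inp_rep = T.repeat(inp, len(filter_indices), axis=0)  # one input copy per filter
outp = lasagne.layers.get_output(net[layer_name], inp_rep, deterministic=True)
# example i of the batch maximizes filter filter_indices[i]
cost = sum(outp[i, f].sum() for i, f in enumerate(filter_indices))
sal = theano.grad(cost, inp_rep)  # one saliency map per batch example
fn = theano.function([inp], sal)  # compiled once, single backward pass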

ckchenkuan (Author) commented

Hi, sorry for the late reply. I "sort of" implemented what you said, still compiling one function for each layer, but the speedup is not noticeable. I am not sure if I am doing what you meant correctly.


def compile_saliency_function2(net, layernamelist, layershapelist, scalefactor):
    inp = net['input'].input_var
    outp = lasagne.layers.get_output([net[layername] for layername in layernamelist],
                                     deterministic=True)
    saliencyfnlist = []
    for layeri in range(len(layernamelist)):
        filtercount = int(layershapelist[layeri] / scalefactor)
        filterindices = [ii * scalefactor for ii in range(filtercount)]
        max_outplist = []
        inplist = []
        netdict = {}
        for filterindex in filterindices:
            # NB: this stores the same net (and thus the same input variable)
            # under every key, and calls get_output() again for every filter
            netdict[filterindex] = net
            inp = netdict[filterindex]['input'].input_var
            outp = lasagne.layers.get_output([netdict[filterindex][layername]
                                              for layername in layernamelist],
                                             deterministic=True)
            max_outpi = outp[layeri][0, filterindex].sum()
            max_outplist.append(max_outpi)
            inplist.append(inp)
        # sum all per-filter costs into a single cost
        max_outpall = max_outplist[0]
        for ii in range(1, len(max_outplist)):
            max_outpall += max_outplist[ii]
        st = time.time()
        saliencylayer = theano.grad(max_outpall, wrt=inplist)
        print(time.time() - st)
        starttime = time.time()
        # still one compiled function per layer
        layerfn = theano.function([inp], saliencylayer)
        print('compile time is ', time.time() - starttime)
        saliencyfnlist.append(layerfn)
    return saliencyfnlist

starttime = time.time()
saliencyfns = compile_saliency_function2(net, ['conv5_1', 'conv5_2', 'conv5_3'],
                                         [512, 512, 512], 8)
print('fn time', time.time() - starttime)

f0k (Member) commented Jun 6, 2016

You have a second lasagne.layers.get_output call in your for loop now! Never do this!

What I said was to do a single theano.grad() call with respect to a single input batch. So make your input_var a tensor4 (it probably is already), replicate it so you have enough copies of the input image, pass it through the network once and then form your cost to maximize some particular unit for the first item in your minibatch, another unit for the second item in your minibatch, and so on. Something like:

inp = net['input'].input_var
inp_rep = T.repeat(inp, len(layers), axis=0)
outp = lasagne.layers.get_output(layers, inp_rep)
cost = 0
for idx in range(len(layers)):
    cost += outp[idx][idx].sum()  # maximize output of idx'th layer for idx'th example
sal = theano.grad(cost, inp_rep)  # gradient wrt. the replicated batch
fn = theano.function([inp], sal)

Now you can pass a single image (as a 4d tensor) and it will give you a batch of all saliency maps. The key is to make sure Theano can do a single batched forward pass and then a single batched backward pass (although I'm actually not sure how well this will work for the backward pass if the costs are computed at different layers -- maybe it only gets compiled into a single backward pass if you can express the cost as a vectorized operation, like maximizing all the different units or feature maps in a single layer).
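A hypothetical usage example of the sketch above (the stand-in image and the input shape are assumptions, not part of the original):

import numpy as np

# Hypothetical usage; `fn` and `layers` come from the sketch above, and the
# input shape (1, 3, 224, 224) is an assumption for a VGG-style network.
img = np.random.randn(1, 3, 224, 224).astype(np.float32)  # stand-in image
sal_batch = fn(img)  # single forward pass + single batched backward pass
print(sal_batch.shape)  # (len(layers), 3, 224, 224): one map per layer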
