ConvTranspose CROWN Bounds #61
Comments
CROWN with a single input (e.g.
Hi @cherrywoods, could you please share the code for the model definition?
Sure, sorry for not including it right away:

from torch import nn

generator = nn.Sequential(
    nn.ConvTranspose2d(4, 49, kernel_size=4, stride=1, bias=False),   # 49 x 4 x 4
    nn.BatchNorm2d(49, affine=True),
    nn.LeakyReLU(negative_slope=0.2),
    nn.ConvTranspose2d(49, 12, kernel_size=4, stride=4, bias=False),  # 12 x 16 x 16
    nn.BatchNorm2d(12, affine=True),
    nn.LeakyReLU(negative_slope=0.2),
    nn.ConvTranspose2d(12, 1, kernel_size=13, stride=1, bias=False),  # 1 x 28 x 28
    nn.Sigmoid(),
)
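(A quick way to double-check the output shapes noted in the comments above is to push a dummy latent through the generator layer by layer; a minimal sketch, assuming the generator defined above:)

import torch

z = torch.zeros(1, 4, 1, 1)  # dummy latent code
for layer in generator:
    z = layer(z)
    print(type(layer).__name__, tuple(z.shape))
# The final line should report a (1, 1, 28, 28) output.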
Hi @cherrywoods, the issue is that you need to update
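(The exact detail referenced above is not shown here; judging from the follow-up reply, it concerns the batch dimension of the inputs. A minimal sketch of a setup with consistent batch sizes, reusing the BoundedModule/BoundedTensor calls that appear elsewhere in this thread; the batch size of 10 is only illustrative:)

import torch
from auto_LiRPA import BoundedModule, BoundedTensor, PerturbationLpNorm

# Dummy input used only to build the computation graph.
bounded_net = BoundedModule(generator, torch.zeros(1, 4, 1, 1))

# The perturbation bounds and the bounded tensor share the same batch size.
x_L = torch.zeros(10, 4, 1, 1)
x_U = torch.ones(10, 4, 1, 1)
ptb = PerturbationLpNorm(x_L=x_L, x_U=x_U)
x = BoundedTensor(x_L, ptb)

lb, ub = bounded_net.compute_bounds(x=(x,), method="crown")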
Hi @shizhouxing, the incorrect batch dimension was indeed a problem in the code I posted; however, a very similar error persists even with the batch dimensions fixed:

import torch
from auto_LiRPA import PerturbationLpNorm, BoundedModule, BoundedTensor

net = torch.load("mnist_conv_generator.pyt")
net = BoundedModule(net, torch.zeros(1, 4, 1, 1))
ptb = PerturbationLpNorm(x_L=torch.zeros(10, 4, 1, 1), x_U=torch.ones(10, 4, 1, 1))
tensor = BoundedTensor(torch.zeros(10, 4, 1, 1), ptb)
net.compute_bounds(x=(tensor,), method="ibp")    # works fine, output omitted
net.compute_bounds(x=(tensor,), method="crown")  # fails
Hi @cherrywoods, you'll need to modify both
Hi @shizhouxing, this was only a typo. I updated the code above. The error remains the same.
Hi @cherrywoods, I tried your code and it worked fine on my side. I see your output contains
That indeed seemed to be the issue. I somehow messed up pulling the latest release from GitHub. Thanks for your patience and sorry for the inconvenience. I'm happy that I can now use ConvTranspose layers :)
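(A quick sanity check for this kind of stale-install problem is to confirm which auto_LiRPA build Python is actually importing; the reproduction script further down prints the version the same way. A minimal sketch:)

import auto_LiRPA
print(auto_LiRPA.__version__)  # installed version string
print(auto_LiRPA.__file__)     # path of the package actually being imported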
I'm reopening this because I keep getting errors in the actual code I'm using, which obviously uses bounds other than 0.0 and 1.0. I debugged through this for the past hour and couldn't find anything like the errors that we discussed above. To be on the safe side this time, I made a Docker container that reproduces the issue: conv_transpose_issue.zip. The container creates a conda environment, downloads and installs the latest auto_LiRPA commit, and then runs the following script:

import torch
from torch import nn
import auto_LiRPA
from auto_LiRPA import PerturbationLpNorm, BoundedModule, BoundedTensor

print(auto_LiRPA.__version__)
torch.manual_seed(0)

net = nn.Sequential(
    nn.ConvTranspose2d(4, 49, kernel_size=4, stride=1, bias=False),   # 49 x 4 x 4
    nn.BatchNorm2d(49, affine=True),
    nn.LeakyReLU(negative_slope=0.2),
    nn.ConvTranspose2d(49, 12, kernel_size=4, stride=4, bias=False),  # 12 x 16 x 16
    nn.BatchNorm2d(12, affine=True),
    nn.LeakyReLU(negative_slope=0.2),
    nn.ConvTranspose2d(12, 1, kernel_size=13, stride=1, bias=False),  # 1 x 28 x 28
    nn.Sigmoid(),
)
net = BoundedModule(net, torch.empty(1, 4, 1, 1))

lb = torch.zeros(1, 4, 1, 1)
ub = torch.ones(1, 4, 1, 1)
ptb = PerturbationLpNorm(x_L=lb, x_U=ub)
tensor = BoundedTensor(lb, ptb)
print(lb.shape, ub.shape, tensor.shape)
print(lb, ub, tensor)
bounds = net.compute_bounds(x=(tensor,), method="crown")  # works fine
print(bounds)

lb = lb.clone() - 1.0
ptb = PerturbationLpNorm(x_L=lb, x_U=ub)
tensor = BoundedTensor(lb, ptb)
print(lb.shape, ub.shape, tensor.shape)
print(lb, ub, tensor)
bounds = net.compute_bounds(x=(tensor,), method="crown")  # fails
print(bounds)

When I run this using:

docker build . -t auto_lirpa
docker run -t auto_lirpa

I get this output:
I know this behaviour is extremely strange, but since I am only subtracting 1.0 from the lower bound of the case for which CROWN works, I don't think it's a shape issue again.
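(One way to support that reasoning, as a sketch using the same API calls as the script above: run IBP on the shifted region as well. If IBP still succeeds with the new lower bound, the shapes are evidently accepted, which would suggest the failure is specific to the CROWN pass rather than to the input shapes.)

bounds_ibp = net.compute_bounds(x=(tensor,), method="ibp")  # shifted region, IBP
print(bounds_ibp)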
I also confirmed that the error persists when I use a batch size of
Thanks for reporting the issue and sorry for the delayed response. We have fixed it internally and will push the fix in an upcoming release soon.
Describe the bug
I was delighted to see that auto_LiRPA can bound ConvTranspose layers out of the box, but, unfortunately, CROWN in batch mode doesn't seem to work.

To Reproduce
Code to reproduce with the attached network (zipped): mnist_conv_generator.zip
System configuration: