nan CROWN bounds when clamping #63

Open
cherrywoods opened this issue Dec 13, 2023 · 0 comments
Describe the bug
I am trying to bound a clamp operation. Using torch.clamp directly produces an error stating that Cast is an unsupported operation, so I replaced clamp with torch.minimum(torch.maximum(x, mins), maxs). This no longer reports any unsupported operations, but the CROWN bounds are nan everywhere, while IBP still produces the correct bounds (see below).
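
For reference, the clamp-based forward that triggers the Cast error presumably looked roughly like the sketch below (a hedged reconstruction; the original clamp model is not included in this report, and the module name and buffer names here are made up):

import torch
from torch import nn

class TestClamp(nn.Module):
    # hypothetical module using torch.clamp directly; wrapping it in BoundedModule
    # reportedly fails with "Cast is an unsupported operation"
    def __init__(self):
        super().__init__()
        self.register_buffer("lo", 0.5 * torch.ones(1, 4))   # lower clamp value
        self.register_buffer("hi", 0.75 * torch.ones(1, 4))  # upper clamp value

    def forward(self, z):
        return torch.clamp(z, min=self.lo, max=self.hi)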

To Reproduce

import torch
from torch import nn
from auto_LiRPA import BoundedModule, BoundedTensor, PerturbationLpNorm

class Test(nn.Module):
    def __init__(self):
        super().__init__()
        # constant clamp bounds, registered as buffers of the module
        x = nn.Parameter(0.5 * torch.ones(1, 4))
        y = nn.Parameter(0.75 * torch.ones(1, 4))
        self.register_buffer("x", x)
        self.register_buffer("y", y)

    def forward(self, z):
        # clamp(z, 0.5, 0.75) expressed as minimum(maximum(...))
        return torch.minimum(torch.maximum(z, self.x), self.y)

module = BoundedModule(Test(), torch.empty(1, 4))
ptb = PerturbationLpNorm(x_L=torch.zeros(1, 4), x_U=torch.ones(1, 4))
t = BoundedTensor(torch.zeros(1, 4), ptb)
bounds = module.compute_bounds(x=(t,), method="ibp")  # produces the correct bounds
print(bounds)
# (tensor([[0.5000, 0.5000, 0.5000, 0.5000]], grad_fn=<MinimumBackward0>), tensor([[0.7500, 0.7500, 0.7500, 0.7500]], grad_fn=<MinimumBackward0>))
bounds = module.compute_bounds(x=(t,), method="CROWN")  # produces nan
print(bounds)
# (tensor([[nan, nan, nan, nan]], grad_fn=<ViewBackward0>), tensor([[nan, nan, nan, nan]], grad_fn=<ViewBackward0>))
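
Not part of the original report, but possibly useful to others hitting the same nan: the same clamp can be expressed with only ReLU operations via the identity clamp(z, lo, hi) = hi - relu(hi - lo - relu(z - lo)), which avoids Minimum/Maximum nodes entirely. A minimal sketch, assuming the same constant bounds as above (whether this sidesteps the bug has not been confirmed):

import torch
from torch import nn
from auto_LiRPA import BoundedModule, BoundedTensor, PerturbationLpNorm

class TestReLU(nn.Module):
    # hypothetical ReLU-only reformulation of clamp(z, 0.5, 0.75)
    def __init__(self):
        super().__init__()
        self.register_buffer("lo", 0.5 * torch.ones(1, 4))
        self.register_buffer("hi", 0.75 * torch.ones(1, 4))

    def forward(self, z):
        # max(z, lo) = lo + relu(z - lo); min(a, hi) = hi - relu(hi - a)
        return self.hi - torch.relu(self.hi - self.lo - torch.relu(z - self.lo))

module = BoundedModule(TestReLU(), torch.empty(1, 4))
ptb = PerturbationLpNorm(x_L=torch.zeros(1, 4), x_U=torch.ones(1, 4))
t = BoundedTensor(torch.zeros(1, 4), ptb)
print(module.compute_bounds(x=(t,), method="CROWN"))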

System configuration:

  • OS: Ubuntu 22.04.3 LTS
  • Python version: 3.10
  • PyTorch version: 1.12.1
  • Hardware: CPU only (also verified on CUDA: GeForce GT 1030)
  • Have you tried to reproduce the problem in a cleanly created conda/virtualenv environment using official installation instructions and the latest code on the main branch?: Yes