question about the MACs of nn.BatchNorm2d #196

Open
DaMiBear opened this issue Jan 7, 2023 · 0 comments

DaMiBear commented Jan 7, 2023

Hi, happy new year!
I'm confused about how the MACs of nn.BatchNorm2d are calculated.

```python
import torch
import torch.nn as nn


def count_normalization(m: nn.modules.batchnorm._BatchNorm, x, y):
    # TODO: add test cases
    # https://github.com/Lyken17/pytorch-OpCounter/issues/124
    # y = (x - mean) / sqrt(eps + var) * weight + bias
    x = x[0]
    # bn is by default fused in inference
    flops = calculate_norm(x.numel())
    if getattr(m, "affine", False) or getattr(m, "elementwise_affine", False):
        flops *= 2
    m.total_ops += flops


def calculate_norm(input_size):
    """input is a number, not an array or tensor"""
    return torch.DoubleTensor([2 * input_size])
```

As I understand it, the 2 * input_size in calculate_norm(input_size) already accounts for the MACs of all four per-element operations: subtract (mean), divide (std), multiply (weight), and add (bias). So why is flops (which here stands for MACs) multiplied by 2 again when affine is set?
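
To make the question concrete, here is a minimal sketch comparing what the counter above reports against a hand count of per-element MACs. The layer width and input shape are arbitrary, chosen only for illustration:

```python
import torch
import torch.nn as nn
from thop import profile

# Hypothetical example: a standalone BatchNorm2d layer (sizes picked arbitrarily).
bn = nn.BatchNorm2d(16).eval()   # affine=True by default
x = torch.randn(1, 16, 32, 32)   # x.numel() = 16 * 32 * 32 = 16384

macs, params = profile(bn, inputs=(x,))
print(macs)
# With the counter above: 2 * 16384 = 32768, then doubled for affine -> 65536.
# Counting one MAC per (mul, add) pair instead:
#   unfused: (x - mean), / sqrt(var + eps), * weight, + bias  ->  ~2 MACs/element
#   fused:   y = x * scale + shift                            ->   1 MAC/element
```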
