Conversation

@antimora (Collaborator) commented Dec 29, 2025

  • Adds support for asymmetric (explicit) padding in Conv1d, Conv2d, Conv3d, MaxPool1d, MaxPool2d, AvgPool1d, and AvgPool2d operations
  • Extends PaddingConfig1d and PaddingConfig2d with Explicit variants that specify per-side padding values
  • Updates ONNX import to correctly parse and convert asymmetric padding from ONNX models

Checklist

  • Confirmed that cargo run-checks command has been executed.
  • Made sure the book is up to date with changes in this PR.

Related Issues/PRs

Fixes #4253
Fixes #1370
Related #2676

Changes

burn-nn

  • Extended PaddingConfig1d with an Explicit(left, right) variant
  • Extended PaddingConfig2d with an Explicit(top, left, bottom, right) variant (both sketched below)
  • Added forward pass implementations for asymmetric padding in all conv and pool modules
  • Added comprehensive unit tests for asymmetric padding
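
For illustration, a minimal sketch of configuring modules with the extended variants (the channel, kernel, and padding values here are hypothetical; paths and builder methods follow Burn's public config API):

```rust
use burn::nn::conv::Conv2dConfig;
use burn::nn::pool::MaxPool1dConfig;
use burn::nn::{PaddingConfig1d, PaddingConfig2d};

fn main() {
    // Asymmetric 2D padding: one extra row at the bottom, one extra column on the right.
    let _conv = Conv2dConfig::new([3, 16], [3, 3])
        .with_padding(PaddingConfig2d::Explicit(0, 0, 1, 1)); // (top, left, bottom, right)

    // Symmetric 1D padding now spells out both sides.
    let _pool = MaxPool1dConfig::new(2)
        .with_padding(PaddingConfig1d::Explicit(1, 1)); // (left, right)
}
```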

onnx-ir

  • Updated padding parsing to preserve full asymmetric padding values from ONNX pads attribute
  • Changed from the symmetric-only [height, width] representation to the full [top, left, bottom, right] representation (sketched below)
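
For reference, ONNX stores pads as [x1_begin, x2_begin, ..., x1_end, x2_end], so a 2D pads attribute is already in [top, left, bottom, right] order. A hypothetical helper sketching the conversion (the function name is illustrative, not the PR's actual code):

```rust
/// Hypothetical sketch: map a 2D ONNX `pads` attribute to (top, left, bottom, right).
/// ONNX orders pads as [x1_begin, x2_begin, x1_end, x2_end] for two spatial axes.
fn pads_to_tlbr(pads: &[i64]) -> (usize, usize, usize, usize) {
    assert_eq!(pads.len(), 4, "expected 2D pads");
    (
        pads[0] as usize, // x1_begin = top
        pads[1] as usize, // x2_begin = left
        pads[2] as usize, // x1_end   = bottom
        pads[3] as usize, // x2_end   = right
    )
}
```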

burn-import

  • Updated codegen to emit PaddingConfig::Explicit when ONNX models use asymmetric padding
  • Added ONNX integration tests with explicitly constructed ONNX models (no separate Pad nodes)

Testing

  • Unit tests for asymmetric padding in burn-nn (Conv1d, Conv2d, MaxPool1d, MaxPool2d, AvgPool1d, AvgPool2d)
  • ONNX integration tests with ReferenceEvaluator/InferenceSession verification
  • cargo test --package onnx-tests -- asymmetric passes

Breaking

PaddingConfig's Explicit variant expanded

This commit updates the padding configuration for the 1D and 2D convolution and pooling modules to support asymmetric padding. The code now detects asymmetric padding and applies an explicit pad operation before the main convolution or pooling operation. Tests and code generation are updated to handle and verify asymmetric padding cases. Asymmetric 3D padding is not supported and will panic at runtime.
Introduces new ONNX models and Rust/Python scripts to test asymmetric padding for AvgPool1d/2d, MaxPool1d/2d, and Conv1d/2d. Updates build.rs to include these models, adds a dependency on onnxscript, and extends the test modules to verify output shapes and sums for correctness.
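
A minimal migration sketch for existing users (the values are hypothetical):

```rust
use burn::nn::{PaddingConfig1d, PaddingConfig2d};

fn main() {
    // Before this PR: PaddingConfig1d::Explicit(1) and PaddingConfig2d::Explicit(1, 2).
    // After: repeat each value per side to keep the old symmetric behavior.
    let _pad_1d = PaddingConfig1d::Explicit(1, 1);       // (left, right)
    let _pad_2d = PaddingConfig2d::Explicit(1, 2, 1, 2); // (top, left, bottom, right)
}
```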

Copilot AI (Contributor) left a comment

Pull request overview

This PR adds comprehensive support for asymmetric (per-side) padding in convolution and pooling operations, addressing limitations where only symmetric padding was previously supported. The implementation correctly handles asymmetric padding by applying an explicit pad operation before the convolution/pooling operation.

Key Changes:

  • Extended PaddingConfig1d, PaddingConfig2d, and PaddingConfig3d to support per-side padding values (breaking API change)
  • Implemented asymmetric padding handling in Conv1d, Conv2d, MaxPool1d, MaxPool2d, AvgPool1d, and AvgPool2d (Conv3d panics with a clear error, as asymmetric 3D padding is not yet supported)
  • Updated ONNX import pipeline to correctly parse and convert asymmetric padding from ONNX models
  • Added comprehensive test coverage including unit tests and ONNX integration tests

Reviewed changes

Copilot reviewed 35 out of 41 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| crates/onnx-ir/src/node/padding.rs | Extended PaddingConfig enums to support per-side padding, removed asymmetric padding restrictions, added helper methods |
| crates/onnx-ir/src/node/{conv,pool}*.rs | Updated tests to match new PaddingConfig API signatures |
| crates/burn-nn/src/padding.rs | Extended PaddingConfig enums, added is_asymmetric() and as_tuple() methods, updated documentation |
| crates/burn-nn/src/modules/conv/*.rs | Implemented asymmetric padding via explicit pad operations for Conv1d/Conv2d, added panic for Conv3d |
| crates/burn-nn/src/modules/pool/*.rs | Implemented asymmetric padding via explicit pad operations with appropriate pad values for each pooling type |
| crates/burn-import/src/burn/codegen.rs | Updated ToTokens implementations to generate code for new PaddingConfig signatures |
| crates/burn-import/src/burn/node/*.rs | Updated codegen for conv and pool nodes to handle asymmetric padding |
| crates/burn-import/onnx-tests/tests/* | Added comprehensive integration tests with PyTorch/ONNX models for asymmetric padding |
| crates/burn-import/onnx-tests/build.rs | Registered new test model files |
| crates/burn-import/onnx-tests/pyproject.toml | Added onnxscript dependency |


```rust
/// Applies explicit padding values.
/// Format: (left, right)
/// For symmetric padding, use the same value for both (e.g., `Explicit(1, 1)`).
Explicit(usize, usize),
```
Copilot AI commented Dec 29, 2025

This is a breaking API change. The previous PaddingConfig1d::Explicit(usize) signature has been changed to PaddingConfig1d::Explicit(usize, usize) to support asymmetric padding. Users of the old API will need to update their code from Explicit(n) to Explicit(n, n) for symmetric padding. Similarly, PaddingConfig2d::Explicit(h, w) becomes Explicit(h, w, h, w) and PaddingConfig3d::Explicit(d, h, w) becomes Explicit(d, h, w, d, h, w). Consider documenting this migration path prominently in release notes or migration guides.

Update to use 4-argument format (top, left, bottom, right) for explicit padding.
@laggui (Member) left a comment

Just a couple of changes needed, on top of the compilation error due to the breaking Explicit(...) enum variant change.

Comment on lines +171 to +197
```rust
// Handle asymmetric padding by applying explicit pad operation first
if self.padding.is_asymmetric() {
    let (top, left, bottom, right) = self.padding.as_tuple();
    // Burn's pad takes (left, right, top, bottom) for the last two dimensions
    let padded = input.pad((left, right, top, bottom), PadMode::Constant(0.0));
    // Use zero padding for the conv operation since we already padded
    conv2d(
        padded,
        self.weight.val(),
        self.bias.as_ref().map(|bias| bias.val()),
        ConvOptions::new(self.stride, [0, 0], self.dilation, self.groups),
    )
} else {
    let [_batch_size, _channels_in, height_in, width_in] = input.dims();
    let padding = self.padding.calculate_padding_2d(
        height_in,
        width_in,
        &self.kernel_size,
        &self.stride,
    );
    conv2d(
        input,
        self.weight.val(),
        self.bias.as_ref().map(|bias| bias.val()),
        ConvOptions::new(self.stride, padding, self.dilation, self.groups),
    )
}
```
@laggui (Member) commented:

I think this should be applied at the functional conv1d level, otherwise there is a discrepancy when calling the module function directly. Same for the other nn module changes in the PR.

And perhaps we should add a note that asymmetric padding involves an explicit pad operation.
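
As a rough illustration of that suggestion, the asymmetry check could live in a shared functional-level wrapper so modules and direct calls agree. The wrapper name and signature below are hypothetical, and the pad call simply mirrors the snippet above; this is a sketch, not the PR's actual API:

```rust
use burn::prelude::*;
use burn::tensor::module::conv2d;
use burn::tensor::ops::ConvOptions;

/// Hypothetical functional-level wrapper: takes already-resolved per-side padding
/// (top, left, bottom, right) and decides internally whether an explicit pad is needed.
fn conv2d_padded<B: Backend>(
    input: Tensor<B, 4>,
    weight: Tensor<B, 4>,
    bias: Option<Tensor<B, 1>>,
    stride: [usize; 2],
    dilation: [usize; 2],
    groups: usize,
    (top, left, bottom, right): (usize, usize, usize, usize),
) -> Tensor<B, 4> {
    if top != bottom || left != right {
        // Asymmetric (explicit, or produced by Same with an even kernel):
        // pad first, then run the conv with zero padding.
        let padded = input.pad((left, right, top, bottom), PadMode::Constant(0.0));
        conv2d(padded, weight, bias, ConvOptions::new(stride, [0, 0], dilation, groups))
    } else {
        // Symmetric: let the conv op handle the padding directly.
        conv2d(input, weight, bias, ConvOptions::new(stride, [top, left], dilation, groups))
    }
}
```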

Comment on lines 32 to +38
```diff
  Self::Same => same_padding(),
- Self::Explicit(value) => *value,
+ Self::Explicit(left, right) => {
+     if left != right {
+         panic!("Asymmetric padding should be handled separately via is_asymmetric()")
+     }
+     *left
+ }
```
@laggui (Member) commented:

This only handles explicit asymmetric padding, but when Same padding is used with even kernel sizes the resulting padding could be asymmetric.

So this further reinforces my previous comment: the explicit padding check should be applied at the module function level instead (after the actual padding sizes have been computed).
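
A small worked example of that point, with stride 1 (the split convention shown is just one possibility):

```rust
/// With stride 1, a "same" output needs total padding of kernel - 1;
/// an even kernel makes that total odd, so no symmetric split exists.
fn same_padding_1d(kernel: usize) -> (usize, usize) {
    let total = kernel - 1;
    (total / 2, total - total / 2) // (left, right)
}

fn main() {
    assert_eq!(same_padding_1d(3), (1, 1)); // odd kernel: symmetric
    assert_eq!(same_padding_1d(4), (1, 2)); // even kernel: asymmetric
}
```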

@khengari77 commented:

Hi @antimora @laggui,

I've also been looking into this to support models that use asymmetric padding, such as Facial-Landmark-Detection, which I was working on.

Since native support in burn-nn involves breaking changes to the Explicit enum variants, I experimented with an alternative decomposition strategy strictly within burn-import.

Instead of changing the runtime PaddingConfig, my approach handles it during ONNX import code generation:

  1. onnx-ir: Parses asymmetric padding into a new internal Asymmetric variant (non-breaking for burn-core).
  2. burn-import: When generating code for Conv2d/Pool2d:
    • Configures the layer with PaddingConfig2d::Valid.
    • Injects a tensor.pad(...) call immediately before the layer in the generated forward method.

This allows existing Burn versions to run these models without waiting for a breaking release or modifying the core tensor API. I have a branch with the initial code as a PoC; I still need to add tests, but it's largely ready.
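
For illustration, the generated forward under that scheme might look roughly like this (the struct and field names are hypothetical, the pad values stand in for the model's ONNX pads, and the pad call mirrors the earlier snippet):

```rust
use burn::nn::conv::Conv2d;
use burn::prelude::*;

struct Model<B: Backend> {
    conv2d: Conv2d<B>, // configured with PaddingConfig2d::Valid
}

impl<B: Backend> Model<B> {
    pub fn forward(&self, input: Tensor<B, 4>) -> Tensor<B, 4> {
        // Injected by the importer: explicit pad carrying the ONNX `pads` values,
        // in (left, right, top, bottom) order.
        let padded = input.pad((1, 2, 0, 1), PadMode::Constant(0.0));
        self.conv2d.forward(padded)
    }
}
```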

@khengari77 commented:

Here's my commit on my fork; I think this path is the one of least resistance and introduces the fewest breaking changes: 258d47b

@antimora (Collaborator, Author) commented Jan 4, 2026

@khengari77 thanks for looking into this as well. I considered going this route, but it only benefits ONNX users, and if users want to shift to the Burn API they would need to do the padding manually.

I believe for this release we are okay to introduce a breaking change.

@khengari77 commented:

> @khengari77 thanks for looking into this as well. I considered going this route, but it only benefits ONNX users, and if users want to shift to the Burn API they would need to do the padding manually.
>
> I believe for this release we are okay to introduce a breaking change.

Actually, I think you could create an abstraction over padding that internally decomposes in the same way; again, this would not introduce any breaking changes. That said, if you find the other choice better in the long term, that's fair enough. Thanks for your response.


Labels

enhancement (Enhance existing features), onnx

Development

Successfully merging this pull request may close these issues.

  • [ONNX Import] Asymmetric padding is not supported
  • Asymmetric padding is not supported
