Add asymmetric padding support for conv and pool operations #4263
Conversation
This commit updates the padding configuration for 1D and 2D convolution and pooling modules to support asymmetric padding. The code now detects asymmetric padding and applies explicit padding using the pad operation before the main convolution or pooling operation. Tests and code generation are updated to handle and verify asymmetric padding cases. Asymmetric 3D padding is not supported and will panic at runtime.
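The shape bookkeeping behind this decomposition can be sketched in plain Rust (a standalone sketch, not burn code; `conv1d_out_len` is a hypothetical helper): padding explicitly and then convolving with zero padding yields the same output length as a convolution that pads per side internally.

```rust
/// Output length of a 1D convolution, using the standard formula
/// out = floor((in + pad_left + pad_right - dilation * (kernel - 1) - 1) / stride) + 1.
fn conv1d_out_len(
    input: usize,
    kernel: usize,
    stride: usize,
    dilation: usize,
    pad_left: usize,
    pad_right: usize,
) -> usize {
    (input + pad_left + pad_right - dilation * (kernel - 1) - 1) / stride + 1
}

fn main() {
    // Asymmetric padding (1, 2) on a length-10 input, kernel 3, stride 1:
    let direct = conv1d_out_len(10, 3, 1, 1, 1, 2);
    // Decomposed form: pad explicitly (length becomes 10 + 1 + 2 = 13),
    // then convolve with zero padding; the result length is identical.
    let decomposed = conv1d_out_len(13, 3, 1, 1, 0, 0);
    assert_eq!(direct, decomposed);
    println!("{direct}"); // both are 11
}
```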
Introduces new ONNX models and Rust/Python scripts to test asymmetric padding for AvgPool1d/2d, MaxPool1d/2d, and Conv1d/2d. Updates build.rs to include these models, adds a dependency on onnxscript, and extends the test modules to verify output shapes and sums for correctness.
Pull request overview
This PR adds comprehensive support for asymmetric (per-side) padding in convolution and pooling operations, addressing limitations where only symmetric padding was previously supported. The implementation correctly handles asymmetric padding by applying an explicit pad operation before the convolution/pooling operation.
Key Changes:
- Extended `PaddingConfig1d`, `PaddingConfig2d`, and `PaddingConfig3d` to support per-side padding values (breaking API change)
- Implemented asymmetric padding handling in Conv1d, Conv2d, MaxPool1d, MaxPool2d, AvgPool1d, and AvgPool2d (Conv3d panics with a clear error, as asymmetric 3D padding is not yet supported)
- Updated ONNX import pipeline to correctly parse and convert asymmetric padding from ONNX models
- Added comprehensive test coverage including unit tests and ONNX integration tests
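As a rough illustration of the extended config (a hypothetical standalone sketch; the real `burn-nn` types and the exact signatures of `is_asymmetric()` and `as_tuple()` may differ):

```rust
/// Hypothetical sketch of the extended 1D padding config described in the PR;
/// the real burn-nn type has more variants and computed "same" padding.
#[derive(Debug, Clone, PartialEq)]
enum PaddingConfig1d {
    Valid,
    Same,
    /// Per-side padding: (left, right).
    Explicit(usize, usize),
}

impl PaddingConfig1d {
    /// True when the two sides differ, so an explicit pad op is needed.
    fn is_asymmetric(&self) -> bool {
        matches!(self, Self::Explicit(l, r) if l != r)
    }

    /// Per-side values as a (left, right) tuple; `Valid` means zero padding.
    fn as_tuple(&self) -> (usize, usize) {
        match self {
            Self::Valid => (0, 0),
            Self::Same => unimplemented!("depends on input/kernel/stride"),
            Self::Explicit(l, r) => (*l, *r),
        }
    }
}

fn main() {
    assert!(PaddingConfig1d::Explicit(1, 2).is_asymmetric());
    assert!(!PaddingConfig1d::Explicit(2, 2).is_asymmetric());
    assert_eq!(PaddingConfig1d::Explicit(1, 2).as_tuple(), (1, 2));
}
```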
Reviewed changes
Copilot reviewed 35 out of 41 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| crates/onnx-ir/src/node/padding.rs | Extended PaddingConfig enums to support per-side padding, removed asymmetric padding restrictions, added helper methods |
| crates/onnx-ir/src/node/{conv,pool}*.rs | Updated tests to match new PaddingConfig API signatures |
| crates/burn-nn/src/padding.rs | Extended PaddingConfig enums, added is_asymmetric() and as_tuple() methods, updated documentation |
| crates/burn-nn/src/modules/conv/*.rs | Implemented asymmetric padding via explicit pad operations for Conv1d/Conv2d, added panic for Conv3d |
| crates/burn-nn/src/modules/pool/*.rs | Implemented asymmetric padding via explicit pad operations with appropriate pad values for each pooling type |
| crates/burn-import/src/burn/codegen.rs | Updated ToTokens implementations to generate code for new PaddingConfig signatures |
| crates/burn-import/src/burn/node/*.rs | Updated codegen for conv and pool nodes to handle asymmetric padding |
| crates/burn-import/onnx-tests/tests/* | Added comprehensive integration tests with PyTorch/ONNX models for asymmetric padding |
| crates/burn-import/onnx-tests/build.rs | Registered new test model files |
| crates/burn-import/onnx-tests/pyproject.toml | Added onnxscript dependency |
```rust
/// Applies explicit padding values.
/// Format: (left, right)
/// For symmetric padding, use the same value for both (e.g., `Explicit(1, 1)`).
Explicit(usize, usize),
```
Copilot AI (Dec 29, 2025)
This is a breaking API change. The previous PaddingConfig1d::Explicit(usize) signature has been changed to PaddingConfig1d::Explicit(usize, usize) to support asymmetric padding. Users of the old API will need to update their code from Explicit(n) to Explicit(n, n) for symmetric padding. Similarly, PaddingConfig2d::Explicit(h, w) becomes Explicit(h, w, h, w) and PaddingConfig3d::Explicit(d, h, w) becomes Explicit(d, h, w, d, h, w). Consider documenting this migration path prominently in release notes or migration guides.
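A hedged sketch of the migration for the 2D case, modeled with local stand-in enums rather than the real `burn-nn` types:

```rust
// Hypothetical before/after for the breaking Explicit(...) change:
// each per-dimension value becomes a per-side value.
mod before {
    pub enum PaddingConfig2d {
        Explicit(usize, usize), // (height, width), applied to both sides
    }
}
mod after {
    pub enum PaddingConfig2d {
        Explicit(usize, usize, usize, usize), // (top, left, bottom, right)
    }
}

/// Mechanical migration for symmetric cases: duplicate each value per side.
fn migrate(old: before::PaddingConfig2d) -> after::PaddingConfig2d {
    match old {
        before::PaddingConfig2d::Explicit(h, w) => {
            after::PaddingConfig2d::Explicit(h, w, h, w)
        }
    }
}

fn main() {
    let after::PaddingConfig2d::Explicit(t, l, b, r) =
        migrate(before::PaddingConfig2d::Explicit(1, 2));
    assert_eq!((t, l, b, r), (1, 2, 1, 2));
}
```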
Update to use 4-argument format (top, left, bottom, right) for explicit padding.
laggui left a comment
Just a couple of changes needed, on top of the compilation error due to breaking Explicit(...) enum variant change.
```rust
// Handle asymmetric padding by applying explicit pad operation first
if self.padding.is_asymmetric() {
    let (top, left, bottom, right) = self.padding.as_tuple();
    // Burn's pad takes (left, right, top, bottom) for the last two dimensions
    let padded = input.pad((left, right, top, bottom), PadMode::Constant(0.0));
    // Use zero padding for the conv operation since we already padded
    conv2d(
        padded,
        self.weight.val(),
        self.bias.as_ref().map(|bias| bias.val()),
        ConvOptions::new(self.stride, [0, 0], self.dilation, self.groups),
    )
} else {
    let [_batch_size, _channels_in, height_in, width_in] = input.dims();
    let padding = self.padding.calculate_padding_2d(
        height_in,
        width_in,
        &self.kernel_size,
        &self.stride,
    );
    conv2d(
        input,
        self.weight.val(),
        self.bias.as_ref().map(|bias| bias.val()),
        ConvOptions::new(self.stride, padding, self.dilation, self.groups),
    )
}
```
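The equivalence this branch relies on can be checked with a minimal scalar 1D cross-correlation (plain Rust, independent of burn; names are illustrative): padding the input explicitly and convolving with zero padding gives the same values as a convolution that pads (left, right) internally.

```rust
/// Minimal 1D cross-correlation that zero-pads the input per side first.
fn conv1d(input: &[f32], kernel: &[f32], left: usize, right: usize) -> Vec<f32> {
    let mut padded = vec![0.0; left];
    padded.extend_from_slice(input);
    padded.extend(std::iter::repeat(0.0).take(right));
    padded
        .windows(kernel.len())
        .map(|w| w.iter().zip(kernel).map(|(x, k)| x * k).sum())
        .collect()
}

fn main() {
    let input = [1.0, 2.0, 3.0, 4.0];
    let kernel = [1.0, 0.5, -1.0];

    // Asymmetric padding handled inside the conv...
    let direct = conv1d(&input, &kernel, 1, 2);

    // ...versus an explicit pad followed by a zero-padding conv.
    let mut padded_input = vec![0.0];
    padded_input.extend_from_slice(&input);
    padded_input.extend([0.0, 0.0]);
    let decomposed = conv1d(&padded_input, &kernel, 0, 0);

    assert_eq!(direct, decomposed);
    println!("{direct:?}");
}
```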
I think this should be applied at the functional conv1d level, otherwise there is a discrepancy when calling the module function directly. Same for the other nn module changes in the PR.
And perhaps we should add a note that asymmetric padding involves an explicit pad operation.
```diff
 Self::Same => same_padding(),
-Self::Explicit(value) => *value,
+Self::Explicit(left, right) => {
+    if left != right {
+        panic!("Asymmetric padding should be handled separately via is_asymmetric()")
+    }
+    *left
+}
```
This only handles explicit asymmetric padding, but when Same padding is used with even kernel sizes the resulting padding could be asymmetric.
So this further reinforces my previous comment, explicit padding check should be applied at the module function level instead (after actual padding sizes have been computed).
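The even-kernel case can be illustrated with a tiny sketch (stride 1 assumed; putting the extra padding on the right mirrors the ONNX `SAME_UPPER` convention, and frameworks differ on this):

```rust
/// "Same" padding for stride 1: total padding is kernel - 1, split across
/// sides. For even kernels the total is odd, so the split is necessarily
/// asymmetric even though the user never asked for explicit asymmetric pads.
fn same_padding(kernel: usize) -> (usize, usize) {
    let total = kernel - 1;
    (total / 2, total - total / 2) // (left, right)
}

fn main() {
    assert_eq!(same_padding(3), (1, 1)); // odd kernel: symmetric
    assert_eq!(same_padding(4), (1, 2)); // even kernel: asymmetric
}
```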
I've also been looking into this to support models like […]. Since native support in […], instead of changing the runtime […]

This allows existing Burn versions to run these models without waiting for a breaking release or modifying the core tensor API. I have a branch with the initial code as a PoC. I might need to add the tests, but it's pretty much ready already.
Here's my commit on my fork. I think this path is the one of least resistance with the fewest breaking changes: 258d47b
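The import-time rewrite could look roughly like this (hypothetical stand-in types, not the actual onnx-ir structures or the linked commit): a Conv node carrying asymmetric pads is split into an explicit Pad node followed by a Conv with zero pads, so the runtime API never sees asymmetric padding.

```rust
/// Stand-in for an imported graph node; the real onnx-ir types differ.
#[derive(Debug, Clone, PartialEq)]
enum Node {
    Pad { pads: Vec<usize>, value: f32 },
    Conv { pads: Vec<usize> }, // ONNX order: [begin_0, .., begin_n, end_0, .., end_n]
}

/// Rewrite a Conv with asymmetric pads into Pad + zero-padded Conv;
/// leave everything else untouched.
fn decompose(node: Node) -> Vec<Node> {
    match node {
        Node::Conv { pads } => {
            let rank = pads.len() / 2;
            let asymmetric = (0..rank).any(|i| pads[i] != pads[i + rank]);
            if asymmetric {
                vec![
                    Node::Pad { pads: pads.clone(), value: 0.0 },
                    Node::Conv { pads: vec![0; pads.len()] },
                ]
            } else {
                vec![Node::Conv { pads }]
            }
        }
        other => vec![other],
    }
}

fn main() {
    // Symmetric pads pass through unchanged.
    assert_eq!(decompose(Node::Conv { pads: vec![1, 1, 1, 1] }).len(), 1);
    // Asymmetric pads become Pad + Conv.
    assert_eq!(decompose(Node::Conv { pads: vec![0, 0, 1, 1] }).len(), 2);
}
```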
@khengari77 thanks for looking into this as well. I thought about going this route, but it only benefits ONNX users, and if users want to shift to the Burn API they would need to do the padding manually. I believe for this release we are okay to introduce a breaking change.
Actually, I think you could create an abstraction over padding that internally decomposes in the same way. Again, this would not introduce any breaking changes. Anyway, if you see that the other choice is better in the long term, then that's something else. Thanks for your response.
Checklist
- The `cargo run-checks` command has been executed.

Related Issues/PRs
Fixes #4253
Fixes #1370
Related #2676
Changes
- `burn-nn`
- `onnx-ir`
- `burn-import`
Testing
Breaking
`PaddingConfig`'s `Explicit` variant expanded.