
Conversation

@milpuz01
Contributor

Overview

This PR adds ARM64 NEON assembly micro‑kernels for NCHW, depthwise, and pointwise convolution, wires them into the MLAS build, and adds shape‑based selection heuristics for NCHWC depthwise/pointwise to favor the asm kernels in safe cases (stride‑1 pointwise; wider depthwise outputs). The BF16 path is unchanged.

Key changes

  • cmake/onnxruntime_mlas.cmake
    • Add new AArch64 assembly sources for NCHW, depthwise, and pointwise conv to the MLAS build.
  • onnxruntime/core/mlas/lib/aarch64/SconvKernelNeon.S
    • New vectorised NCHW convolution micro‑kernel.
  • onnxruntime/core/mlas/lib/aarch64/SconvDepthwiseKernelNeon.S
    • New vectorised depthwise micro‑kernel (fast path for in‑bounds loads, slow path for padding).
  • onnxruntime/core/mlas/lib/aarch64/SconvPointwiseKernelNeon.S
    • New vectorised pointwise micro‑kernel (multi‑output reuse).
  • onnxruntime/core/mlas/lib/mlasi.h, onnxruntime/core/mlas/lib/platform.cpp
    • Declare/register new asm kernels and prefer them on ARM64.
  • onnxruntime/core/mlas/lib/snchwc.cpp
    • Heuristics: use the pointwise asm kernel when StrideHeight == 1 && StrideWidth == 1 and OutputThisIteration >= 4; use the depthwise asm kernel when OutputWidth >= 4 (a sketch of these conditions follows this list).
  • onnxruntime/core/mlas/lib/sbconv_kernel_neon.cpp
    • Include fix for the conv kernel flags header.
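
For reference, the selection conditions above can be summarised as a small C++ sketch. Only the condition expressions come from this PR's description; the helper names and surrounding structure are hypothetical and are not the actual snchwc.cpp code.

#include <cstddef>

// Hypothetical helpers mirroring the shape-based heuristics described above.
bool PreferPointwiseAsmKernel(size_t StrideHeight, size_t StrideWidth, size_t OutputThisIteration)
{
    // Stride-1 pointwise with at least a full 4-output tile, where the asm
    // kernel's multi-output filter reuse pays off.
    return StrideHeight == 1 && StrideWidth == 1 && OutputThisIteration >= 4;
}

bool PreferDepthwiseAsmKernel(size_t OutputWidth)
{
    // Wider depthwise output rows keep the vector lanes busy; narrower rows
    // stay on the existing intrinsics kernel.
    return OutputWidth >= 4;
}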

Performance

Numbers below are expressed as multipliers vs the non‑NCHWC baseline (same model and perf_test settings):

Baseline (no --enable_arm_neon_nchwc)

  • 8 cores: 1.00×
  • 16 cores: 1.00×

With --enable_arm_neon_nchwc (no asm additions/heuristics)

  • 8 cores: 1.18×
  • 16 cores: 1.24×

With this PR (asm kernels + heuristics)

  • 8 cores: 1.77×
  • 16 cores: 2.54×

Testing

  • ./build.sh --config Release --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests --enable_pybind --build_wheel --enable_arm_neon_nchwc
  • OMP_NUM_THREADS=8 ./build/Linux/Release/onnxruntime_perf_test -I -m times -r 1000 --x 8 ~/mobilenetv2-7.onnx

@aviralagrawal

Interesting contribution - thank you!

A few questions -

  1. Pointwise convolution is currently implemented via direct GEMM - which I assume is optimized. How does this kernel beat the performance of GEMM?
  2. Can you share the link to the mobilenet model that you used for performance benchmarking?
  3. How does it perform on single threaded experiments? Afaik, the original nchwc kernels in NEON kernels for NCHWc Convolution and Pooling #25580 suffered in the single threaded setting but outperformed the default in thread counts>8.

@milpuz01
Contributor Author

Hi @aviralagrawal, thank you very much for your prompt feedback.

  1. Pointwise convolution is currently implemented via direct GEMM - which I assume is optimized. How does this kernel beat the performance of GEMM?

Compared to the direct GEMM implementation of pointwise convolution, the asm kernel computes the 1x1 convolution directly:

  • it explicitly tiles 4 outputs: up to 4 output positions are computed in parallel and filter loads are reused across them, so a single filter load feeds 4 output accumulators, whereas direct GEMM does not tile multiple outputs together (see the sketch below)
  • it fuses accumulate/bias/ReLU into the store path instead of the separate passes needed with direct GEMM
  • it unrolls the block size explicitly (16 invocations) to keep the accumulators in registers and minimise loop overhead, thus reducing dispatch/parameter overhead and output read-modify-write passes compared to direct GEMM

As usual there are trade-offs. Direct GEMM would be faster when the output count is small, because the asm kernel then drops to its single-output path, which has less ILP and cannot reuse filter loads; when the stride is non-unit or the output regions are non-contiguous (hence the heuristics checking stride width and height); and when K/M are very large, where GEMM blocking can make better use of the caches than a fixed 4-output tile.
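
To make the multi-output tiling idea concrete, here is a minimal C++ sketch of a 1x1 (pointwise) NCHWc convolution with a 4-output tile and a fused ReLU on the store path. It is an illustration under assumed layouts, not the PR's assembly kernel: the block size is reduced to 4 for brevity, accumulate-mode flags and tail handling are omitted, and all names are hypothetical.

#include <algorithm>
#include <cstddef>

constexpr size_t BlockSize = 4; // illustrative only; the real NCHWc block is larger

// Hypothetical 4-output pointwise tile. Assumed layouts:
//   input:  [InputChannelBlocks][OutputCount][BlockSize]
//   filter: [InputChannelBlocks][BlockSize (input ch)][BlockSize (output ch)]
//   output: [OutputCount][BlockSize]
void PointwiseTile4(const float* input, const float* filter, const float* bias,
                    float* output, size_t input_channel_blocks, size_t output_count)
{
    for (size_t o = 0; o + 4 <= output_count; o += 4) {
        // Four accumulator tiles; in an asm kernel these live in NEON registers.
        float acc[4][BlockSize];
        for (size_t t = 0; t < 4; ++t)
            for (size_t c = 0; c < BlockSize; ++c)
                acc[t][c] = bias[c]; // seed accumulators with the bias

        for (size_t icb = 0; icb < input_channel_blocks; ++icb) {
            const float* f = filter + icb * BlockSize * BlockSize;
            const float* in = input + (icb * output_count + o) * BlockSize;
            for (size_t ic = 0; ic < BlockSize; ++ic) {
                // One filter row load is reused across all four output positions.
                for (size_t t = 0; t < 4; ++t) {
                    const float x = in[t * BlockSize + ic];
                    for (size_t c = 0; c < BlockSize; ++c)
                        acc[t][c] += x * f[ic * BlockSize + c];
                }
            }
        }

        // ReLU is fused into the store path rather than done as a second pass.
        for (size_t t = 0; t < 4; ++t)
            for (size_t c = 0; c < BlockSize; ++c)
                output[(o + t) * BlockSize + c] = std::max(acc[t][c], 0.0f);
    }
    // Leftover outputs (< 4) would fall back to a single-output path, omitted here.
}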

This is best illustrated by extracting the pointwise convolutions from the MobileNet model we ran: on average the asm implementation is 1.07x faster, and the significant speed-ups occur when the number of channels is high and K/M are small (in the image those are the H and W dimensions). In convolution-heavy networks the dominant convolutions are the ones with a large number of channels and small height and width, so we see visible performance improvements, as the optimisations in this PR are weighted in that direction.

[image: per-shape comparison of the asm pointwise kernel vs direct GEMM]
  2. Can you share the link to the mobilenet model that you used for performance benchmarking?

For benchmarking we used the model from: https://github.com/onnx/models/blob/main/validated/vision/classification/mobilenet/model/mobilenetv2-7.onnx

  3. How does it perform on single threaded experiments? Afaik, the original nchwc kernels in NEON kernels for NCHWc Convolution and Pooling #25580 suffered in the single threaded setting but outperformed the default in thread counts>8.

Running OMP_NUM_THREADS=1 ./build/Linux/Release/onnxruntime_perf_test -I -m times -r 1000 --x 1 ~/mobilenetv2-7.onnx on Graviton 4, a binary built with --enable_arm_neon_nchwc is slower (0.89x) than one built without that flag, while with this PR it is actually 1.25x faster than the baseline.

@Rohanjames1997
Contributor

Thanks @milpuz01 for the detailed description & comment!

A couple questions from my side:

  1. Is there a reason why ConvNchwcFloatKernel was not optimized? Afaik, it is not very different from ConvNchwFloatKernel. The x86 asm implementations of these two kernels differ very slightly too. It is a much heavier kernel than Pointwise and Depthwise, and many larger Conv models stress this kernel. An example for this type of model is in this comment: NEON kernels for NCHWc Convolution and Pooling #25580 (comment).

  2. Can we switch the default path of Fp32 Conv on Arm64 to use these new kernels? (effectively voiding --enable_arm_neon_nchwc like it was before) Asking because this PR improves upon the single-threaded performance as well. I'd love to hear your thoughts, but also would be wise to hear from @hariharans29 before implementing.

@milpuz01
Contributor Author

Hi @Rohanjames1997, thank you very much for your comments.

  1. Is there a reason why ConvNchwcFloatKernel was not optimized?

No particular reason; mostly the focus for this PR was on the MobileNet model, plus a lack of bandwidth. Thank you for sharing the model where ConvNchwcFloatKernel is invoked. We will take a look at optimising it too, but I would suggest adding that optimisation in a follow-up PR so that we do not overload this PR with too many changes to review.

  2. Can we switch the default path of Fp32 Conv on Arm64 to use these new kernels? (effectively voiding --enable_arm_neon_nchwc like it was before) Asking because this PR improves upon the single-threaded performance as well. I'd love to hear your thoughts, but also would be wise to hear from @hariharans29 before implementing.

Yes, I think that is a great idea, and it would be interesting to hear from @hariharans29 too about what additional testing we should do to make these kernels the default. As you can see above, this change is not going to accelerate every possible pointwise convolution, for example, but on average it shows improvements, so if we could agree on a set of performance targets we could use those to drive the decision.

Also, thank you for the code review comments; I will address them in a separate commit.

@hariharans29
Member

hariharans29 commented Jan 23, 2026


Unfortunately, I don't have a comprehensive list of performance targets to be met to make the feature default. Since the performance testing may not include all possible Conv shapes, I would like to err on the side of caution and at least provide one release-timeline heads-up to users before considering making the feature default. I would also encourage you to open a discussion to solicit feedback from other ORT users on ARM on whether they see speed-ups for their models with this feature. It would provide greater confidence and a strong data point for turning it on by default.

Thanks for this contribution; we will review it shortly!

@milpuz01
Contributor Author


Thanks @hariharans29. I agree with erring on the side of caution. If this PR goes through and makes it into the main release, is it possible to add a note that we would like to make --enable_arm_neon_nchwc the default in a future release, so that we can try to get some feedback via that route too? Thanks again.

@hariharans29
Member

hariharans29 commented Jan 26, 2026


Thanks @milpuz01. The PR should go into main eventually, but I don't think it will make 1.24.0 unfortunately, as the release branch is cut and the bar for taking in new code at this point is critical bug fixes and urgent customer asks only. I will try to take this in for 1.24.1 when it happens, and sure, I will add a note about considering making it default in one of the future releases; but ultimately, as discussed in the comment #27099 (comment), I expect the NchwcFloatKernel needs optimizations before considering that.

Contributor

Copilot AI left a comment


Pull request overview

Adds new AArch64 NEON assembly micro-kernels for NCHW, depthwise NCHWc, and pointwise NCHWc convolution, integrates them into the MLAS build, and updates NCHWc kernel-selection heuristics to prefer the asm kernels in selected shapes.

Changes:

  • Add new AArch64 .S convolution micro-kernels (NCHW, depthwise NCHWc, pointwise NCHWc) and wire them into the MLAS build.
  • Update ARM64 platform init and NCHWc execution heuristics to select asm kernels for pointwise (stride-1, larger tiles) and depthwise (wider outputs).
  • Remove the old intrinsics wrapper for the NCHW float kernel in the NCHWc NEON source file.

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.

Summary per file:

  • cmake/onnxruntime_mlas.cmake: Adds new AArch64 asm sources to the ARM NEON NCHWc MLAS build setup.
  • onnxruntime/core/mlas/lib/snchwc.cpp: Adds ARM64 heuristics to prefer asm depthwise/pointwise kernels in “safe” cases.
  • onnxruntime/core/mlas/lib/sconv_nchwc_kernel_neon.cpp: Removes the old NCHW float kernel wrapper implementation from the NCHWc NEON source file.
  • onnxruntime/core/mlas/lib/platform.cpp: Switches the ARM64 NCHW conv kernel default to asm; updates commentary around kernel choices.
  • onnxruntime/core/mlas/lib/mlasi.h: Declares new asm kernel entry points for ARM64 NEON NCHWc.
  • onnxruntime/core/mlas/lib/aarch64/SconvKernelNeon.S: Adds the new NCHW convolution asm micro-kernel.
  • onnxruntime/core/mlas/lib/aarch64/SconvDepthwiseKernelNeon.S: Adds the new depthwise NCHWc asm micro-kernel (fast/slow path for padding).
  • onnxruntime/core/mlas/lib/aarch64/SconvPointwiseKernelNeon.S: Adds the new pointwise NCHWc asm micro-kernel (multi-output reuse).


Comment on lines +677 to +681
// prologue -------------------------------------------------------------
stp x19,x20,[sp,#-.LFrame_SavedRegs]!
stp x21,x22,[sp,#.LFrame_x21_x22]
stp x23,x24,[sp,#.LFrame_x23_x24]
stp x25,x26,[sp,#.LFrame_x25_x26]

Copilot AI Jan 26, 2026


MlasConvNchwFloatKernelNeonAsm uses v8–v15 (e.g., in CONV2_NOPAD/POSTPROCESS paths) but the prologue only saves x19–x28. Per this repo’s AArch64 assembly convention/ABI notes (e.g., aarch64/ConvSymU8KernelNeon.S saves d8–d15), this function must preserve callee-saved FP/SIMD regs (at least d8–d15) when it clobbers them. Please save/restore d8–d15 (or refactor to only use caller-saved v registers) to avoid corrupting caller state.

this->ConvNchwFloatKernel = MlasConvNchwFloatKernelNeon;
// Prefer the hand written micro-kernel for the NCHW convolution path. It
// offers a tighter schedule and a specialised two-output inner loop that
// reduces pressure on the memory system compared

Copilot AI Jan 26, 2026


The comment is incomplete (ends with “compared”) and doesn’t form a full sentence. Please finish the sentence (e.g., “compared to the intrinsics kernel”) so the rationale is clear.

Suggested change
// reduces pressure on the memory system compared
// reduces pressure on the memory system compared to the intrinsics-based implementation.

Comment on lines +580 to +581
// reduces memory traffic. The AArch64 assembly kernel is picked up by
// heuristics in platform.cpp to avoid regressions on small convolutions.

Copilot AI Jan 26, 2026


This comment says the AArch64 pointwise asm kernel is selected by “heuristics in platform.cpp”, but the selection logic is in snchwc.cpp. Please update the comment to point to the correct location (or move the heuristic into platform dispatch if that’s the intent) to avoid confusion for future maintainers.

Suggested change
// reduces memory traffic. The AArch64 assembly kernel is picked up by
// heuristics in platform.cpp to avoid regressions on small convolutions.
// reduces memory traffic. The AArch64 assembly kernel is selected by
// heuristics in snchwc.cpp to avoid regressions on small convolutions.

// Compute the base pointers for this filter block.
madd x16,x14,x8,x2 // output pointer for this filter
madd x17,x14,x7,x1 // filter pointer for this filter
add x18,x10,x14,lsl #6 // bias pointer (if used)

Copilot AI Jan 26, 2026


This kernel uses x18 as a general-purpose register (e.g., bias pointer at line 139). On some AArch64 ABIs x18 is reserved as the platform register (the repo even notes this in aarch64/HalfGemmKernelNeon.S:22). Since setup_arm_neon_nchwc() doesn’t exclude these sources on Apple, enabling onnxruntime_USE_ARM_NEON_NCHWC on such platforms could clobber x18 and break TLS/platform state. Please avoid using x18 (pick another temp) or add platform-specific guarding in CMake so this file isn’t built where x18 is reserved.

Suggested change
add x18,x10,x14,lsl #6 // bias pointer (if used)
add x21,x10,x14,lsl #6 // bias pointer (if used)
