
Conversation

@Rohanjames1997 (Contributor) commented Dec 19, 2025

Description

This PR adds a BF16 (bfloat16) pointwise convolution kernel for the ARM64 NCHWc format, leveraging the existing SBGEMM infrastructure. When the `mlas.enable_gemm_fastmath_arm64_bfloat16` session option is enabled on supported ARM64 Linux hardware, pointwise Conv is rerouted to this BF16 implementation. This is an opt-in feature, just as BF16 MatMul is.
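
For anyone trying this out: the option is an ordinary session config entry. A minimal C++ sketch of enabling it (the model path here is a placeholder):

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "bf16-demo");
  Ort::SessionOptions so;
  // Opt in to the BF16 fastmath path; it only takes effect on
  // supported ARM64 Linux hardware.
  so.AddConfigEntry("mlas.enable_gemm_fastmath_arm64_bfloat16", "1");
  Ort::Session session(env, "mobilenet.onnx", so);  // placeholder model path
  return 0;
}
```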

Added a `bool ZeroMode` field to `MLAS_SBGEMM_DATA_PARAMS` (default `true` for backward compatibility) to enable per-batch control over output accumulation. This mirrors the beta parameter in FP32's `MlasGemmBatch` and is required for pointwise convolutions with >128 input channels, where multiple GEMM calls must accumulate into the same output buffer.
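
To illustrate that accumulation pattern (a hedged sketch of the semantics only — the function name and chunking loop are illustrative, not the actual MLAS signature):

```cpp
#include <cstddef>

// ZeroMode == true  -> C  = A_chunk * B_chunk   (overwrite, like beta = 0)
// ZeroMode == false -> C += A_chunk * B_chunk   (accumulate, like beta = 1)
void GemmChunk(const float* A, const float* B, float* C,
               size_t M, size_t N, size_t K, bool ZeroMode) {
  for (size_t m = 0; m < M; ++m)
    for (size_t n = 0; n < N; ++n) {
      float acc = ZeroMode ? 0.0f : C[m * N + n];
      for (size_t k = 0; k < K; ++k)
        acc += A[m * K + k] * B[k * N + n];
      C[m * N + n] = acc;
    }
}

// A pointwise conv with more than 128 input channels splits the reduction
// dimension across several GEMM calls into one output buffer: only the
// first chunk zeroes the output, the rest accumulate.
//   for (size_t c0 = 0; c0 < InputChannels; c0 += 128)
//     GemmChunk(/*A chunk*/, /*B chunk*/, C, M, N, /*K=*/128, /*ZeroMode=*/c0 == 0);
```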

Motivation and Context

The existing `mlas.enable_gemm_fastmath_arm64_bfloat16` session option accelerates MatMul operations on ARM64 processors with BF16 support, but convolution operations did not benefit from this optimization. Pointwise convolutions (1x1 kernels) are essentially batched matrix multiplications.
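
Concretely (a plain-NCHW sketch of that equivalence — the real kernel works on the blocked NCHWc layout): with a 1x1 kernel and stride 1, the whole spatial plane collapses into one GEMM:

```cpp
#include <cstddef>

// 1x1 convolution over a single NCHW image expressed as a GEMM:
//   Output[Cout][H*W] = Weights[Cout][Cin] * Input[Cin][H*W]
void PointwiseConvAsGemm(const float* input,    // [Cin][H*W]
                         const float* weights,  // [Cout][Cin]
                         float* output,         // [Cout][H*W]
                         size_t Cin, size_t Cout, size_t HW) {
  for (size_t co = 0; co < Cout; ++co)
    for (size_t p = 0; p < HW; ++p) {
      float acc = 0.0f;
      for (size_t ci = 0; ci < Cin; ++ci)
        acc += weights[co * Cin + ci] * input[ci * HW + p];
      output[co * HW + p] = acc;
    }
}
```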

This change extends the BF16 fastmath optimization to pointwise NCHWc convolutions, reusing the same session option. The implementation mirrors the FP32 pointwise kernel structure while delegating the actual computation to SBGEMM, ensuring correctness and maintainability.

Performance improvement

Measured a 15-20% gain on Mobilenet inference on an AWS Graviton4 instance.

Before (FP32)

```
./build/Linux/Release/onnxruntime_perf_test -C "mlas.enable_gemm_fastmath_arm64_bfloat16|0" -x 32 -I -m times -r 2000 ~/scripts/mobilenet.onnx

Number of inferences per second: 559.154
```

After (BF16)

```
./build/Linux/Release/onnxruntime_perf_test -C "mlas.enable_gemm_fastmath_arm64_bfloat16|1" -x 32 -I -m times -r 2000 ~/scripts/mobilenet.onnx

Number of inferences per second: 651.221
```

@Rohanjames1997 (Contributor, Author)

@hariharans29 another PR that's up your alley.

Can you request a preliminary review from Copilot & run CI?

Thanks!

@hariharans29 requested a review from Copilot, December 21, 2025 05:23
@hariharans29 (Member)

/azp run Linux QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

Copilot AI (Contributor) left a comment

Pull request overview

This PR extends BF16 (bfloat16) precision optimization support to pointwise (1x1) NCHWc convolutions on ARM64 Linux platforms. The implementation leverages the existing SBGEMM infrastructure and the `mlas.enable_gemm_fastmath_arm64_bfloat16` session option, delivering a reported 15-20% performance improvement on Mobilenet inference.

Key changes:

- Adds BF16 pointwise convolution kernel (`MlasConvPointwiseBf16KernelNeon`) that delegates computation to SBGEMM
- Introduces `ZeroMode` field to `MLAS_SBGEMM_DATA_PARAMS` to enable accumulation control across multiple GEMM calls
- Routes pointwise convolutions to the BF16 implementation when fastmath mode is enabled on supported hardware (see the sketch below)
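
A rough, hypothetical sketch of what that routing amounts to (names and signatures here are illustrative, not the actual snchwc.cpp code):

```cpp
#include <cstddef>

// Illustrative function-pointer dispatch, mirroring how MLAS platform
// structs select kernels. Names and signatures are hypothetical.
using PointwiseKernel = void (*)(const float* input, const float* filter,
                                 float* output, size_t channels, size_t pixels);

struct PlatformSketch {
  PointwiseKernel ConvPointwiseKernel = nullptr;      // FP32 path, always set
  PointwiseKernel ConvPointwiseBf16Kernel = nullptr;  // set only on BF16-capable ARM64 Linux
};

void DispatchPointwise(const PlatformSketch& plat, bool UseBf16,
                       const float* in, const float* flt, float* out,
                       size_t ch, size_t px) {
  PointwiseKernel kernel =
      (UseBf16 && plat.ConvPointwiseBf16Kernel != nullptr)
          ? plat.ConvPointwiseBf16Kernel   // delegates to SBGEMM internally
          : plat.ConvPointwiseKernel;      // existing FP32 kernel
  kernel(in, flt, out, ch, px);
}
```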

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated no comments.

Summary per file:

| File | Description |
|---|---|
| onnxruntime/core/mlas/lib/sbconv_kernel_neon.cpp | New BF16 pointwise convolution kernel implementation using SBGEMM batch operations |
| onnxruntime/core/mlas/inc/mlas.h | Adds UseBf16 parameter to MlasNchwcConv API and ZeroMode field to MLAS_SBGEMM_DATA_PARAMS |
| onnxruntime/core/mlas/lib/sbgemm.h | Propagates ZeroMode parameter through SBGEMM packed/non-packed operations |
| onnxruntime/core/mlas/lib/snchwc.cpp | Adds UseBf16 parameter and conditional BF16 kernel selection logic |
| onnxruntime/core/mlas/lib/mlasi.h | Declares MlasConvPointwiseBf16KernelNeon and adds ConvPointwiseBf16Kernel to platform struct |
| onnxruntime/core/mlas/lib/platform.cpp | Initializes BF16 kernel pointer in ARM64 NEON platform initialization |
| onnxruntime/contrib_ops/cpu/nchwc_ops.h | Adds fastmath mode detection in constructor and member variable |
| onnxruntime/contrib_ops/cpu/nchwc_ops.cc | Passes BF16 flag to MlasNchwcConv based on session options |
| cmake/onnxruntime_mlas.cmake | Adds new source file with ARM BF16 compilation flags |


@Rohanjames1997 (Contributor, Author) commented Dec 22, 2025

Thanks @hariharans29!
Looks like the failures are due to inconsistent #ifdefs(?). I'm looking into it.
Do let me know if you have ideas too, but I may need you to rerun CI a few more times after I push fixes.

@Rohanjames1997 (Contributor, Author)

Since SBGEMM is compiled only on Linux, I have enabled this BF16 pointwise Conv kernel only on Linux as well.
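
(Illustratively, the kind of guard this implies — the condition here is hypothetical, not the actual MLAS/CMake check:)

```cpp
// Hypothetical compile-time guard: the BF16 pointwise kernel is built only
// where its SBGEMM dependency is built, i.e. ARM64 Linux.
#if defined(__aarch64__) && defined(__linux__)
// ... MlasConvPointwiseBf16KernelNeon and SBGEMM-backed paths compiled here ...
#endif
```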

Can the CI be run again? 🤞

@aviralagrawal

Exciting stuff. Looking forward to seeing this merged.

@Rohanjames1997 (Contributor, Author)

Thanks @aviralagrawal!

@hariharans29 gentle reminder 🙂

@hariharans29 (Member)

/azp run Linux QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@hariharans29 (Member)

> Thanks @aviralagrawal!
>
> @hariharans29 gentle reminder 🙂

Hi - sorry, I was OOF. Kicked off CI and will review this PR soon. Thanks!

@Rohanjames1997 (Contributor, Author) left a comment

Thanks for the review!
I've replied to the comments. I will wait for your reply before adding any more commits.

Meanwhile I am looking at the current CI failure.

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@hariharans29 (Member)

You can ignore the failing CUDA checks. They just need #27020 to go in.

@Rohanjames1997 (Contributor, Author)

Phew... Thanks 😅

@hariharans29 (Member)

Can you rebase as well? You may need some changes from main for some checks to pass.

@Rohanjames1997 (Contributor, Author)

Done. Ready for CI when you are. Feel free to merge if the CI passes 🤞

@hariharans29 (Member)

> Done. Ready for CI when you are. Feel free to merge if the CI passes 🤞

Thank you! I will take one final look this evening and merge, should everything look fine. Thanks again for the contribution!

@hariharans29 (Member)

/azp run Linux QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@hariharans29 (Member) commented Jan 16, 2026

Left a couple of comments - can you please address them when you get a chance? The rest of the code looks fine! Thank you again!

@hariharans29 (Member)

/azp run Linux QNN CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI,Windows ARM64 QNN CI Pipeline,Windows GPU Doc Gen CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 4 pipeline(s).

@hariharans29 merged commit 6d34aba into microsoft:main Jan 18, 2026
88 checks passed
alex-spacemit pushed a commit to spacemit-com/onnxruntime that referenced this pull request Jan 20, 2026
tianleiwu pushed a commit that referenced this pull request Jan 21, 2026
hariharans29 pushed a commit that referenced this pull request Jan 21, 2026
### Description
`sconv.h` was renamed to `sconv_nchwc_kernel_neon.h` in #26688, but the reference to the old name was still in a new file added at around the same time in #26838. The CI doesn't include building for this configuration yet - it will be added after the 1.24 release.

### Motivation and Context
Fixes the failing mainline build on Arm64 Linux when `--enable_arm_neon_nchwc` is supplied.

### Testing
This now passes on Arm64 Linux:
```
./build.sh --config Release --build_shared_lib --parallel --compile_no_warning_as_error --skip_submodule_sync --skip_tests --enable_pybind --build_wheel --enable_arm_neon_nchwc
```
tianleiwu pushed a commit that referenced this pull request Jan 22, 2026
tianleiwu added a commit that referenced this pull request Jan 23, 2026
### Description
This PR cherry-picks the following changes for the 1.24.0 release.

### Cherry-picked Commits
| Commit | Commit Title | Author |
|---|---|---|
| 744e7fe | Add type definitions, registration, utilities for INT2/UINT2 support (#26824) | vraspar |
| 530a1fb | [QNN EP] Add BFloat16 dtype support in QNN EP (#26987) | tirupath-qti |
| 8e050d1 | Implement new experimental lookup-based matrix multiplication method(TMAC) (#26695) | vraspar |
| 2d2ba6b | [MLAS/CPU EP] Improve performance of Silu activation path within the QuickGelu CPU kernel (#26753) | Hariharan Seshadri |
| 1c02b79 | [QNN EP] Add support for handling 0-dimension for Concat Op (#27000) | Ashwath Shankarnarayan |
| cc2b01b | Fix ClipQuantFusion crash when Clip has multiple input edges (#27016) | Edward Chen |
| bbd3850 | [QNN EP] Support quantized BatchNorm with per-channel DQ params on QNN HTP (#26959) | qti-yuduo |
| d8f0318 | Add API to get ep graph partitioning info (#26781) | Adrian Lizarraga |
| b912b18 | [OVEP] OpenVINO EP Features and bug-fixes for ORT-1.24 - Follow up (#27007) | Preetha Veeramalai |
| ba11af4 | [QNN-EP] Add MatMulNBits translation for GPU (#26340) | quic-tirupath |
| c03c419 | [MLAS/NEON] Add dedicated kernel for depthwise convolution for ARM64 using NEON intrinsics (#26688) | Hariharan Seshadri |
| e7dfd69 | [QNN-EP] Support alternate Layernorm fusion pattern in QNN preprocess (#26060) | qti-mattsinc |
| 4013dc1 | Implement multithreading in qgemm_kleidi (#26301) | Melike Kaptan |
| 9f06181 | [CXX] Enable users to specify custom OrtSyncStream via RunOptions (#26988) | Dmitri Smirnov |
| cfccd64 | Added support for QMX kernels in MLAS (#26849) | qti-vaiskv |
| 29d9b2f | Tweak external resource importer handle structs (#27040) | Scott McKay |
| 9d108d0 | [QNN EP] Add QuickGELU operator support for QNN provider (#27034) | tirupath-qti |
| b35688f | Add INT2 and UINT2 support for QDQ, transpose and cast ops (#27022) | vraspar |
| 6d34aba | Introducing BF16 Pointwise NCHWc Convolution for Arm64 (#26838) | Rohanjames1997 |
| 36017ad | [EP ABI] Add CreateCustomOpDomains() API for plugin EP to register custom ops (#27050) | Chi Lo |
| 50a03e4 | Add a new pipeline for CUDA 13 nuget builds (#27023) | eserscor |
| a0d4439 | [EP ABI] Update Graph_GetGraphView() implementation (#26711) | Chi Lo |
| 34bb209 | [webgpu] Fix a bug for im2col (#27069) | Wenqin Yang |
| 46e8d45 | [QNN EP] Add FusedMatMul operator support (#27044) | tirupath-qti |
| 5e7e7a3 | Disable Float32_2Bits_Asymmetric_256x256 test (#27046) | vraspar |
| 39f966e | Fix Doxygen documentation build error in onnxruntime_c_api.h (#27083) | Nick Eubanks |
| 8a7a797 | Print tensor for new packed type of 2 bits (#27064) | Tianlei Wu |
| 01f40e6 | Fix GPU JAR testing on Linux (#27011) | eserscor |
| b6ed7f3 | Fix warning around unused code in QNN Android Emulator builds by clang (#27026) | Hariharan Seshadri |
| d7daa45 | Raise the timeout for the ios simulator job (#27045) | Hariharan Seshadri |
| 7e1d818 | upgrade emsdk to 4.0.23 (#27029) | Yulong Wang |
| 347b990 | Fix failing mainline build on Arm64 linux (#27101) | Rohanjames1997 |
| f481b17 | Add dedicated API to support extracting compatibility string from model metadata (#27015) | adrastogi |

