# auto-round-kernel installation method #1221
Merged
Commits (24)
32f8707 add ark install scripts and README.md (chensuyue)
cfa02f4 fix format (chensuyue)
862001f minor update (chensuyue)
a42cb67 add kernel-install in auto-round setup (chensuyue)
a43ae35 minor update (chensuyue)
ae0cd60 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
4039add format update (chensuyue)
52be92a fix issue (chensuyue)
c816e9b update ark install cmd (chensuyue)
0d4db1b update readme (chensuyue)
6f7b49e minor update (chensuyue)
786ef4a minor update (chensuyue)
f47974e Update the usage of new ARK functions (#1224) (luoyu-intel)
053584f update MD (luoyu-intel)
a799336 revert ipex; add windows for ark (luoyu-intel)
9b1a4e9 use Algorithm (luoyu-intel)
f5790b7 change description of torch version (luoyu-intel)
0cecd9a use 2.8 as minimum torch version (luoyu-intel)
92cb6f4 update README.md (chensuyue)
ce53d2b update ut scripts (chensuyue)
15d311a Merge branch 'main' into suyue/ark_install (chensuyue)
bf49e79 test with torch gpu (chensuyue)
914f62d Revert "test with torch gpu" (chensuyue)
2501ab4 Merge branch 'main' into suyue/ark_install (chensuyue)
## What is AutoRound Kernel?
AutoRound Kernel is a low-bit acceleration library for Intel platforms.

The kernels are optimized for the following CPUs:
* Intel Xeon Scalable processors (formerly Sapphire Rapids and Emerald Rapids)
* Intel Xeon 6 processors (formerly Sierra Forest and Granite Rapids)

The kernels are optimized for the following GPUs:
* Intel Arc B-Series Graphics and Intel Arc Pro B-Series Graphics (formerly Battlemage)
## Key Features
AutoRound Kernel provides weight-only linear computation for LLM inference. The supported weight-only quantization configurations are listed in the tables below:

### CPU
| Weight dtype | Compute dtype | Scale dtype | Algorithm<sup>[1]</sup> |
| ---------------------- | :----------------: | :---------------: | :--------: |
| INT8 | INT8<sup>[2]</sup> / BF16 / FP32 | BF16 / FP32 | sym / asym |
| INT7 | INT8 / BF16 / FP32 | BF16 / FP32 | sym / asym |
| INT6 | INT8 / BF16 / FP32 | BF16 / FP32 | sym / asym |
| INT5 | INT8 / BF16 / FP32 | BF16 / FP32 | sym / asym |
| INT4 | INT8 / BF16 / FP32 | BF16 / FP32 | sym / asym |
| INT3 | INT8 / BF16 / FP32 | BF16 / FP32 | sym / asym |
| INT2 | INT8 / BF16 / FP32 | BF16 / FP32 | sym / asym |
| INT1 | INT8 / BF16 / FP32 | BF16 / FP32 | sym / asym |
| FP8 (E4M3, E5M2) | BF16 / FP32 | FP32 / FP8 (E8M0) | NA |
| FP4 (E2M1) | BF16 / FP32 | BF16 / FP32 | NA |

### XPU
| Weight dtype | Compute dtype | Scale dtype | Algorithm |
| ---------------------- | :----------------: | :---------------: | :--------: |
| INT8 | INT8 / FP16 | FP16 | sym |
| INT4 | INT8 / FP16 | FP16 | sym |
| FP8 (E4M3, E5M2) | FP16 | FP16 / FP8 (E8M0) | NA |

<sup>[1]</sup>: Quantization algorithm for integer types: symmetric or asymmetric.
<sup>[2]</sup>: Includes dynamic activation quantization; results are dequantized to floating-point formats.
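To make the sym/asym column concrete, here is an illustrative sketch of symmetric vs. asymmetric integer quantization in plain Python, using INT4 as in the tables above. This is a textbook-style example, not the AutoRound Kernel implementation; function names are hypothetical.

```python
# Illustrative only: symmetric vs. asymmetric quantization for INT4.
# Symmetric maps to signed [-8, 7] with a scale; asymmetric maps to
# unsigned [0, 15] with a scale and a zero point.

def quantize_sym(w, num_bits=4):
    qmax = 2 ** (num_bits - 1) - 1                # 7 for INT4
    scale = max(abs(x) for x in w) / qmax
    q = [max(-qmax - 1, min(qmax, round(x / scale))) for x in w]
    return q, scale

def quantize_asym(w, num_bits=4):
    qmax = 2 ** num_bits - 1                      # 15 for INT4
    lo, hi = min(w), max(w)
    scale = (hi - lo) / qmax
    zero_point = round(-lo / scale)
    q = [max(0, min(qmax, round(x / scale) + zero_point)) for x in w]
    return q, scale, zero_point

w = [-0.5, -0.1, 0.2, 0.7]
q_sym, scale_s = quantize_sym(w)
q_asym, scale_a, zp = quantize_asym(w)
# Dequantize: sym -> q * scale_s ; asym -> (q - zp) * scale_a
```

Symmetric quantization is cheaper (no zero point) but wastes range on skewed weight distributions; asymmetric spends the full integer range on [min, max] at the cost of a per-group zero point, which is why both appear in the table.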
## Installation
### Install via pip
```bash
# Install the latest auto-round-kernel, which may upgrade your PyTorch version automatically
pip install auto-round-kernel
# Install auto-round-kernel pinned to a specific PyTorch version (e.g., v2.8.x)
pip install auto-round-kernel torch~=2.8.0
```
<details>
<summary>Other Installation Methods</summary>

### Install via Script
```bash
curl -fsSL https://raw.githubusercontent.com/intel/auto-round/main/auto_round_extension/ark/install_kernel.py -o install_kernel.py
python3 install_kernel.py
```
**Notes:**
This method is recommended if you want to keep your current PyTorch and auto-round versions.
The installation script detects the current environment and installs the matching auto-round-kernel version.
### Install via auto_round
```bash
pip install auto-round
auto-round-kernel-install
```

</details>
### Versioning Scheme
The auto-round-kernel version number follows the format:
`{auto-round major version}.{auto-round minor version}.{oneAPI version}.{kernel version}`

**For example: v0.9.1.1**
- The first two digits (0.9) correspond to the major and minor version of the auto_round framework.
- The third digit (1) represents the major version of Intel oneAPI: `1` indicates support for oneAPI 2025.1 (typically Torch 2.8), `2` indicates support for oneAPI 2025.2 (typically Torch 2.9).
- The final digit (1) is the patch version of auto-round-kernel, reflecting updates, bug fixes, or improvements to the kernel package itself.

**Version mapping table**

| auto-round-kernel Version | auto-round Version | oneAPI Version | Typical PyTorch Version |
|:-------------------------:|:------------------:|:--------------:|:-----------------------:|
| 0.9.1.x | 0.9.x | 2025.1 | 2.8.x |
| 0.9.2.x | 0.9.x | 2025.2 | 2.9.x |

**Notes:** The oneAPI version is aligned with the PyTorch version when the auto-round-kernel binaries are built, but the oneAPI toolkit is not required at runtime.
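As an illustration of the scheme, a small helper (hypothetical, not part of any auto-round package) can split a kernel version string into its documented components:

```python
# Hypothetical helper illustrating the versioning scheme above. The mapping
# of the third digit to oneAPI toolkit / typical torch follows the version
# mapping table; extend it as new oneAPI releases are supported.
ONEAPI_MAP = {1: ("2025.1", "2.8"), 2: ("2025.2", "2.9")}

def parse_kernel_version(version):
    """Split e.g. '0.9.1.1' into auto-round, oneAPI, and kernel-patch parts."""
    major, minor, oneapi_digit, patch = (int(p) for p in version.split("."))
    oneapi, torch_minor = ONEAPI_MAP[oneapi_digit]
    return {
        "auto_round": f"{major}.{minor}",
        "oneapi": oneapi,
        "typical_torch": torch_minor,
        "kernel_patch": patch,
    }

print(parse_kernel_version("0.9.1.1"))
```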
### Validated Hardware Environment
#### CPU based on [Intel 64 architecture or compatible processors](https://en.wikipedia.org/wiki/X86-64):
* Intel Xeon 6 processors (Granite Rapids)
#### GPU built on Intel's Xe architecture:
* Intel Arc B-Series Graphics (Battlemage)
---

The installation script:

```python
# Copyright (c) 2026 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import re
import subprocess
import sys


def get_torch_minor():
    try:
        import torch

        m = re.match(r"^(\d+)\.(\d+)", torch.__version__)
        return f"{m.group(1)}.{m.group(2)}" if m else None
    except ImportError:
        return None


def get_auto_round_minor():
    try:
        import auto_round

        m = re.match(r"^(\d+)\.(\d+)", auto_round.__version__)
        return f"{m.group(1)}.{m.group(2)}" if m else None
    except ImportError:
        return None


# Map the torch minor version to the matching kernel version
auto_round_minor = "0.9" if get_auto_round_minor() is None else get_auto_round_minor()
KERNEL_MAP = {
    "2.8": f"auto-round-kernel~={auto_round_minor}.1.0",
    "2.9": f"auto-round-kernel~={auto_round_minor}.2.0",
}


def main():
    torch_minor = get_torch_minor()
    if torch_minor and torch_minor in KERNEL_MAP:
        pkg = KERNEL_MAP[torch_minor]
        print(f"Detected torch {torch_minor}, installing {pkg} ...")
        subprocess.check_call([sys.executable, "-m", "pip", "install", pkg, "--upgrade-strategy", "only-if-needed"])
    else:
        print("torch not found or no mapping for your version. Installing the latest auto-round-kernel ...")
        subprocess.check_call([sys.executable, "-m", "pip", "install", "auto-round-kernel"])


if __name__ == "__main__":
    main()
```
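The core of the script is a pure lookup that never needs pip to reason about: torch minor version in, pip requirement string out, with an unpinned fallback. A minimal sketch of that selection logic (hypothetical function name, mirroring the `KERNEL_MAP` above):

```python
# Sketch of the install script's version-selection logic, without invoking pip.
# The "~=" compatible-release specifier pins the first three digits while
# allowing newer kernel patch versions.

def pick_kernel_requirement(torch_minor, auto_round_minor="0.9"):
    kernel_map = {
        "2.8": f"auto-round-kernel~={auto_round_minor}.1.0",
        "2.9": f"auto-round-kernel~={auto_round_minor}.2.0",
    }
    # Fall back to the unpinned package when there is no mapping
    # (or torch is not installed at all).
    return kernel_map.get(torch_minor, "auto-round-kernel")

print(pick_kernel_requirement("2.8"))  # auto-round-kernel~=0.9.1.0
print(pick_kernel_requirement("3.0"))  # auto-round-kernel
```

Keeping the mapping as data rather than branching code makes it easy to add a row when a new oneAPI/torch pairing ships.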