feat: kernel hub introduction draft #2777
base: main
Conversation
Nice, looking great! I did a quick early pass, feel free to ping again when you want!
Nice! But too wide I think, it will be cropped at the sides possibly hiding part of the title. The recommended aspect ratio is 2:1.
thanks! updated to be 2:1 in the latest commits
Reminder that we also have to add an entry to `_blog.yml` when you are ready to submit.
oh thanks for the tip, added an entry in the latest commit (and will make sure to bump when the article is ready)
thumbnail: /blog/assets/hello-hf-kernels/kernel-hub-five-mins-short.png
authors:
- user: drbh
date: 2025-03-28
Date goes in `_blog.yml`, using a format like "March 28, 2025".
thanks! updated in the latest commits
hello-hf-kernels.md
Outdated
# 🏎️ Learn the Hugging Face Kernel Hub in 5 Minutes

**Unlock performance boosts for your models with pre-optimized compute kernels, easily loaded from the Hub.**
**Unlock performance boosts for your models with pre-optimized compute kernels, easily loaded from the Hub.**
**Boost your model performance with pre-optimized kernels, easily loaded from the Hub.** |
Maybe, for simplification?
thanks! updated in the latest commits
hello-hf-kernels.md
Outdated
**Unlock performance boosts for your models with pre-optimized compute kernels, easily loaded from the Hub.**

Today, we'll explore an exciting development from Hugging Face: the **Kernel Hub**! As ML practitioners, we know that maximizing performance often involves diving deep into optimized code, custom CUDA kernels, or complex build systems. The Kernel Hub aims to simplify this dramatically.
Today, we'll explore an exciting development from Hugging Face: the **Kernel Hub**! As ML practitioners, we know that maximizing performance often involves diving deep into optimized code, custom CUDA kernels, or complex build systems. The Kernel Hub aims to simplify this dramatically.
Today, we'll explore an exciting development from Hugging Face: the **Kernel Hub**! As ML practitioners, we know that maximizing performance often involves diving deep into optimized code, custom CUDA kernels, or complex build systems. The Kernel Hub simplifies this process dramatically! |
oh this is better, updated in latest commit
hello-hf-kernels.md
Outdated
expected = torch.tensor(
    [
        [0.1100, 2.1309, -0.0700, 0.6802],
        [-0.0500, 0.4800, -0.1700, -0.1700],
        [0.3701, -0.1300, -0.0800, -0.1200],
        [-0.0400, 0.1200, -0.1500, 1.7998],
    ],
    dtype=torch.float16,
    device=DEVICE,
)
Perhaps an alternative could be to retrieve the reference results from PyTorch's gelu?
yea agreed that is a better example, updated in latest commit
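For reference, a rough sketch of what that PyTorch-based comparison could look like (not the PR's final code: the `gelu_fast` name and its `(out, input)` in-place signature are assumptions about the `kernels-community/activation` repo; check the repo for the actual interface):

~~~python
import torch
import torch.nn.functional as F
from kernels import get_kernel

DEVICE = "cuda"

# Fetch the activation kernels used elsewhere in the draft
activation_kernels = get_kernel("kernels-community/activation")

x = torch.randn(4, 4, dtype=torch.float16, device=DEVICE)

# Assumed interface: many kernels in this repo follow an (out, input)
# in-place convention; the exact function name/signature may differ.
out = torch.empty_like(x)
activation_kernels.gelu_fast(out, x)

# Compare against PyTorch's own GELU instead of hard-coded expected values.
# gelu_fast is a tanh approximation, hence approximate="tanh" and loose tolerances.
expected = F.gelu(x, approximate="tanh")
torch.testing.assert_close(out, expected, rtol=1e-2, atol=1e-2)
~~~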
hello-hf-kernels.md
Outdated
## 2. How to Use the Kernel Hub (Basic Example)

Using the Kernel Hub is designed to be straightforward. The `kernels` library provides the main interface. Here's a quick example loading an optimized GELU activation function kernel (we'll use a different kernel for the main example later).
Using the Kernel Hub is designed to be straightforward. The `kernels` library provides the main interface. Here's a quick example loading an optimized GELU activation function kernel (we'll use a different kernel for the main example later).
Using the Kernel Hub is designed to be straightforward. The `kernels` library provides the main interface. Here's a quick example that loads an optimized GELU activation function kernel. (Later on, we'll see another example of how to integrate a kernel into our model.)
thanks this reads better, updated in latest
**Important Notes on the `KernelModel`:**
* **Kernel Inheritance:** The `KernelRMSNorm` class inherits from `layer_norm_kernel_module.layers.LlamaRMSNorm`, which is the RMSNorm implementation in the kernel. This allows us to use the optimized kernel directly.
* **Accessing the Function:** The exact way to access the RMSNorm function (`layer_norm_kernel_module.layers.LlamaRMSNorm.forward`, `layer_norm_kernel_module.rms_norm_forward`, or something else) **depends entirely on how the kernel creator structured the repository on the Hub.** You may need to inspect the loaded `layer_norm_kernel_module` object (e.g., using `dir()`) or check the kernel's documentation on the Hub to find the correct function/method and its signature. I've used `rms_norm_forward` as a plausible placeholder and added error handling.
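As a quick illustration of that `dir()` inspection (a sketch assuming the repo layout shown later in the draft, i.e. `kernels-community/triton-layer-norm` exposing a `layers` namespace):

~~~python
from kernels import get_kernel

# Load the layer-norm kernel and inspect what it actually exposes;
# the attribute names depend entirely on how the kernel author
# structured the repository on the Hub.
layer_norm_kernel_module = get_kernel("kernels-community/triton-layer-norm")

print(dir(layer_norm_kernel_module))          # top-level functions/objects
print(dir(layer_norm_kernel_module.layers))   # e.g. LlamaRMSNorm, if provided
~~~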
Would be nice if we can point to some kernel documentation (in the kernel's model card in the Hub) by the time this is published :) This could encourage others to adopt some common structure for kernel description / docs.
agreed! currently there is an effort to generate some useful docs, started here: huggingface/kernel-builder#89. However, this is still a work in progress and should be updated before publishing.
TODO
- improve docs across all existing examples (probably autogen)
hello-hf-kernels.md
Outdated
from snippet2 import BaselineModel
from snippet3 import KernelModel
We should introduce the script name before each snippet, I think.
good point, updated to use meaningful names and reference them in the scripts in the latest commit
from kernels import get_kernel

# Download optimized activation kernels from the Hub
# This fetches the kernel code if not already cached
activation_kernels = get_kernel("kernels-community/activation")
Super cool! Would something like this (different kernel) be automatically resolved? Do we want to talk (in a later section) about what happens if there's no match?
hello-hf-kernels.md
Outdated
### Benefits of the Kernel Hub:

* **Instant Access to Optimized Kernels**: Load and run kernels optimized for various hardware (like NVIDIA GPUs) without local compilation hassles.
* **Instant Access to Optimized Kernels**: Load and run kernels optimized for various hardware (like NVIDIA GPUs) without local compilation hassles.
* **Instant Access to Optimized Kernels**: Load and run kernels optimized for various hardware, starting with NVIDIA and AMD GPUs, without local compilation hassles.
thanks! updated in the latest commits
hello-hf-kernels.md
Outdated
~~~bash
pip install kernels torch numpy
~~~

Ensure you have a compatible PyTorch version and CUDA installed if using GPU kernels.
Can we make this hardware agnostic for AMD?
good catch, I've updated the phrasing to avoid "CUDA" in the latest commit
hello-hf-kernels.md
Outdated
## 1. What is the Kernel Hub?

The [Kernel Hub](https://huggingface.co/kernels) (👈 Check it out!) allows Python libraries and applications to **load optimized compute kernels directly from the Hugging Face Hub**. Think of it like the Model Hub, but for low-level, high-performance code snippets (kernels) that accelerate specific operations, often on GPUs. Examples include optimized attention mechanisms (like FlashAttention), activation functions, and normalization layers (like LayerNorm or RMSNorm).
I think it would be better to mention some challenging kernels here. Activation and normalization kernels are usually already pretty good in frameworks. Maybe attention mechanisms, quantizers, and Mixture of Experts layers?
good point, updated to include some more impactful/useful examples. thanks!
hello-hf-kernels.md
Outdated
# Ensure you have a CUDA-enabled device
if not torch.cuda.is_available():
    raise RuntimeError("This example requires a CUDA-enabled GPU")
Let me upload the `activation` kernel for ROCm as well. I think the example is stronger if we can show something that works with both CUDA and ROCm.
Built, running validation tests now...
All tests pass.
wooo amazing, thank you!
hello-hf-kernels.md
Outdated
if not torch.cuda.is_available():
    raise RuntimeError("This example requires a CUDA-enabled GPU")
I think the Triton kernel should also work with ROCm? Worth trying.
awesome, thanks for building/testing! removed the `torch.cuda` check in the latest commit
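(For what it's worth, a hardware-agnostic variant of the removed check is also possible, since ROCm builds of PyTorch expose the same `torch.cuda` API; just a sketch, not what the PR ended up with:)

~~~python
import torch

# ROCm builds of PyTorch report GPUs through torch.cuda as well,
# so this check works for both NVIDIA and AMD GPUs.
if not torch.cuda.is_available():
    raise RuntimeError("This example requires a GPU (NVIDIA or AMD)")

DEVICE = torch.device("cuda")
~~~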
hello-hf-kernels.md
Outdated
layer_norm_kernel_module = get_kernel("kernels-community/triton-layer-norm")


class KernelRMSNorm(layer_norm_kernel_module.layers.LlamaRMSNorm):
    def __init__(self, hidden_size, variance_epsilon=1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = variance_epsilon
We want people to use `@use_kernel_forward_from_hub` to annotate the Torch class and then register `LlamaRMSNorm` using a mapping. See: https://github.com/huggingface/kernels/blob/main/docs/layers.md
Using `@use_kernel_forward_from_hub` enables people to make layers that are (dynamically) extensible with kernels, people can replace kernels, etc.
ah yea great point! I've updated the code to prefer adding `@use_kernel_forward_from_hub("LlamaRMSNorm")` to the `RMSNorm` defined in the reference example (and added some descriptive comments).
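For readers following along, a rough sketch of the pattern from the linked layers.md (the mapping API names `LayerRepository` and `register_kernel_mapping` are taken from those docs and may evolve; the forward shown here is just a plain PyTorch fallback, not the PR's final code):

~~~python
import torch
import torch.nn as nn
from kernels import use_kernel_forward_from_hub, LayerRepository, register_kernel_mapping


# The decorator makes this layer's forward dynamically replaceable by a Hub
# kernel; the model code itself does not need to change.
@use_kernel_forward_from_hub("LlamaRMSNorm")
class RMSNorm(nn.Module):
    def __init__(self, hidden_size, variance_epsilon=1e-5):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = variance_epsilon

    def forward(self, hidden_states):
        # Plain PyTorch RMSNorm used as the fallback implementation
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return self.weight * hidden_states.to(self.weight.dtype)


# Externally map the layer name to a kernel repo (per docs/layers.md);
# the repo and layer name mirror the ones used in this draft.
register_kernel_mapping(
    {
        "LlamaRMSNorm": {
            "cuda": LayerRepository(
                repo_id="kernels-community/triton-layer-norm",
                layer_name="LlamaRMSNorm",
            ),
        },
    }
)
~~~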
hello-hf-kernels.md
Outdated
    ):
        super().__init__()
        self.linear1 = nn.Linear(input_size, hidden_size)
        self.norm = KernelRMSNorm(hidden_size, variance_epsilon=eps)
With `@use_kernel_forward_from_hub`, you don't need this. The model doesn't need any change to use kernels; the model writer or the user can map kernels externally.
this has been updated in the latest commit along with the larger change to prefer using the `use_kernel_forward_from_hub` decorator in the example. thanks!
This PR is an early draft for an introduction to the kernel hub.

TODO
- `kernel-builder` to showcase kernel creation/publishing to the hub