
Add an HLO backend for LLM models #775

Open · wants to merge 9 commits into main
Conversation

@dacorvo (Collaborator) commented Feb 4, 2025

What does this PR do?

This backend is a backport of features first implemented in the transformers-neuronx package from the
AWS Neuron SDK.

Like the original transformers-neuronx implementation, it relies on XLA High Level Operations (HLO)
as the compiled language for implementing Neuron-optimized transformer decoder classes.
More specifically, it uses a syntax called “PyHLO”, the name of a Neuron-internal tool for writing/compiling the HLO language in Python.
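
As an illustration, here is a minimal sketch of what PyHLO-style code looks like. It follows the conventions of the transformers-neuronx package (a `scribe` object exposing dtypes, and `dtype[shape].<Op>(...)` constructors that emit HLO instructions); the exact API in this backport may differ, so treat the names below as assumptions rather than the backend's actual surface:

```python
# Minimal PyHLO-style sketch, assuming transformers-neuronx conventions.
# The scribe object exposes HLO dtypes; dtype[shape].<Op>(...) emits an HLO
# instruction with the given output shape. All names here are illustrative.
def matmul_add_bias(scribe):
    f32 = scribe.f32
    # Declare the function inputs as HLO parameters.
    x = f32[8, 16].Parameter(parameter_number=0)
    w = f32[16, 4].Parameter(parameter_number=1)
    b = f32[4].Parameter(parameter_number=2)
    # y = x @ w, contracting the last dim of x with the first dim of w.
    dot_dims = dict(lhs_contracting_dimensions=[1],
                    rhs_contracting_dimensions=[0])
    y = f32[8, 4].Dot(x, w, dot_dimension_numbers=dot_dims)
    # Broadcast the bias over the rows and add it element-wise.
    b2d = f32[8, 4].Broadcast(b, dimensions=[1])
    return f32[8, 4].Add(y, b2d)
```

In transformers-neuronx, a Python function written in this style is handed to a compile helper that lowers it to an HLO module for the Neuron compiler; this backend follows the same pattern.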

See backends/hlo/README.md for details.

The Llama, Granite and Qwen2 models, which previously relied on transformers-neuronx, now use this new backend directly.

Commit notes:

- This backend is a backport of features previously implemented in the AWS Neuron SDK transformers-neuronx package.
- When using the new HLO backend, the graphs will be slightly modified, so we bump the dev version to avoid trying to reuse the cached test artifacts from the previous dev version.
- The name of the class is confusing, as there are already NeuronConfig classes in the AWS Neuron SDK (both in NxDI and TnX).
@dacorvo marked this pull request as ready for review February 5, 2025 11:10
Commit note: These tests are taking some time, so it is better to have them separated.
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@tengomucho (Collaborator) left a comment


I only had an overview of the backend implementation; the rest LGTM.

```diff
@@ -724,7 +730,7 @@ def main():
         submodels = None
     else:
         input_shapes, neuron_config_class = get_input_shapes_and_config_class(task, args)
-        if NeuronDecoderConfig in inspect.getmro(neuron_config_class):
+        if NeuronDecoderExportConfig in inspect.getmro(neuron_config_class):
```

If `is_transformers_neuronx_available()` returns False, then `NeuronDecoderExportConfig` will not be defined.
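
One possible shape of the fix, sketched under the assumption that the `is_transformers_neuronx_available()` helper named in the comment exists and that `NeuronDecoderExportConfig` and `neuron_config_class` are the symbols from the diff above (this is illustrative, not the actual change):

```python
import inspect

def uses_decoder_export_config(neuron_config_class) -> bool:
    # Illustrative sketch only: bail out early when transformers-neuronx is
    # missing, since NeuronDecoderExportConfig is not defined in that case.
    # is_transformers_neuronx_available and NeuronDecoderExportConfig are the
    # names used in the review comment and diff above, assumed to be in scope.
    if not is_transformers_neuronx_available():
        return False
    return NeuronDecoderExportConfig in inspect.getmro(neuron_config_class)
```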

The HF_ENDPOINT variable is not always taken into account when using the
huggingface_hub client, depending on the order of imports.
This modifies the tests to create temporary directories under the testing
user account instead.
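
For context, a sketch of the pitfall and the workaround described above, assuming only that huggingface_hub resolves HF_ENDPOINT at import time (the endpoint URL and cache path below are illustrative, not the values used by the tests):

```python
import os
import tempfile
from pathlib import Path

# huggingface_hub resolves HF_ENDPOINT when its constants module is first
# imported, so the variable must already be set before the first import;
# setting it afterwards has no effect on an already-imported client.
os.environ["HF_ENDPOINT"] = "https://hub-ci.huggingface.co"  # illustrative
import huggingface_hub  # noqa: E402  (deliberately after the env var)

# Workaround in the spirit of this commit: create temporary directories under
# the testing user account instead of relying on the shared system tmpdir.
scratch = Path.home() / ".cache" / "optimum-neuron-tests"  # illustrative path
scratch.mkdir(parents=True, exist_ok=True)
with tempfile.TemporaryDirectory(dir=scratch) as tmpdir:
    print("temporary test directory:", tmpdir)
```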