cuda export supported #14478
Conversation
Helpful links: see artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14478
Note: Links to docs will display an error until the docs builds have been completed.
❌ 7 New Failures, 5 Pending, 1 Unrelated Failure as of commit 3779980 with merge base b3f3111.
NEW FAILURES - The following jobs have failed.
BROKEN TRUNK - The following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@Gasoonjia has exported this pull request. If you are a Meta employee, you can view the originating diff in D82987410.
This PR needs a
Summary: This diff introduces the CUDA backend, which compiles the partitioned model graph to run on CUDA devices. It uses the AOTInductor compiler to generate optimized, libtorch-free CUDA kernels for the model's operators. The compiled model can be executed on CUDA devices using the ExecuTorch runtime. Reviewed By: larryliu0820. Differential Revision: D82987410
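The flow the summary describes (partition the graph, compile the partition ahead of time into a native artifact, then execute it through a runtime) can be sketched with a torch-free mock. All names below (`CompiledBlob`, `compile_partition`, `Runtime`) are illustrative stand-ins, not the real ExecuTorch or AOTInductor API:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CompiledBlob:
    """Stand-in for the .so that AOTInductor would emit."""
    entry: Callable[[List[float]], List[float]]


def compile_partition(ops: List[Callable[[float], float]]) -> CompiledBlob:
    # Real flow: AOTInductor lowers the partitioned graph to CUDA kernels
    # packaged in a shared library. Here we just fuse the ops into one
    # Python callable to show the ahead-of-time shape of the pipeline.
    def entry(xs: List[float]) -> List[float]:
        out = xs
        for op in ops:
            out = [op(v) for v in out]
        return out
    return CompiledBlob(entry=entry)


class Runtime:
    """Stand-in for the ExecuTorch runtime that loads and calls the blob."""
    def __init__(self, blob: CompiledBlob) -> None:
        self.blob = blob

    def execute(self, xs: List[float]) -> List[float]:
        return self.blob.entry(xs)


# "Model": scale by 2, then add 1, expressed as a partition of two ops.
blob = compile_partition([lambda v: v * 2.0, lambda v: v + 1.0])
rt = Runtime(blob)
print(rt.execute([1.0, 2.0]))  # [3.0, 5.0]
```

The point of the split is that compilation happens once at export time, and the runtime only loads and calls the precompiled entry point.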
Force-pushed from e690b0a to 26299fe
return PreprocessResult(
    processed_bytes=b"",
    debug_handle_map={},
    data_store_output=named_data_store.get_named_data_store_output(),
)
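The snippet above returns empty `processed_bytes` and routes the compiled artifact through the named data store instead. A minimal torch-free mock of that shape (`NamedDataStore` and `PreprocessResult` here are simplified stand-ins for the ExecuTorch types, not their real definitions):

```python
from dataclasses import dataclass, field
from typing import Dict


class NamedDataStore:
    """Simplified stand-in for ExecuTorch's named data store."""
    def __init__(self) -> None:
        self._blobs: Dict[str, bytes] = {}

    def add_named_data(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get_named_data_store_output(self) -> Dict[str, bytes]:
        return dict(self._blobs)


@dataclass
class PreprocessResult:
    """Simplified stand-in for a backend preprocess result."""
    processed_bytes: bytes
    debug_handle_map: Dict[int, int]
    data_store_output: Dict[str, bytes] = field(default_factory=dict)


store = NamedDataStore()
store.add_named_data("model.so", b"\x7fELF-placeholder")  # compiled artifact bytes

result = PreprocessResult(
    processed_bytes=b"",  # payload travels via the data store,
    debug_handle_map={},  # not via processed_bytes
    data_store_output=store.get_named_data_store_output(),
)
print(sorted(result.data_store_output))  # ['model.so']
```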
Why are you putting this in the named_data_store since the .so is not actually shareable? Just legacy from when we were going to share with nativeRT?
We just want to make sure that in ET we are using the correct pipeline.
In the future we need to find a way to load the .so directly from the .ptd file, which benefits both ET loading efficiency and other partners like nativeRT.
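The reply above points at loading the .so directly from the .ptd payload. The mechanics can be sketched as extracting the embedded blob to disk, which is where a loader would pick it up; the function name below is hypothetical, and we stop short of `ctypes.CDLL(path)` because the placeholder bytes are not a real shared object:

```python
import os
import tempfile


def extract_shared_library(blob: bytes, name: str = "model.so") -> str:
    """Write an embedded shared-library blob to disk so it can be dlopen'd.

    A real runtime would follow this with ctypes.CDLL(path) (or the native
    equivalent); here the blob is a placeholder, so we only materialize it.
    """
    tmpdir = tempfile.mkdtemp(prefix="et_cuda_")
    path = os.path.join(tmpdir, name)
    with open(path, "wb") as f:
        f.write(blob)
    return path


path = extract_shared_library(b"\x7fELF-placeholder")
print(os.path.getsize(path) > 0)  # True
```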
Force-pushed from 26299fe to 2a9e51d
Force-pushed from 2a9e51d to 16cf09f
Force-pushed from 16cf09f to 2ff245a
Force-pushed from 2ff245a to 420593e
Force-pushed from 420593e to 2834b2c
Force-pushed from 2834b2c to caae471
Force-pushed from caae471 to 243cae8
Force-pushed from 243cae8 to a3da839
Force-pushed from a3da839 to 3779980
Summary: Pull Request resolved: pytorch#14478. This diff introduces the CUDA backend, which compiles the partitioned model graph to run on CUDA devices. It uses the AOTInductor compiler to generate optimized, libtorch-free CUDA kernels for the model's operators. The compiled model can be executed on CUDA devices using the ExecuTorch runtime.
Reviewed By: angelayi, larryliu0820
Differential Revision: D82987410