From 0ba40d0ed5b176723b17122ac964c688cfb71a8b Mon Sep 17 00:00:00 2001
From: lucylq
Date: Wed, 9 Apr 2025 15:06:15 -0700
Subject: [PATCH] Link executorch-examples

---
 docs/source/using-executorch-cpp.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/source/using-executorch-cpp.md b/docs/source/using-executorch-cpp.md
index 4f8a83830e0..4385f494cb6 100644
--- a/docs/source/using-executorch-cpp.md
+++ b/docs/source/using-executorch-cpp.md
@@ -32,6 +32,8 @@ if (result.ok()) {
 
 For more information on the Module class, see [Running an ExecuTorch Model Using the Module Extension in C++](extension-module.md). For information on high-level tensor APIs, see [Managing Tensor Memory in C++](extension-tensor.md).
 
+For complete examples of building and running a C++ application using the Module API, refer to our [examples GitHub repository](https://github.com/pytorch-labs/executorch-examples/tree/main/mv2/cpp).
+
 ## Low-Level APIs
 
 Running a model using the low-level runtime APIs allows for a high-degree of control over memory allocation, placement, and loading. This allows for advanced use cases, such as placing allocations in specific memory banks or loading a model without a file system. For an end to end example using the low-level runtime APIs, see [Running an ExecuTorch Model in C++ Tutorial](running-a-model-cpp-tutorial.md).