Commit 98ee7fe

newling authored and ramiro050 committed
Update E2E links
1 parent d082310 commit 98ee7fe

File tree

1 file changed (+9 -11 lines)

docs/Torch-ops-E2E-implementation.md

Lines changed: 9 additions & 11 deletions
### Step 1. Add an end-to-end test to iterate on

Add an end-to-end test to the [end-to-end test suite](https://github.com/llvm/torch-mlir/blob/main/docs/adding_an_e2e_test.md). Ideally there is an existing file that your op fits into. If not, you can create a new file.

We generally recommend testing by invoking `torch.ops.aten.someop` from Python -- that gives a very precise test for the individual Torch operator you are implementing (calling `torch.ops.aten.someop` from Python always lowers into the MLIR `torch.aten.someop` operation).

The end-to-end test is important to check the correctness of the other steps.
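At its core, the test harness computes golden outputs with eager PyTorch and checks the compiled module's outputs against them within a tolerance. A minimal plain-Python sketch of that style of comparison (the `almost_equal` helper and tolerances are illustrative, not torch-mlir's API; the real suite operates on tensors):

```python
# Sketch: the kind of numeric comparison an e2e harness performs.
# `almost_equal` and the tolerances are illustrative, not torch-mlir's API.

def almost_equal(golden, observed, rtol=1e-5, atol=1e-8):
    """Elementwise closeness check, analogous in spirit to torch.allclose."""
    if len(golden) != len(observed):
        return False
    return all(abs(g - o) <= atol + rtol * abs(g)
               for g, o in zip(golden, observed))

# Golden values from the reference (eager) implementation...
golden = [0.0, 1.0, 2.5]
# ...compared against what the compiled module produced.
observed = [0.0, 1.0000001, 2.4999999]
assert almost_equal(golden, observed)
assert not almost_equal(golden, [0.0, 1.1, 2.5])
```

A correctness bug in any later step (ods, type inference, lowering) typically surfaces here first as a mismatch against the golden values.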

### Step 2. Update ods
Update [torch_ods_gen.py](https://github.com/llvm/torch-mlir/blob/main/projects/pt1/python/torch_mlir/dialects/torch/importer/jit_ir/build_tools/torch_ods_gen.py) with the new op and run [update_torch_ods.sh](https://github.com/llvm/torch-mlir/blob/main/build_tools/update_torch_ods.sh) to generate the ods. Running `update_torch_ods.sh` dumps all the operators and their schemas into `JITOperatorRegistryDump.txt`, which is a convenient place to look up op signatures and operand names.
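Entries in `torch_ods_gen.py` are keyed by the operator's registered schema string, e.g. `aten::mm : (Tensor, Tensor) -> (Tensor)`. As a rough illustration of how such a signature string breaks down (the parsing helper below is hypothetical, written only to show the format you will see in `JITOperatorRegistryDump.txt`):

```python
# Illustrative only: split a torch-mlir-style op signature string
# ("ns::opname : (operand types) -> (result types)") into its parts.

def parse_signature(sig):
    name, _, types = sig.partition(" : ")
    operands, _, results = types.partition(" -> ")
    return {
        "name": name,
        "operands": [t.strip() for t in operands.strip("()").split(",") if t.strip()],
        "results": [t.strip() for t in results.strip("()").split(",") if t.strip()],
    }

info = parse_signature("aten::mm : (Tensor, Tensor) -> (Tensor)")
assert info["name"] == "aten::mm"
assert info["operands"] == ["Tensor", "Tensor"]
assert info["results"] == ["Tensor"]
```

Matching this schema string exactly is what ties your new ods entry to the registered JIT operator.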

### Step 3. Propagate types

It’s essential to make sure the new op implements shape and dtype inference. See [abstract_interp_lib](https://github.com/llvm/torch-mlir/blob/main/docs/abstract_interp_lib.md) for information on adding shape and dtype inference.
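Shape functions in the abstract interpretation library are written as plain Python over `List[int]` shapes. As a hedged sketch (the `broadcast_shapes` helper is written here for illustration; the real library ships its own upstream helpers), an elementwise binary op's shape function is essentially right-aligned broadcasting:

```python
# Sketch of shape inference in the abstract-interp-library style:
# pure Python over List[int] shapes. `broadcast_shapes` is illustrative;
# the real library provides upstream helpers for this.

def broadcast_shapes(a, b):
    """Right-aligned broadcasting, as in torch/numpy."""
    result = []
    for i in range(max(len(a), len(b))):
        da = a[-1 - i] if i < len(a) else 1
        db = b[-1 - i] if i < len(b) else 1
        if da != 1 and db != 1 and da != db:
            raise ValueError(f"incompatible dims {da} and {db}")
        result.append(max(da, db))
    return list(reversed(result))

# An elementwise binary op's shape function just broadcasts its inputs:
assert broadcast_shapes([3, 1, 5], [4, 5]) == [3, 4, 5]
assert broadcast_shapes([2, 3], []) == [2, 3]
```

Keeping these functions in plain Python over ints is what lets the library test them directly against PyTorch's own shape behavior.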

### Step 4. Torch ops lowering

#### Decompose

If your op can be decomposed into other supported ops, then you can add a pattern into [DecomposeComplexOps](https://github.com/llvm/torch-mlir/blob/8d3ca887df5ac5126fa3fc2ec3546c6322a4d066/lib/Dialect/Torch/Transforms/DecomposeComplexOps.cpp#L1).

You can find example PRs [here](https://github.com/llvm/torch-mlir/pull/2550) and [here](https://github.com/llvm/torch-mlir/pull/2553).
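The rewrite itself is a C++ pattern, but the algebraic identity it encodes is worth sanity-checking in plain Python first. Using variance purely as a hypothetical example (decomposing `var(x)` into mean/sub/mul/mean of already-supported ops):

```python
import math

# Sanity-check the identity a hypothetical variance decomposition would
# encode: var(x) = mean((x - mean(x))^2), equivalently E[x^2] - E[x]^2.
# The actual rewrite would be a C++ pattern in DecomposeComplexOps.cpp.

def mean(xs):
    return sum(xs) / len(xs)

def var_decomposed(xs):
    m = mean(xs)
    return mean([(x - m) ** 2 for x in xs])

def var_alternative(xs):
    return mean([x * x for x in xs]) - mean(xs) ** 2

xs = [1.0, 2.0, 4.0, 8.0]
assert math.isclose(var_decomposed(xs), var_alternative(xs), rel_tol=1e-9)
```

Once the identity holds numerically, the end-to-end test from Step 1 verifies that the in-tree pattern reproduces it.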

#### Lower to Linalg

The `Torch` dialect needs to be lowered to [Linalg](https://mlir.llvm.org/docs/Dialects/Linalg/).

You can find an [example PR here](https://github.com/llvm/torch-mlir/pull/294).
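Conceptually, an elementwise `linalg.generic` is a perfectly nested loop over the iteration space that applies a scalar payload per element. A plain-Python analogy (illustrative only; a real lowering builds MLIR ops, not loops):

```python
# Illustrative analogy for an elementwise linalg.generic lowering:
# iterate over the whole index space and apply a scalar payload per
# element, the way the generated loop nest would after bufferization.

def elementwise_generic(payload, a, b):
    """Apply `payload` elementwise over two same-shaped 2-D operands."""
    rows, cols = len(a), len(a[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):          # one loop per parallel dimension
        for j in range(cols):
            out[i][j] = payload(a[i][j], b[i][j])
    return out

result = elementwise_generic(lambda x, y: x + y,
                             [[1.0, 2.0], [3.0, 4.0]],
                             [[10.0, 20.0], [30.0, 40.0]])
assert result == [[11.0, 22.0], [33.0, 44.0]]
```

The scalar payload corresponds to the region body of the `linalg.generic`, and the loop bounds come from the (identity) indexing maps over the output shape.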

## Delivering Code
1. The codebase follows [LLVM’s coding conventions](https://llvm.org/docs/CodingStandards.html). The following items might be the most frequently used rules:
- [use-early-exits-and-continue-to-simplify-code](https://llvm.org/docs/CodingStandards.html#use-early-exits-and-continue-to-simplify-code)
- [don-t-use-else-after-a-return](https://llvm.org/docs/CodingStandards.html#don-t-use-else-after-a-return)
- [use-auto-type-deduction-to-make-code-more-readable](https://llvm.org/docs/CodingStandards.html#use-auto-type-deduction-to-make-code-more-readable)
2. Try to refactor and reuse existing code/helpers when working on RefineTypes and TorchToLinalg lowering for easier maintenance, testing and better readability. Try not to copy & paste existing code.
3. Squash all the commits into one, including the commits addressing review comments.
4. Use `git clang-format HEAD~1` to automatically format your commit.
5. Rebase on `HEAD` before delivering.
