
Hidet v0.1

Released by @yaoyaoding on 06 Jan 02:57 · commit 001c438

This is the first public release of Hidet.

For usage instructions, please visit https://docs.hidet.org

What's Changed

  • [Docs] Update documentation by @yaoyaoding in #2
  • [Operator] Add leaky_relu and conv2d_transpose operator by @yaoyaoding in #3
  • [Doc] Add doc on how to define operator computation by @yaoyaoding in #4
  • [Bug] Fix bugs in reshape and conv2d_transpose by @yaoyaoding in #5
  • [Option] Add option module by @yaoyaoding in #6
  • [Docs] Add documentation on how to add new operators by @yaoyaoding in #7
  • [Operator] Add PRelu op by @hjjq in #8
  • [Docs] Add documentation for operator cache & fix a typo by @yaoyaoding in #9
  • [Operator] Add Abs and And operator by @hjjq in #10
  • [CI] Update github workflow by @yaoyaoding in #11
  • [CI] Update docs workflow, do not delete remote dest dir by @yaoyaoding in #12
  • [Operator] Add conv2d_transpose_gemm operator & fix a bug by @yaoyaoding in #13
  • [Runtime] Force use of GPU tensor buffer in CUDA graph by @yaoyaoding in #14
  • [Functor] Fix a bug in IR functor by @yaoyaoding in #15
  • [Graph] Force users to give an input order when multiple symbolic inputs are found in traced graph by @yaoyaoding in #17
  • [Operator] Add BitShift, Bitwise*, Ceil Operators by @hjjq in #19
  • [IR] Refactor scalar type system by @yaoyaoding in #18
  • [IR] Refactoring math functions by @yaoyaoding in #20
  • [Operator] Fix a bug when resolving matmul to batch_matmul by @yaoyaoding in #21
  • [Operator] Add cubic interpolation to Resize Operator by @hjjq in #22
  • [Packfunc] Refactor packed func & add vector type by @yaoyaoding in #23
  • [Pass] Add lower_special_cast pass and refactor resolve rule registration by @yaoyaoding in #24
  • [Docs] Change github repo url by @yaoyaoding in #25
  • [Operator] Add float16 precision matrix multiplication by @yaoyaoding in #26
  • [Docs] Add a guide on operator resolving by @yaoyaoding in #27
  • [CI] Avoid interactive query in apt installation of tzdata by @yaoyaoding in #28
  • [Docs] Add sub-graph rewrite tutorial by @yaoyaoding in #29
  • [Tensor] Implement dlpack tensor exchange protocol by @yaoyaoding in #30
  • [Frontend] Add a torch dynamo backend based on hidet "onnx2hidet" by @yaoyaoding in #31
  • [Frontend] Add hidet dynamo backend based on torch.fx by @yaoyaoding in #32
  • [Frontend] Make onnx dependency optional by @yaoyaoding in #33
  • [Frontend] Add more operator mappings for pytorch frontend by @yaoyaoding in #34
  • [Operator] Fix a bug in take (index can be in [-r, r-1]) by @yaoyaoding in #35
  • [Frontend] Add an option to print correctness report in hidet backend of torch dynamo by @yaoyaoding in #36
  • [IR] Refactor the attribute 'dtype' of hidet.Tensor from 'str' to 'DataType' by @yaoyaoding in #37
  • [Operator] Add a constant operator and deprecate the manually implemented fill CUDA kernel by @yaoyaoding in #38
  • [ONNX] Add reduce l2 onnx operator by @yaoyaoding in #40
  • [CLI] Add the 'hidet' command line interface by @yaoyaoding in #39
  • [Codegen] Add explicit conversion type for float16 by @yaoyaoding in #41
  • [Docs] Add the documentation for 'hidet' backend of PyTorch dynamo by @yaoyaoding in #42
  • [Runtime] Refactor the cuda runtime api used in hidet by @yaoyaoding in #43
  • [Testing] Remove redundant models in hidet.testing by @yaoyaoding in #44
  • [Runtime][IR] Refactor the device attribute of Tensor object by @yaoyaoding in #45
  • [Array-API][Phase 0] Adding the declarations of missing operators in Array API by @yaoyaoding in #46
  • [Operator] Add arange and linspace operator by @yaoyaoding in #47
  • [Bug] Fix a bug related to memset by @yaoyaoding in #49
  • [Docs] Add and update documentation by @yaoyaoding in #48
  • [Docs][Operator] Add more pytorch operator bindings and docs by @yaoyaoding in #50
  • [License][Docs] Add license header and update README.md by @yaoyaoding in #51
  • [Docs] Update docs by @yaoyaoding in #52
  • [IR] Add LaunchKernelStmt by @yaoyaoding in #53
  • [Operator] Add some torch operator mapping by @yaoyaoding in #54
  • [Bug] Fix a bug in hidet dynamo backend when cuda graph is not used by @yaoyaoding in #55
  • [Dynamo] Allow torch dynamo backend to accept non-contiguous input by @yaoyaoding in #56
  • [Graph] Add to_cuda() for Module class by @hjjq in #57
  • [Bug] Fix a bug where the shared memory becomes zero in LaunchKernelStmt by @yaoyaoding in #58
  • [Release] Prepare to release the first version of hidet to public by @yaoyaoding in #59
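
For readers unfamiliar with the elementwise activations added above (leaky_relu in #3, PRelu in #8), the following is a plain-Python sketch of their standard reference semantics. This is illustrative only: the function names and list-based signatures here are not Hidet's actual API, and Hidet's real kernels operate on tensors.

```python
def leaky_relu(xs, negative_slope=0.01):
    # leaky_relu(x) = x if x >= 0, else negative_slope * x
    return [x if x >= 0.0 else negative_slope * x for x in xs]

def prelu(xs, slopes):
    # PRelu generalizes leaky_relu: the negative slope is a learned
    # per-element parameter rather than a fixed constant.
    return [x if x >= 0.0 else s * x for x, s in zip(xs, slopes)]

print(leaky_relu([-2.0, 0.0, 3.0]))    # [-0.02, 0.0, 3.0]
print(prelu([-2.0, 3.0], [0.5, 0.5]))  # [-1.0, 3.0]
```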

New Contributors

  • @hjjq made their first contribution in #8

Full Changelog: https://github.com/hidet-org/hidet/commits/v0.1