Releases: ENOT-AutoDL/onnx2torch

v1.5.15

07 Aug 14:06
369412a
feat: squeeze-21
feat: ArgMax and ArgMin

Thanks to @rosemanmeilof and @ajstarna for their contributions.

🍰

v1.5.14

02 Apr 11:03
cd98062
feat: isinf, isnan and nonzero operations
fix: reduce keepdim type (int -> bool)

Thanks to @karalinkw and @francis2tm for their contributions.

🍰

v1.5.13

27 Oct 14:16
204175a
fix: div return type for integer arguments
docs: add missing `import onnx` to the main example
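For reference, a minimal sketch of the conversion flow that the docs fix refers to; the model path and input shape are placeholders.

```python
import onnx
import torch
from onnx2torch import convert

# Load the ONNX graph and convert it to a torch.nn.Module.
onnx_model = onnx.load('model.onnx')  # placeholder path
torch_model = convert(onnx_model)

# convert() also accepts a path to the .onnx file directly.
out = torch_model(torch.randn(1, 3, 224, 224))  # illustrative input shape
```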

Thanks @seriousran for his contribution.

🍰

v1.5.12

09 Oct 21:25
4732c11
fix: export of ConstantOfShape to onnx when the value attribute is bool

🍰

v1.5.11

26 Jul 18:56
c6360a0
feat: DepthToSpace

Thanks @sushanthr for his contribution.

🍰

v1.5.10

07 Jul 11:06
eb86d59
chore: fix broken build (missing submodules)

🍰

v1.5.9

07 Jul 10:19
77814e4
feat: LayerNormalization

🍰

Thanks @JohnMasoner for his contribution.

v1.5.8

01 Jun 16:20
bd1bf5f
fix: export of the Range op to onnx (keep inputs dynamic)
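A sketch of the round trip this fix concerns: convert an ONNX model containing a Range op to torch, then export it back to ONNX with a dynamic batch dimension. The model path and input shape are assumptions.

```python
import torch
from onnx2torch import convert

torch_model = convert('model_with_range.onnx')  # hypothetical model containing a Range op
dummy_input = torch.randn(1, 3, 224, 224)       # illustrative input shape

# Export back to ONNX, keeping the batch dimension dynamic.
torch.onnx.export(
    torch_model,
    dummy_input,
    'reexported.onnx',
    input_names=['input'],
    output_names=['output'],
    dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}},
)
```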

🍰

v1.5.7

26 May 18:23
2ed856f
feat: Mod
feat: EyeLike
feat: human-readable onnx node naming

fix: swapped Gemm weight dimensions

🍰

Thanks @JohnMasoner and @jan-haug for their contributions.

v1.5.6

13 Feb 05:35
aeaf7a1
feat: InstanceNormalization
feat: LeakyRelu opset 16
feat: RoiAlign for opset 16
feat: ScatterND for opset 16 (reduction none only)
feat: GatherND (only for parameter batch_dims=0)

fix: lambda not found error when using torch.save
fix: upcast indices to int64
fix: GEMM dim typo (no parameters for nn.Linear)

docs: workaround for old opset versions
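One common workaround is to upgrade the model's opset with onnx.version_converter before converting; a sketch under that assumption, with the file name and target opset chosen for illustration.

```python
import onnx
from onnx import version_converter
from onnx2torch import convert

old_model = onnx.load('old_opset_model.onnx')                 # placeholder path
upgraded = version_converter.convert_version(old_model, 13)   # illustrative target opset

torch_model = convert(upgraded)
```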

🍰

Thanks @SuperSecureHuman and @JohnMasoner for their contributions.