CHANGELOG.md (38 additions, 20 deletions)

Documentation for Composable Kernel available at [https://rocm.docs.amd.com/projects/composable_kernel/en/latest/](https://rocm.docs.amd.com/projects/composable_kernel/en/latest/).

## (Unreleased) Composable Kernel for ROCm

### Added

* Added a compute async pipeline in the CK TILE universal GEMM on gfx950.
* Added support for the B tensor type `pk_int4_t` in the CK TILE weight preshuffle GEMM.
* Added a new API to load different memory sizes to SGPR.
* Added support for B tensor preshuffle in CK TILE Grouped GEMM.
* Added a basic copy kernel example and supporting documentation for new CK Tile developers.
* Added support for grouped_gemm kernels to perform multi_d elementwise operations.
* Added support for multiple ABD GEMM.
* Added benchmarking support for tile engine GEMM Multi D.
* Added block scaling support in CK_TILE GEMM, allowing flexible use of quantization matrices from either the A or B operand.
* Added row-wise and column-wise quantization for CK_TILE GEMM and CK_TILE Grouped GEMM (see the sketch after this list).
* Added f32 support to FMHA (fwd/bwd).
* Added tensor-wise quantization for CK_TILE GEMM.
* Added support for the batched contraction kernel.
* Added a pooling kernel in CK_TILE.
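
Several entries above concern quantized CK_TILE GEMM. As a rough, non-authoritative illustration of what row-wise and column-wise scales mean, the following plain C++ reference (not the CK_TILE API; all names here are invented for the sketch) applies one dequantization scale per row of an int8 A operand and one per column of an int8 B operand, after integer accumulation:

```cpp
// Conceptual sketch only, not CK_TILE code: int8 GEMM with a per-row scale for A
// (row-wise quantization) and a per-column scale for B (column-wise quantization).
#include <cstdint>
#include <vector>

void quantized_gemm_reference(int M, int N, int K,
                              const std::vector<int8_t>& A,        // M x K, row-major
                              const std::vector<int8_t>& B,        // K x N, row-major
                              const std::vector<float>& row_scale, // one scale per row of A (size M)
                              const std::vector<float>& col_scale, // one scale per column of B (size N)
                              std::vector<float>& C)               // M x N, row-major
{
    for(int m = 0; m < M; ++m)
    {
        for(int n = 0; n < N; ++n)
        {
            int32_t acc = 0; // accumulate in int32, dequantize once per output element
            for(int k = 0; k < K; ++k)
                acc += int32_t(A[m * K + k]) * int32_t(B[k * N + n]);
            C[m * N + n] = row_scale[m] * col_scale[n] * float(acc);
        }
    }
}
```

Roughly speaking, tensor-wise quantization is the limiting case of a single scalar scale per operand, while block scaling sits in between, with one scale per tile of A or B.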

### Changed

* Removed `BlockSize` in `make_kernel` and `CShuffleEpilogueProblem` to support Wave32 in CK_TILE (#2594).

## Composable Kernel 1.1.0 for ROCm 7.1.0

### Added

* Added support for hdim as a multiple of 32 for FMHA (fwd/fwd_splitkv/bwd).
* Added support for the elementwise kernel.

### Upcoming changes

* Non-grouped convolutions are deprecated. Their functionality is supported by grouped convolution (see the reference sketch after this section).
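
The deprecation above rests on the fact that an ordinary convolution is a grouped convolution with a group count of 1. A minimal forward-convolution reference in plain C++ (stride 1, no padding, a G-outermost layout picked for readability; this is not the CK instance API) makes the equivalence concrete:

```cpp
// Conceptual sketch only, not CK code: grouped convolution forward reference.
// Layout (chosen for readability): input [G][N][C][Hi][Wi], weights [G][K][C][Y][X],
// output [G][N][K][Ho][Wo]; stride 1, no padding, no dilation.
#include <vector>

void grouped_conv_fwd_reference(int G, int N, int K, int C,
                                int Hi, int Wi, int Y, int X,
                                const std::vector<float>& in,
                                const std::vector<float>& wei,
                                std::vector<float>& out)
{
    const int Ho = Hi - Y + 1;
    const int Wo = Wi - X + 1;
    auto idx_in  = [&](int g, int n, int c, int h, int w) { return (((g * N + n) * C + c) * Hi + h) * Wi + w; };
    auto idx_wei = [&](int g, int k, int c, int y, int x) { return (((g * K + k) * C + c) * Y + y) * X + x; };
    auto idx_out = [&](int g, int n, int k, int h, int w) { return (((g * N + n) * K + k) * Ho + h) * Wo + w; };

    for(int g = 0; g < G; ++g)
        for(int n = 0; n < N; ++n)
            for(int k = 0; k < K; ++k)
                for(int ho = 0; ho < Ho; ++ho)
                    for(int wo = 0; wo < Wo; ++wo)
                    {
                        float acc = 0.f;
                        for(int c = 0; c < C; ++c)
                            for(int y = 0; y < Y; ++y)
                                for(int x = 0; x < X; ++x)
                                    acc += in[idx_in(g, n, c, ho + y, wo + x)] * wei[idx_wei(g, k, c, y, x)];
                        out[idx_out(g, n, k, ho, wo)] = acc; // G == 1 gives the ordinary convolution
                    }
}
```

Calling the reference with `G = 1` reproduces the non-grouped case, which is why the grouped convolution instances can take over from the deprecated ones.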

## Composable Kernel 1.1.0 for ROCm 7.0.0

### Added

[...]

* Added support for Split K for grouped convolution backward data.
* Added logit soft-capping support for fMHA forward kernels.
* Added support for hdim as a multiple of 32 for FMHA (fwd/fwd_splitkv).
* Added benchmarking support for tile engine GEMM.
* Added ping-pong scheduler support for the GEMM operation along the K dimension.
* Added the rotating buffer feature for CK_Tile GEMM.
* Added int8 support for CK_TILE GEMM.

### Optimized

* Optimized the gemm multiply multiply preshuffle and LDS bypass with KGroup packing and a better instruction layout (#2166).
* Added the vectorized transpose optimization for CK Tile (#2131).
* Added asynchronous copy for gfx950 (#2425).

### Changed

* Removed support for gfx940 and gfx941 targets (#1944).
* Replaced the raw buffer load/store intrinsics with Clang20 built-ins (#1876).

[...]

* Number of instances in the instance factory for grouped convolution backward weight NGCHW/GKYXC/NGKHW has been reduced.
* Number of instances in the instance factory for grouped convolution backward data NGCHW/GKYXC/NGKHW has been reduced.