This package is the counterpart of Julia's `AbstractArray` interface, but for GPU array
types: it provides functionality and tooling to speed up development of new GPU array types.

**This package is not intended for end users!** Instead, you should use one of the packages
that build on GPUArrays.jl, such as [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl), [oneAPI.jl](https://github.com/JuliaGPU/oneAPI.jl), [AMDGPU.jl](https://github.com/JuliaGPU/AMDGPU.jl), or [Metal.jl](https://github.com/JuliaGPU/Metal.jl).

## Interface methods

To support a new GPU backend, you will need to implement various interface methods for your backend's array types.
Some CPU-based examples can be seen in the testing library `JLArrays`, located in the `lib` directory of this package.

### Dense array support

### Sparse array support (optional)

`GPUArrays.jl` provides **device-side** array types for `CSC`, `CSR`, `COO`, and `BSR` matrices, as well as sparse vectors.
It also provides abstract types for these layouts that you can subtype to benefit from the
backend-agnostic wrappers. In particular, `GPUArrays.jl` provides out-of-the-box support for broadcasting and `mapreduce` over
GPU sparse arrays.

For **host-side** types, your custom sparse types should implement:

- `dense_array_type` - the corresponding dense array type. For example, for a `CuSparseVector` or `CuSparseMatrixCXX`, the `dense_array_type` is `CuArray`.
- `sparse_array_type` - the **untyped** sparse array type corresponding to a given parametrized type. A `CuSparseVector{Tv, Ti}` would have a `sparse_array_type` of `CuSparseVector` -- note the lack of type parameters!
- `csc_type(::Type{T})` - the compressed sparse column type for your backend. A `CuSparseMatrixCSR` would have a `csc_type` of `CuSparseMatrixCSC`.
- `csr_type(::Type{T})` - the compressed sparse row type for your backend. A `CuSparseMatrixCSC` would have a `csr_type` of `CuSparseMatrixCSR`.
- `coo_type(::Type{T})` - the coordinate sparse matrix type for your backend. A `CuSparseMatrixCSC` would have a `coo_type` of `CuSparseMatrixCOO`.
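
As a rough sketch, these hooks might look as follows for a hypothetical backend. The `My*` types are stand-ins for your backend's actual array types, and the exact method signatures should be checked against GPUArrays.jl itself:

```julia
# Hypothetical backend types for illustration only: `MyArray` (dense),
# `MySparseMatrixCSC`, `MySparseMatrixCSR`, and `MySparseMatrixCOO`.
GPUArrays.dense_array_type(::Type{<:MySparseMatrixCSC}) = MyArray
GPUArrays.sparse_array_type(::Type{<:MySparseMatrixCSC}) = MySparseMatrixCSC  # no type parameters
GPUArrays.csc_type(::Type{<:MySparseMatrixCSR}) = MySparseMatrixCSC
GPUArrays.csr_type(::Type{<:MySparseMatrixCSC}) = MySparseMatrixCSR
GPUArrays.coo_type(::Type{<:MySparseMatrixCSC}) = MySparseMatrixCOO
```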

To use `SparseArrays.findnz`, your host-side type **must** implement `sortperm`. This can be done with scalar indexing, but will be very slow.

Additionally, you need to teach `GPUArrays.jl` how to translate your backend's specific types onto the device. `GPUArrays.jl` provides the device-side types:

- `GPUSparseDeviceVector`
- `GPUSparseDeviceMatrixCSC`
- `GPUSparseDeviceMatrixCSR`
- `GPUSparseDeviceMatrixBSR`
- `GPUSparseDeviceMatrixCOO`

You will need to create a method of `Adapt.adapt_structure` for each format your backend supports. **Note** that if your backend supports separate address spaces,
as CUDA and ROCm do, you need to provide a parameter to these device-side arrays to indicate in which address space the underlying pointers live. An example of adapting
an array to the device-side struct:

```julia
# Sketch: `MyDeviceVector{T, A}` is a hypothetical device array type, with `A`
# the address-space parameter discussed above. The exact type parameters of
# `GPUSparseDeviceVector` may differ; consult its definition in GPUArrays.jl.
function GPUArrays.GPUSparseDeviceVector(iPtr::MyDeviceVector{Ti, A},
                                         nzVal::MyDeviceVector{Tv, A},
                                         len::Int,
                                         nnz::Ti) where {Tv, Ti, A}
    GPUSparseDeviceVector{Tv, Ti, typeof(iPtr), typeof(nzVal), A}(iPtr, nzVal, len, nnz)
end
```
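
A constructor like the one above is then typically invoked from the `Adapt.adapt_structure` method for the corresponding host-side type, along these lines. This is a sketch: `MySparseVector` and its field names are hypothetical, though `Adapt.adapt_structure(to, x)` itself is the standard Adapt.jl entry point:

```julia
using Adapt

# Hypothetical host-side sparse vector with fields `iPtr`, `nzVal`, `len`,
# and `nnz`; `to` is the adaptor converting host arrays to device arrays.
function Adapt.adapt_structure(to, x::MySparseVector)
    GPUArrays.GPUSparseDeviceVector(adapt(to, x.iPtr),
                                    adapt(to, x.nzVal),
                                    x.len,
                                    x.nnz)
end
```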