Hello reader! 👋
I have been using Jotai for a bunch of projects now, large and small, and I fell in love with its programming model. My main focus as of late has shifted from client apps to working on a library called TypeGPU, a low-level abstraction on top of WebGPU that improves ergonomics and type inference, and solves the most common pain points. It's fully imperative though, and while that's great for implementing fine-grained processing, it's much harder to model data dependencies at a larger scale. A higher-level declarative API is a necessity!
There is prior art in this space, most notably Use.GPU, which uses JSX and its "Live" runtime to declare not only the 3D scene, but also the shaders in a declarative way.
Modeling data graphs in a React-like model was never my favorite though, and I always gravitated towards Jotai for that. Moreover, these data graphs could be used outside of React, in other frameworks or just in vanilla JS. So... what if an atom's state could be allocated in GPU memory, and derived atoms could calculate their state as compute shaders?
A new idea emerges: JotaiGPU!
High-level overview:
Introduce an API that mirrors atoms, but represents state and derived state that resides on the GPU instead of CPU:
`withUpload(...)` - an atom wrapper that attaches a serialization schema to a vanilla atom, so its value can be sent to the GPU (TL;DR: atom + schema).
`gpuAtom` - generic state, stored in GPU memory.
`pixelAtom` - visual state, stored in textures.
Allow building data graphs that combine both vanilla atoms and GPU atoms, so we can easily offload SIMD processing to the GPU and keep the rest on the CPU (when the parallelization benefits outweigh the serialization cost).
Enable custom state-driven visualizations that recompute only when the relevant data changes.
Type-safe data schemas, used for serialization across the CPU|GPU boundary.
End-to-end type inference of those schemas.
Transpiling TypeScript to WGSL - allowing GPU atoms and pixel atoms to be defined using regular TypeScript functions.
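The schema side of the points above builds on what TypeGPU already ships. A minimal sketch of schema-driven inference, assuming the current `typegpu/data` API (the `Boid` struct is just an illustrative example):

```ts
import * as d from 'typegpu/data';

// A struct schema describes both the WGSL memory layout
// and the JS-side shape of the data.
const Boid = d.struct({
  pos: d.vec3f,
  vel: d.vec3f,
});

// The JS-side type can be inferred straight from the schema,
// so values crossing the CPU|GPU boundary stay type-safe end to end.
type Boid = d.Infer<typeof Boid>;
// ^? { pos: d.v3f; vel: d.v3f }
```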
Compute Graph Example
```ts
import { atom, useAtom, useAtomValue } from 'jotai';
import * as d from 'typegpu/data';
import { withUpload, gpuAtom } from 'jotai-gpu';

const countAtom = withUpload(d.u32, atom(1));

// We can create derived elements from uniforms.
const doubleAtom = gpuAtom(d.u32)(() => {
  'kernel'; // <- a bundler directive that allows this code to run on the GPU.
  return countAtom.$ * 2;
});

// We can create derived atoms from elements.
const quadAtom = atom(async (get) => {
  // We have to await, since the state of `doubleAtom` has to be
  // fetched from the GPU.
  const double = await get(doubleAtom);
  return double * 2;
});

function Example() {
  // Uniforms and elements are compatible with the vanilla atom APIs
  const [count, setCount] = useAtom(countAtom);
  const quad = useAtomValue(quadAtom); // <- Will suspend

  const increment = () => {
    setCount((prev) => prev + 1);
  };

  return (
    <div>
      <button type="button" onClick={increment}>
        Click me!
      </button>
      <p>counter: {count}</p>
      <p>counter*4 (partially computed on the GPU): {quad}</p>
    </div>
  );
}
```
Minimal Visual API Example
```ts
import { vec4f } from 'typegpu/data';
import { pixelAtom, useRender } from 'jotai-gpu';

// A pure description of a pixel. In this case, every pixel is red.
const finalPixelAtom = pixelAtom(vec4f)(() => {
  'kernel'; // <- a bundler directive that allows this code to run on the GPU.
  return vec4f(1, 0, 0, 1);
});

function Example() {
  const { canvasRef } = useRender(finalPixelAtom);
  return <canvas ref={canvasRef} />;
}
```
Simple Gradient Example
```ts
import { useRef } from 'react';
import { atom } from 'jotai';
import { mix } from 'typegpu/std';
import { vec3f, vec4f } from 'typegpu/data';
import { withUpload, pixelAtom, targetResolution, useRender } from 'jotai-gpu';

// To pass data from the CPU to the GPU, we can create
// reactive *uniforms*. They differ from vanilla atoms
// by including a schema that can be used to serialize
// values when crossing the CPU|GPU boundary.
const leftColorAtom = withUpload(vec3f, atom(vec3f(0, 1, 0.9)));
const rightColorAtom = withUpload(vec3f, atom(vec3f(0, 0, 1)));

const finalPixelAtom = pixelAtom(vec4f)(({ uv }) => {
  'kernel';
  const leftColor = leftColorAtom.$;
  //    ^? v3f <- inferred from schema
  const rightColor = rightColorAtom.$;
  //    ^? v3f <- inferred from schema
  return mix(leftColor, rightColor, uv.x);
});

function Example() {
  const canvasRef = useRef();
  useRender(finalPixelAtom, { target: canvasRef });
  return <canvas width={32} height={32} ref={canvasRef} />;
}
```
"Game of Life" Example
```ts
import { atom, useAtom } from 'jotai';
import * as d from 'typegpu/data';
import { withUpload, gpuAtom, pixelAtom, frameSchedule, useRender } from 'jotai-gpu';

const { deltaTimeAtom } = frameSchedule();

const Cell = d.struct({
  alive: d.bool,
});
// ^? d.WgslStruct<{ alive: d.Bool }>

// An equivalent of a derived atom, but calculated on the GPU.
// 512x512 cells will be calculated in parallel.
const gridAtom = gpuAtom(Cell, [512, 512])(({ idx }) => {
  'kernel';
  // Subscribing to deltaTime on the frame
  // schedule causes this *element* to be
  // recomputed on every frame.
  const deltaTime = deltaTimeAtom.$;
  //    ^? number

  // Depending on the previous value of
  // itself causes the *element* to be
  // double-buffered.
  const old = gridAtom.$;
  //    ^? { alive: boolean }[][]
  const oldCell = old[idx.x][idx.y];

  // ...
  // Apply the rules of the "Game of Life"
  // ...

  return newCell;
});
// ^? GpuAtom<d.WgslStruct<{ alive: d.Bool }>, { alive: boolean }, '2d'>

const aliveColorAtom = withUpload(d.vec3f, atom(d.vec3f(1)));

const finalPixelAtom = pixelAtom(d.vec4f)(({ idx }) => {
  'kernel';
  const grid = gridAtom.$;
  //    ^? { alive: boolean }[][]
  if (grid[idx.x][idx.y].alive) {
    return d.vec4f(aliveColorAtom.$, 1);
  }
  return d.vec4f(0, 0, 0, 1);
});

function GameOfLife() {
  // Uniforms are compatible with the vanilla atom API
  const [aliveColor, setAliveColor] = useAtom(aliveColorAtom);
  const { canvasRef } = useRender(finalPixelAtom);

  // ...

  return <canvas ref={canvasRef} />;
}
```
Custom Geometries
Right now, the proposed abstraction does not cover geometries other than a full-screen quad. This is a solvable problem, but I want to marinate more on the initial proposal and idea before landing on anything concrete.
GPU atoms in the context of vanilla atoms
GPU atoms are asynchronous atoms, meaning `GpuAtom<WgslStruct<{ a: F32, b: U32 }>, { a: number, b: number }>` is assignable to `Atom<Promise<{ a: number, b: number }>>`.
Reading elements using APIs like `store.get(...)`, `useAtomValue` or `get(...)` will retrieve the data from the GPU and deserialize it according to the element's schema.
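Under this proposal, a CPU-side read of a GPU atom might look like the following (a hypothetical sketch; `jotai-gpu` and its exports are part of the proposal, not an existing package):

```ts
import { atom, createStore } from 'jotai';
import * as d from 'typegpu/data';
import { withUpload, gpuAtom } from 'jotai-gpu';

const countAtom = withUpload(d.u32, atom(3));

const squaredAtom = gpuAtom(d.u32)(() => {
  'kernel';
  return countAtom.$ * countAtom.$;
});

const store = createStore();

// Reading resolves asynchronously: the value is copied out of
// GPU memory, then deserialized according to the `d.u32` schema.
const squared = await store.get(squaredAtom);
//    ^? number
```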
Vanilla atoms in the context of GPU atoms
Inside the definitions of derived GPU atoms or pixel atoms, `.$` can only read other GPU atoms or `withUpload(...)`-wrapped atoms, not vanilla atoms! That's because vanilla atoms on their own lack schemas that define their shape for serialization.
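To make that concrete, here is a hypothetical sketch of what would and wouldn't be allowed under the proposal (again, `jotai-gpu` names are from the proposal, not an existing package):

```ts
import { atom } from 'jotai';
import * as d from 'typegpu/data';
import { withUpload, gpuAtom } from 'jotai-gpu';

const plainAtom = atom(2);                       // no schema attached
const uploadedAtom = withUpload(d.f32, atom(2)); // schema: d.f32

const derivedAtom = gpuAtom(d.f32)(() => {
  'kernel';
  const ok = uploadedAtom.$;  // fine: the schema tells us how to upload it
  // const bad = plainAtom.$; // type error: a vanilla atom has no `.$`,
  //                          // since there's no schema to serialize it with
  return ok * 2;
});
```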
Roadmap
Zero-dimensional (no parallelism) GPU atoms that compute their derived value based on other atoms, and can be asynchronously read by other atoms or with `store.get()`.
One, two and three dimensional GPU atoms
Pixel atoms, and hooks to render them onto canvases.
Double-buffering based on the element retrieving its own value.
Being able to render pixels into intermediate textures, not only onto a final canvas.
...?
I would love to hear everyone's thoughts and opinions on JotaiGPU, let me know if there's anything I can clarify! 🙏