Commit 86a51f6

Docs

1 parent d532d48 commit 86a51f6

2 files changed (+74 -3)
apps/typegpu-docs/src/content/docs/fundamentals/utils.mdx

Lines changed: 73 additions & 0 deletions
@@ -109,3 +109,76 @@ The default workgroup sizes are:

The callback is not called if the global invocation id of a thread would exceed the size in any dimension.
:::

## *batch*

By default, TypeGPU pipelines and render passes are submitted to the GPU immediately.
If you want to give the GPU an opportunity to better utilize its resources, you can use the `batch` function.

The `batch` function allows you to submit multiple pipelines and render passes to the GPU in a single call.
Under the hood, it creates a `GPUCommandEncoder`,
records the commands issued by the passed callback onto it, and submits the resulting `GPUCommandBuffer` to the device.

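Below is a minimal sketch of what a batch call looks like. It assumes the `root` and `computePipeline` setup from the full example further down; both dispatches are recorded onto the same command encoder and submitted together once the callback returns.

```ts
// Minimal sketch; assumes `root` and `computePipeline` are created
// as in the full example below.
root['~unstable'].batch(() => {
  // Both dispatches are recorded onto one command encoder...
  computePipeline.dispatchWorkgroups(1);
  computePipeline.dispatchWorkgroups(1);
});
// ...and submitted to the device as a single command buffer here.
```
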
:::caution
Calling a pipeline with a performance callback always flushes the command encoder, and so do buffer writes and reads.
The table below shows when a flush occurs (and therefore when a new command encoder is created). Keep this in mind when using `batch`.
:::

| Invocation                         | Inside batch env | Outside batch env |
|------------------------------------|------------------|-------------------|
| raw pipeline                       | No Flush ❌      | Flush ✅          |
| pipeline with performance callback | Flush ✅         | Flush ✅          |
| pipeline with timestamp writes     | No Flush ❌      | Flush ✅          |
| beginRenderPass                    | No Flush ❌      | Flush ✅          |
| write                              | Flush ✅         | Flush ✅          |
| read                               | Flush ✅         | Flush ✅          |

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';

const entryFn = tgpu['~unstable'].computeFn({ workgroupSize: [7] })(() => {});
const vertexFn = tgpu['~unstable'].vertexFn({
  out: { pos: d.builtin.position },
})(() => {
  return { pos: d.vec4f() };
});
const fragmentFn = tgpu['~unstable'].fragmentFn({
  out: d.vec4f,
})(() => d.vec4f());

const root = await tgpu.init();

const renderPipeline = root['~unstable']
  .withVertex(vertexFn, {})
  .withFragment(fragmentFn, { format: 'rgba8unorm' })
  .createPipeline();

const renderPipelineWithPerformanceCallback = root['~unstable']
  .withVertex(vertexFn, {})
  .withFragment(fragmentFn, { format: 'rgba8unorm' })
  .createPipeline()
  .withPerformanceCallback(() => {});

const computePipeline = root['~unstable']
  .withCompute(entryFn)
  .createPipeline();

// ---cut---
const render = () => {
  computePipeline.dispatchWorkgroups(7, 7, 7);
  renderPipeline.draw(777);
  // more operations...

  renderPipelineWithPerformanceCallback.draw(777);
  // the performance callback forces a flush here,
  // so a new command encoder is created for subsequent commands
};

root['~unstable'].batch(render);
```

:::note
The calls in the batch callback have to be made synchronously.
This may seem limiting, but our reasoning is that if you need to wait for something inside a batch, you can simply split the batch in two!
:::
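
For example, suppose you want to read back the results of a compute pass before drawing. Here is a minimal sketch of splitting the work into two batches; the pipelines are the ones from the example above, while `resultBuffer` is a hypothetical buffer populated by the compute pass.

```ts
// Sketch: split one batch in two around an asynchronous wait.
// `computePipeline` and `renderPipeline` come from the example above;
// `resultBuffer` is a hypothetical buffer written by the compute pass.
root['~unstable'].batch(() => {
  computePipeline.dispatchWorkgroups(7, 7, 7);
});

// Await between the batches, not inside one.
// (Reads flush the command encoder anyway, as the table above shows.)
const data = await resultBuffer.read();

root['~unstable'].batch(() => {
  renderPipeline.draw(777);
});
```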

packages/typegpu/tests/batch.test.ts

Lines changed: 1 addition & 3 deletions
@@ -9,9 +9,7 @@ describe('Batch', () => {
   const vertexFn = tgpu['~unstable'].vertexFn({
     out: { pos: d.builtin.position },
   })(() => {
-    return {
-      pos: d.vec4f(),
-    };
+    return { pos: d.vec4f() };
   });
   const fragmentFn = tgpu['~unstable'].fragmentFn({
     out: d.vec4f,
