feat: Add task chunking support to Nuke submitter#293
rickrams wants to merge 1 commit into aws-deadline:mainline from
Conversation
Force-pushed 0c90f67 to 8bb2443
```yaml
@@ -0,0 +1,162 @@
specificationVersion: jobtemplate-2023-09
```
I don't see a need for creating a new job template. I think it would be better to update the existing job template to support chunking. When running without chunking we could just use a chunk size of 1 with targetRuntimeSeconds set to 0.
Great idea - deleted the separate chunked template. Updated default_nuke_job_template.yaml in-place with the TASK_CHUNKING extension and a CHUNK[INT] Frame parameter. ChunkSize defaults to 1 (identical to the old per-frame behavior).
| "Useful when frame render times vary. Leave at 0 to use a fixed\n" | ||
| "chunk size for all chunks." |
"Useful when frame render times vary" seems doesn't seem like the right use-case for dynamic chunk sizing. this feature relies on the scheduler estimating how long each task will take based on the duration of previous tasks.
```python
self.use_chunking_check = QCheckBox("Enable task chunking", self)
self.use_chunking_check.setToolTip(
    "Group multiple frames into chunks to reduce application startup overhead per task"
)
lyt.addWidget(self.use_chunking_check, 5, 0)
```
Rather than adding a check box for choosing whether to enable chunking, I think we're better off just using the spinner for chunk size. Customers who want no chunking can just use a chunkSize of 1 with dynamic chunking disabled.
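The semantics the reviewer suggests can be sketched as follows (illustrative only, not submitter code): with the checkbox gone, a chunk size of 1 simply degenerates to one frame per task.

```python
def chunk_frames(frames, chunk_size):
    """Group an ordered frame list into contiguous chunks of chunk_size."""
    if chunk_size < 1:
        raise ValueError("chunk_size must be >= 1")
    return [frames[i:i + chunk_size] for i in range(0, len(frames), chunk_size)]

frames = list(range(1, 11))  # frames 1..10
chunk_frames(frames, 1)      # 10 tasks of one frame each (old behavior)
chunk_frames(frames, 4)      # [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10]]
```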
Add task chunking using the OpenJD TASK_CHUNKING extension. The existing job template now uses CHUNK[INT] for the Frame parameter with contiguous chunks. Chunk size defaults to 1 (one frame per task, same as before). Users increase chunk size to group frames and reduce per-task overhead.

Changes:
- Updated default job template with TASK_CHUNKING extension, CHUNK[INT] Frame parameter, ChunkSize (default 1), and TargetChunkDuration params
- RenderSettings: chunk_size and target_chunk_duration fields (sticky)
- UI: Chunk size and target chunk duration spinners in job settings
- Docs: Updated user guide, README, and CHANGELOG

No adaptor changes needed - the existing NukeHandler.start_render() already handles the contiguous chunk frame range format (e.g. 1-10).

Note: Requires a worker agent that supports the TASK_CHUNKING extension. Service-managed fleets always use a compatible version.

Signed-off-by: Rick Ramsay <49293857+rickrams@users.noreply.github.com>
Force-pushed 8bb2443 to d893a6c
Tested by running the integration tests; they passed.

Did some manual testing; it seems to work as expected. The job bundle output tests fail, though. It doesn't seem to be a regression; the test code just has to be updated. I'll send you a zip with the results on Slack; it should be pretty simple to update the tests accordingly.

We decided to move this to a different PR that includes a fix for the job output tests: #294.



Add opt-in task chunking using the OpenJD TASK_CHUNKING extension. When enabled, frames are grouped into contiguous chunks instead of being dispatched individually, reducing per-task overhead. A future update could add non-contiguous chunking, but for now this seems useful as-is.
Changes:
- Updated default job template with TASK_CHUNKING extension, CHUNK[INT] Frame parameter, ChunkSize (default 1), and TargetChunkDuration params
- RenderSettings: chunk_size and target_chunk_duration fields (sticky)
- UI: Chunk size and target chunk duration spinners in job settings
- Docs: Updated user guide, README, and CHANGELOG
No adaptor changes needed - the existing NukeHandler.start_render() already handles the contiguous chunk frame range format (e.g. 1-10).
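To illustrate why no adaptor change is needed: a contiguous chunk arrives as an ordinary frame-range string, which expands the usual way. This is a hypothetical standalone helper, not the actual NukeHandler code.

```python
def expand_frame_range(spec: str) -> list[int]:
    """Expand a contiguous chunk spec like '1-10' (or a single frame '7')."""
    if "-" in spec:
        start, end = (int(part) for part in spec.split("-", 1))
        return list(range(start, end + 1))
    return [int(spec)]

expand_frame_range("1-10")  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
expand_frame_range("7")     # [7]
```

A chunk size of 1 produces single-frame specs like "7", so the same code path covers both the old and new dispatch formats.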
How was this change tested? Local testing of some 500 frame scenes in Nuke 15.
Please run the integration tests and paste the results below
Don't have this set up, so no.
If `installer/` was modified or a file was added/removed from `src/`, then update the installer tests and post the test results below.
Not modified.
Did you run the "Job Bundle Output Tests"? If not, why not? If so, paste the test results here.
Don't have this set up, so no.
Was this change documented?
Yes
Is this a breaking change?
No
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.