Commit 9a3b8c6

Fix handling of init_timestep in StableDiffusionGeneratorPipeline and improve its documentation.
1 parent bd74b84 commit 9a3b8c6

2 files changed: +4 -9 lines changed

invokeai/backend/stable_diffusion/diffusers_pipeline.py

Lines changed: 3 additions & 6 deletions
@@ -299,9 +299,8 @@ def latents_from_embeddings(
             HACK(ryand): seed is only used in a particular case when `noise` is None, but we need to re-generate the
             same noise used earlier in the pipeline. This should really be handled in a clearer way.
         timesteps: The timestep schedule for the denoising process.
-        init_timestep: The first timestep in the schedule.
-            TODO(ryand): I'm pretty sure this should always be the same as timesteps[0:1]. Confirm that that is the
-            case, and remove this duplicate param.
+        init_timestep: The first timestep in the schedule. This is used to determine the initial noise level, so
+            should be populated if you want noise applied *even* if timesteps is empty.
         callback: A callback function that is called to report progress during the denoising process.
         control_data: ControlNet data.
         ip_adapter_data: IP-Adapter data.
@@ -316,9 +315,7 @@ def latents_from_embeddings(
             SD UNet model.
         is_gradient_mask: A flag indicating whether `mask` is a gradient mask or not.
         """
-        # TODO(ryand): Figure out why this condition is necessary, and document it. My guess is that it's to handle
-        # cases where densoisings_start and denoising_end are set such that there are no timesteps.
-        if init_timestep.shape[0] == 0 or timesteps.shape[0] == 0:
+        if init_timestep.shape[0] == 0:
             return latents

         orig_latents = latents.clone()
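For context, the early return now checks only init_timestep because the two tensors can legitimately disagree: when denoising_start and denoising_end trim the schedule down to zero steps, timesteps is empty, yet the caller may still want the latents noised to the level at which denoising would have started. A minimal, self-contained sketch of that situation (not InvokeAI code; the schedule values and slice points are made up for illustration):

# Hypothetical illustration, not InvokeAI code: the schedule and the
# slice points standing in for denoising_start/denoising_end are made up.
import torch

# A 10-step timestep schedule, as a scheduler might produce it.
full_schedule = torch.linspace(999, 0, steps=10).long()

# Suppose the start/end settings select an empty run of denoising steps...
timesteps = full_schedule[4:4]      # shape (0,): no steps to execute

# ...but the caller still wants noise applied at the level where denoising
# would have started.
init_timestep = full_schedule[4:5]  # shape (1,): the noise level to apply

# Mirrors the check above: only an empty init_timestep means there is
# truly nothing to do.
if init_timestep.shape[0] == 0:
    print("no noise level requested; return latents unchanged")
else:
    print(f"noise latents to t={int(init_timestep[0])}, then run {timesteps.shape[0]} denoising steps")

Under these assumptions, checking timesteps as well would skip the noising step in exactly the case the new docstring describes.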

invokeai/backend/stable_diffusion/multi_diffusion_pipeline.py

Lines changed: 1 addition & 3 deletions
@@ -49,9 +49,7 @@ def multi_diffusion_denoise(
     ) -> torch.Tensor:
         self._check_regional_prompting(multi_diffusion_conditioning)

-        # TODO(ryand): Figure out why this condition is necessary, and document it. My guess is that it's to handle
-        # cases where densoisings_start and denoising_end are set such that there are no timesteps.
-        if init_timestep.shape[0] == 0 or timesteps.shape[0] == 0:
+        if init_timestep.shape[0] == 0:
             return latents

         batch_size, _, latent_height, latent_width = latents.shape
