Note: Advanced MSW-MSA Attention node parameters changed. May break workflows.
Note: This update may slightly change seeds.
- MSW-MSA attention can now work with all image sizes. When the size is incompatible, it will scale the latent, which may affect quality. Contributed by @pamparamm. Thanks!
- Scaling now tries to make the output size a multiple of 8 so it's compatible with MSW-MSA attention. This may change seeds; set `ca_latent_pixel_increment: 1` in YAML parameters for the old behavior. Note: does not apply if you use `avg_pool2d` for downscaling.
- CA downscaling now uses `adaptive_avg_pool2d` as the default method, which supports fractional downscale sizes. As far as I know, it's the same as `avg_pool2d` with integer sizes, but it's possible this will change seeds.
- Simple nodes now support an "auto" model type parameter that will try to guess the model from the latent type.
- Added a `yaml_parameters` input to the advanced nodes which allows specifying advanced/uncommon parameters. See the main README for possible settings, and the sketches after this list for examples.
- You can now use a different scale factor for width and height in RAUNet CA scaling. See `ca_downscale_factor_w` in YAML parameters.
- You can now fade out the CA scaling effect in the RAUNet node. See `ca_fadeout_start_time` and `ca_fadeout_cap` in YAML parameters.
- Simple nodes' default parameters for SDXL models adjusted to match the official HiDiffusion ones more closely.
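As a minimal sketch, this is roughly what you might paste into the new `yaml_parameters` input to restore the old scaling behavior. The parameter name comes from this release; feeding it through the advanced node's `yaml_parameters` input and the value shown are assumptions, so check the main README's YAML Parameters section for the authoritative format.

```yaml
# Hedged example: restores the pre-0.8.4 scaling behavior where the output
# size is not forced to a multiple of 8. Per the release note, this has no
# effect when avg_pool2d is used for downscaling.
ca_latent_pixel_increment: 1
```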
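Similarly, a hedged sketch combining the new per-axis CA downscale factor with the fadeout settings. The parameter names are taken from this release, but the values and exact semantics are illustrative assumptions; see the expandable "YAML Parameters" sections in the README for the real definitions and defaults.

```yaml
# Illustrative values only (not project defaults):
ca_downscale_factor_w: 4.0     # width downscale factor, independent of the height factor
ca_fadeout_start_time: 0.5     # assumed: point in sampling where the CA scaling effect starts fading
ca_fadeout_cap: 0.0            # assumed: limit applied to the faded CA scaling effect (see README)
```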
Check the expandable "YAML Parameters" sections in the main README for more information about advanced parameters added in this update.
Full Changelog: 0.8.3...0.8.4