1 file changed, +2 −2 lines changed

@@ -1527,9 +1527,9 @@ class DiffusionModelUNet(nn.Module):
     upcast_attention: if True, upcast attention operations to full precision.
     dropout_cattn: if different from zero, this will be the dropout value for the cross-attention layers.
     include_fc: whether to include the final linear layer. Default to True.
-    use_combined_linear: whether to use a single linear layer for qkv projection, default to True.
+    use_combined_linear: whether to use a single linear layer for qkv projection, default to False.
     use_flash_attention: if True, use Pytorch's inbuilt flash attention for a memory efficient attention mechanism
-        (see https://pytorch.org/docs/2.2/generated/torch.nn.functional.scaled_dot_product_attention.html).
+        (see https://pytorch.org/docs/2.2/generated/torch.nn.functional.scaled_dot_product_attention.html), default to False.
     """

     def __init__(
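For context on the `use_combined_linear` option documented above: fusing the query, key, and value projections into one linear layer is mathematically equivalent to three separate projections, since the combined weight matrix is just the three weight matrices concatenated. The sketch below is illustrative only (it is not MONAI's implementation) and uses NumPy with hypothetical names to show that equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # embedding dimension (illustrative)
x = rng.standard_normal((4, d))         # a batch of 4 token embeddings

# Separate projections: three independent weight matrices.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv

# Combined projection: one (d, 3d) matrix, split after a single matmul.
W_qkv = np.concatenate([Wq, Wk, Wv], axis=1)
q2, k2, v2 = np.split(x @ W_qkv, 3, axis=1)

# Both formulations produce identical q, k, v tensors.
assert np.allclose(q, q2) and np.allclose(k, k2) and np.allclose(v, v2)
```

The fused form trades three small matmuls for one larger one, which can be faster on GPU; the PR above only corrects the documented default (`False`), not the behavior of either code path.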