[pull] master from comfyanonymous:master #106

Status: Open. Wants to merge 1,808 commits into base: master.

Changes from all 1,808 commits:
0b9839e
Update web content to release v1.6.15 (#6324)
huchenlei Jan 3, 2025
8f29664
Change defaults in nightly package workflow.
comfyanonymous Jan 3, 2025
45671cd
Update web content to release v1.6.16 (#6335)
huchenlei Jan 3, 2025
caa6476
Update web content to release v1.6.17 (#6337)
huchenlei Jan 3, 2025
d45ebb6
Remove old unused function.
comfyanonymous Jan 4, 2025
5cbf797
Add advanced device option to clip loader nodes.
comfyanonymous Jan 5, 2025
c8a3492
Make the device an optional parameter in the clip loaders.
comfyanonymous Jan 5, 2025
b65b83a
Add update-frontend github action (#6336)
huchenlei Jan 5, 2025
7da85fa
Update CODEOWNERS (#6338)
yoland68 Jan 5, 2025
c496e53
In inner_sample, change "sigmas" to "sampler_sigmas" in transformer_o…
Kosinkadink Jan 6, 2025
916d1e1
Make ancestral samplers more deterministic.
comfyanonymous Jan 6, 2025
eeab420
Update frontend to v1.6.18 (#6368)
huchenlei Jan 6, 2025
d055325
Document get_attr and get_model_object (#6357)
huchenlei Jan 7, 2025
4209edf
Make a few more samplers deterministic.
comfyanonymous Jan 7, 2025
c515bdf
fixed: robust loading `comfy.settings.json` (#6383)
ltdrdata Jan 7, 2025
d0f3752
Properly calculate inner dim for t5 model.
comfyanonymous Jan 7, 2025
2307ff6
Improve some logging messages.
comfyanonymous Jan 9, 2025
ff83865
Cleaner handling of attention mask in ltxv model code.
comfyanonymous Jan 9, 2025
129d890
Add argument to skip the output reshaping in the attention functions.
comfyanonymous Jan 10, 2025
2ff3104
WIP support for Nvidia Cosmos 7B and 14B text to world (video) models.
comfyanonymous Jan 10, 2025
adea2be
Add edm option to ModelSamplingContinuousEDM for Cosmos.
comfyanonymous Jan 11, 2025
9c773a2
Add pyproject.toml (#6386)
huchenlei Jan 11, 2025
ee8a7ab
Fast latent preview for Cosmos.
comfyanonymous Jan 11, 2025
6c9bd11
Hooks Part 2 - TransformerOptionsHook and AdditionalModelsHook (#6377)
Kosinkadink Jan 11, 2025
42086af
Merge ruff.toml into pyproject.toml (#6431)
huchenlei Jan 11, 2025
b9d9bcb
fixed a bug where a relative path was not converted to a full path (#…
bigcat88 Jan 12, 2025
90f349f
Add res_multistep sampler from the cosmos code.
comfyanonymous Jan 12, 2025
1f1c7b7
Remove useless code.
comfyanonymous Jan 13, 2025
3aaabb1
Implement Cosmos Image/Video to World (Video) diffusion models.
comfyanonymous Jan 14, 2025
c78a456
Rewrite res_multistep sampler and implement res_multistep_cfg_pp samp…
pamparamm Jan 14, 2025
2cdbaf5
Add SetFirstSigma node (#6459)
catboxanon Jan 15, 2025
5b657f8
Allow setting start and end image in CosmosImageToVideoLatent.
comfyanonymous Jan 15, 2025
2feb8d0
Force safe loading of files in torch format on pytorch 2.4+
comfyanonymous Jan 15, 2025
cba58ff
Remove unsafe embedding load for very old pytorch.
comfyanonymous Jan 15, 2025
1709a84
Use latest python 3.12.8 the portable release.
comfyanonymous Jan 15, 2025
3baf92d
CosmosImageToVideoLatent batch_size now does something.
comfyanonymous Jan 15, 2025
2e20e39
Add minimum numpy version to requirements.txt
comfyanonymous Jan 16, 2025
55ade36
Remove python 3.8 from test-build workflow.
comfyanonymous Jan 16, 2025
bfd5dfd
3.13 doesn't work yet.
comfyanonymous Jan 16, 2025
0087611
Optimize first attention block in cosmos VAE.
comfyanonymous Jan 16, 2025
4758fb6
Lower cosmos VAE memory usage by a bit.
comfyanonymous Jan 16, 2025
25683b5
Lower cosmos diffusion model memory usage.
comfyanonymous Jan 16, 2025
6320d05
Slightly lower hunyuan video memory usage.
comfyanonymous Jan 16, 2025
9d8b6c1
More accurate memory estimation for cosmos and hunyuan video.
comfyanonymous Jan 16, 2025
23289a6
Clean up some debug lines.
comfyanonymous Jan 16, 2025
88ceb28
Tweak hunyuan memory usage factor.
comfyanonymous Jan 16, 2025
31831e6
Code refactor.
comfyanonymous Jan 16, 2025
619b8cd
Bump ComfyUI version to 0.3.11
comfyanonymous Jan 16, 2025
cca96a8
Fix cosmos VAE failing with videos longer than 121 frames.
comfyanonymous Jan 16, 2025
0aa2368
Fix some cosmos fp8 issues.
comfyanonymous Jan 16, 2025
55add50
Bump ComfyUI version to v0.3.12
comfyanonymous Jan 16, 2025
7fc3ccd
Add that nvidia cosmos is supported to the README.
comfyanonymous Jan 17, 2025
2f3ab40
Add warning when using old pytorch versions.
comfyanonymous Jan 17, 2025
507199d
Uni pc sampler now works with audio and video models.
comfyanonymous Jan 18, 2025
3a3910f
PromptServer: Return 400 for empty filename param (#6504)
catboxanon Jan 18, 2025
b1a0213
Remove comfy.samplers self-import (#6506)
catboxanon Jan 18, 2025
b4de04a
Update frontend to v1.7.14 (#6522)
comfy-pr-bot Jan 19, 2025
ebf038d
Use `torch.special.expm1` (#6388)
kit1980 Jan 19, 2025
a00e148
LatentBatch fix for video latents
comfyanonymous Jan 19, 2025
d8a7a32
Cleanup old TODO.
comfyanonymous Jan 20, 2025
fb2ad64
Add FluxDisableGuidance node to disable using the guidance embed.
comfyanonymous Jan 20, 2025
d303cb5
Add missing case to CLIPLoader.
comfyanonymous Jan 21, 2025
e857dd4
Add gradient estimation sampler (#6554)
chaObserv Jan 22, 2025
a7fe0a9
Refactor and fixes for video latents.
comfyanonymous Jan 22, 2025
d6bbe8c
Remove support for python 3.8.
comfyanonymous Jan 22, 2025
a058f52
[i18n] Add /i18n endpoint to provide all custom node translations (#6…
huchenlei Jan 22, 2025
ca69b41
Add utils/ to web server developer codeowner (#6570)
huchenlei Jan 22, 2025
f3566f0
remove some params from load 3d node (#6436)
jtydhr88 Jan 22, 2025
dfa2b6d
Remove unused function lcm in conds.py (#6572)
huchenlei Jan 23, 2025
96e2a45
Remove useless code.
comfyanonymous Jan 23, 2025
ce557cf
Remove redundant code (#6576)
webfiltered Jan 23, 2025
14ca5f5
Remove useless code.
comfyanonymous Jan 24, 2025
7fbf4b7
Update nightly pytorch ROCm command in Readme.
comfyanonymous Jan 24, 2025
6d21740
Print ComfyUI version.
comfyanonymous Jan 25, 2025
67feb05
Remove redundant code.
comfyanonymous Jan 26, 2025
4f011b9
Better CLIPTextEncode error when clip input is None.
comfyanonymous Jan 26, 2025
255edf2
Lower minimum ratio of loaded weights on Nvidia.
comfyanonymous Jan 27, 2025
1210d09
Convert `latents_ubyte` to 8-bit unsigned int before converting to CP…
shenanigansd Jan 28, 2025
13fd4d6
More friendly error messages for corrupted safetensors files.
comfyanonymous Jan 28, 2025
222f48c
Allow changing folder_paths.base_path via command line argument. (#6600)
webfiltered Jan 29, 2025
6ff2e4d
Remove logging call added in last commit.
comfyanonymous Jan 29, 2025
537c27c
Bump default cuda version in standalone package to 126.
comfyanonymous Jan 29, 2025
f9230bd
Update the python version in some workflows.
comfyanonymous Jan 29, 2025
ef85058
Bump ComfyUI version to v0.3.13
comfyanonymous Jan 29, 2025
2f98c24
Update Readme with link to instruction for Nvidia 50 series.
comfyanonymous Jan 30, 2025
8d8dc9a
Allow batch of different sigmas when noise scaling.
comfyanonymous Jan 30, 2025
541dc08
Update Readme.
comfyanonymous Jan 31, 2025
669e049
Update frontend to v1.8.12 (#6662)
comfy-pr-bot Jan 31, 2025
768e035
Add node for preview 3d animation (#6594)
jtydhr88 Jan 31, 2025
9e1d301
Only use stable cascade lora format with cascade model.
comfyanonymous Feb 1, 2025
24d6871
add disable-compres-response-body cli args; add compress middleware; …
KarryCharon Feb 2, 2025
0a0df5f
better guide message for sageattention (#6634)
ltdrdata Feb 2, 2025
44e19a2
Use maximum negative value instead of -inf for masks in text encoders.
comfyanonymous Feb 2, 2025
932ae8d
Update frontend to v1.8.13 (#6682)
comfy-pr-bot Feb 2, 2025
ed4d92b
Model merging nodes for cosmos.
comfyanonymous Feb 3, 2025
8d88bfa
allow searching for new .pt2 extension, which can contain AOTI compil…
Slickytail Feb 3, 2025
e5ea112
Support Lumina 2 model.
comfyanonymous Feb 4, 2025
3e880ac
Fix on python 3.9
comfyanonymous Feb 4, 2025
8ac2ddd
Lower the default shift of lumina to reduce artifacts.
comfyanonymous Feb 4, 2025
016b219
Add Lumina Image 2.0 to Readme.
comfyanonymous Feb 4, 2025
a57d635
Fix lumina 2 batches.
comfyanonymous Feb 5, 2025
6065300
Use regular numbers for rope in lumina model.
comfyanonymous Feb 5, 2025
94f21f9
Upcasting rope to fp32 seems to make no difference in this model.
comfyanonymous Feb 5, 2025
37cd448
Set the shift for Lumina back to 6.
comfyanonymous Feb 5, 2025
debabcc
Bump ComfyUI version to v0.3.14
comfyanonymous Feb 5, 2025
f1059b0
Remove unused GET /files API endpoint (#6714)
huchenlei Feb 5, 2025
14880e6
Remove some useless code.
comfyanonymous Feb 6, 2025
fca304d
Update frontend to v1.8.14 (#6724)
comfy-pr-bot Feb 6, 2025
b695176
fix a bug in the attn_masked redux code when using weight=1.0 (#6721)
Slickytail Feb 6, 2025
079eccc
Don't compress http response by default.
comfyanonymous Feb 7, 2025
832e3f5
Fix another small bug in attention_bias redux (#6737)
Slickytail Feb 7, 2025
af93c8d
Document which text encoder to use for lumina 2.
comfyanonymous Feb 8, 2025
43a74c0
Allow FP16 accumulation with `--fast` (#6453)
catboxanon Feb 8, 2025
3d06e1c
Make error more clear to user.
comfyanonymous Feb 8, 2025
caeb27c
res_multistep: Fix cfgpp and add ancestral samplers (#6731)
pamparamm Feb 9, 2025
095d867
Remove useless function.
comfyanonymous Feb 9, 2025
4027466
Make lumina model work with any latent resolution.
comfyanonymous Feb 10, 2025
e57d228
Fix incorrect Content-Type for WebP images (#6752)
bananasss00 Feb 11, 2025
af4b7c9
Make --force-fp16 actually force the diffusion model to be fp16.
comfyanonymous Feb 11, 2025
b124256
Fix for running via DirectML (#6542)
hisham-hchowdhu Feb 11, 2025
d9f0fcd
Cleanup.
comfyanonymous Feb 11, 2025
ab888e1
Add add_weight_wrapper function to model patcher.
comfyanonymous Feb 12, 2025
3574025
mix_ascend_bf16_infer_err (#6794)
zhoufan2956 Feb 12, 2025
1d5d658
Fix ruff.
comfyanonymous Feb 12, 2025
8773ccf
Better memory estimation for ROCm that support mem efficient attention.
comfyanonymous Feb 13, 2025
019c702
Add a way to set a different compute dtype for the model at runtime.
comfyanonymous Feb 14, 2025
042a905
Open yaml files with utf-8 encoding for extra_model_paths.yaml (#6807)
robinjhuang Feb 14, 2025
d7b4bf2
Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7
comfyanonymous Feb 14, 2025
1cd6cd6
Disable pytorch attention in VAE for AMD.
comfyanonymous Feb 14, 2025
2e21122
Add a node to set the model compute dtype for debugging.
comfyanonymous Feb 15, 2025
b3d6ae1
Update frontend to v1.9.17 (#6814)
comfy-pr-bot Feb 15, 2025
93c8607
remove light_intensity and fov from load3d (#6742)
jtydhr88 Feb 15, 2025
e2919d3
Disable bf16 on AMD GPUs that don't support it.
comfyanonymous Feb 16, 2025
d0399f4
Update frontend to v1.9.18 (#6828)
comfy-pr-bot Feb 16, 2025
61c8c70
support system prompt and cfg renorm in Lumina2 (#6795)
lzyhha Feb 16, 2025
530412c
Refactor torch version checks to be more future proof.
comfyanonymous Feb 17, 2025
8c0bae5
bf16 manual cast works on old AMD.
comfyanonymous Feb 17, 2025
31e54b7
Improve AMD arch detection.
comfyanonymous Feb 17, 2025
b07258c
Fix typo.
comfyanonymous Feb 18, 2025
acc152b
Support loading and using SkyReels-V1-Hunyuan-I2V (#6862)
kijai Feb 18, 2025
afc85cd
Add Load Image Output node (#6790)
christian-byrne Feb 18, 2025
0d4d922
Add early experimental SaveWEBM node to save .webm files.
comfyanonymous Feb 19, 2025
5715be2
Fix Hunyuan unet config detection for some models. (#6877)
maedtb Feb 19, 2025
b4d3652
fixed: crash caused by outdated incompatible aiohttp dependency (#6841)
ltdrdata Feb 19, 2025
c5be423
Fix link pointing to non-exisiting docs (#6891)
silveroxides Feb 20, 2025
29d4384
Normalize extra_model_config.yaml paths to prevent duplicates. (#6885)
robinjhuang Feb 20, 2025
12da6ef
Apparently directml supports fp16.
comfyanonymous Feb 20, 2025
d372725
Add discord channel to support section. (#6900)
robinjhuang Feb 20, 2025
f579a74
Update frontend release schedule in README. (#6908)
webfiltered Feb 21, 2025
41c30e9
Let all model memory be offloaded on nvidia.
comfyanonymous Feb 21, 2025
a6deca6
Latest mac still has the black image bug.
comfyanonymous Feb 22, 2025
072db3b
Assume the mac black image bug won't be fixed before v16.
comfyanonymous Feb 22, 2025
b50ab15
Bump ComfyUI version to v0.3.15
comfyanonymous Feb 22, 2025
aff1653
Remove some useless code.
comfyanonymous Feb 22, 2025
ace899e
Prioritize fp16 compute when using allow_fp16_accumulation
comfyanonymous Feb 23, 2025
4553891
Update installation documentation to include desktop + cli. (#6899)
robinjhuang Feb 24, 2025
96d891c
Speedup on some models by not upcasting bfloat16 to float32 on mac.
comfyanonymous Feb 24, 2025
f400760
Cleanup some lumina te code.
comfyanonymous Feb 25, 2025
6302301
WIP support for Wan t2v model.
comfyanonymous Feb 25, 2025
f37551c
Change wan rope implementation to the flux one.
comfyanonymous Feb 26, 2025
ea0f939
Fix issue with wan and other attention implementations.
comfyanonymous Feb 26, 2025
9a66bb9
Make wan work with all latent resolutions.
comfyanonymous Feb 26, 2025
189da37
Update README.md (#6960)
yoland68 Feb 26, 2025
0c32f82
Fix missing frames in SaveWEBM node.
comfyanonymous Feb 26, 2025
cb06e96
Wan seems to work with fp16.
comfyanonymous Feb 26, 2025
4ced06b
WIP support for Wan I2V model.
comfyanonymous Feb 26, 2025
0844998
Slightly better wan i2v mask implementation.
comfyanonymous Feb 26, 2025
fa62287
More code reuse in wan.
comfyanonymous Feb 26, 2025
b6fefe6
Better wan memory estimation.
comfyanonymous Feb 26, 2025
4bca736
Don't try to use clip_fea on t2v model.
comfyanonymous Feb 26, 2025
c37f15f
Add fast preview support for Wan models.
comfyanonymous Feb 26, 2025
26c7baf
Bump ComfyUI version to v0.3.16
comfyanonymous Feb 26, 2025
0270a0b
Reduce artifacts on Wan by doing the patch embedding in fp32.
comfyanonymous Feb 26, 2025
8e69e2d
Bump ComfyUI version to v0.3.17
comfyanonymous Feb 26, 2025
3ea3bc8
Fix wan issues when prompt length is long.
comfyanonymous Feb 27, 2025
89253e9
Support Cambricon MLU (#6964)
BiologicalExplosion Feb 27, 2025
92d8d15
Readme changes.
comfyanonymous Feb 27, 2025
714f728
Add to README that the Wan model is supported.
comfyanonymous Feb 27, 2025
b07f116
Bump ComfyUI version to v0.3.18
comfyanonymous Feb 27, 2025
f4dac8a
Wan code small cleanup.
comfyanonymous Feb 27, 2025
1804397
Use fp16 if checkpoint weights are fp16 and the model supports it.
comfyanonymous Feb 27, 2025
eb45434
Use fp16 for intermediate for fp8 weights with --fast if supported.
comfyanonymous Feb 28, 2025
cf0b549
--fast now takes a number as argument to indicate how fast you want it.
comfyanonymous Feb 28, 2025
4d55f16
Use enum list for --fast options (#7024)
huchenlei Mar 1, 2025
4dc6709
Rename argument in last commit and document the options.
comfyanonymous Mar 1, 2025
6f81cd8
Change defaults in WanImageToVideo node.
comfyanonymous Mar 2, 2025
9af6320
Make 2d area composition nodes work on video models.
comfyanonymous Mar 2, 2025
04cf0cc
Use comfyui_frontend_package pypi package to manage frontend dependen…
huchenlei Mar 2, 2025
6752a82
Make the missing frontend package error more obvious.
comfyanonymous Mar 2, 2025
d6e5d48
improved: better frontend package installation guide (#7047)
ltdrdata Mar 3, 2025
f86c724
Temporal area composition.
comfyanonymous Mar 3, 2025
8362199
Bump ComfyUI version to v0.3.19
comfyanonymous Mar 4, 2025
7c7c70c
Refactor skyreels i2v code.
comfyanonymous Mar 4, 2025
65042f7
Make it easier to set a custom template for hunyuan video.
comfyanonymous Mar 4, 2025
2b14065
suggest absolute full path to the `requirements.txt` instead of just …
ltdrdata Mar 5, 2025
745b136
Add update instructions for the portable.
comfyanonymous Mar 5, 2025
93fedd9
Support LTXV 0.9.5.
comfyanonymous Mar 5, 2025
9c9a7f0
Adjust ltxv memory factor.
comfyanonymous Mar 5, 2025
369b079
Fix lowvram issue with ltxv vae.
comfyanonymous Mar 5, 2025
dc134b2
Bump ComfyUI version to v0.3.20
comfyanonymous Mar 5, 2025
30e6cfb
Fix LTXVPreprocess on resolutions that are not multiples of 2.
comfyanonymous Mar 5, 2025
77633ba
Remove unused variable.
comfyanonymous Mar 5, 2025
6d45ffb
Bump ComfyUI version to v0.3.21
comfyanonymous Mar 5, 2025
872780d
fix: ltxv crop guides works with 0 keyframes (#7085)
kvochko Mar 5, 2025
a80bc82
Partially revert last commit.
comfyanonymous Mar 5, 2025
76739c2
Revert "Partially revert last commit."
comfyanonymous Mar 5, 2025
8895199
Bump ComfyUI version to v0.3.22
comfyanonymous Mar 5, 2025
52b3469
[NodeDef] Explicitly add control_after_generate to seed/noise_seed (#…
huchenlei Mar 5, 2025
c1909f3
Better argument handling of front-end-root (#7043)
silveroxides Mar 5, 2025
5d84607
Add type hint for FileLocator (#6968)
huchenlei Mar 5, 2025
85ef295
Make applying embeddings more efficient.
comfyanonymous Mar 5, 2025
0bef826
Support llava clip vision model.
comfyanonymous Mar 6, 2025
29a70ca
Support HunyuanVideo image to video model.
comfyanonymous Mar 6, 2025
0124be4
ComfyUI version v0.3.23
comfyanonymous Mar 6, 2025
dfa36e6
Fix some things breaking when embeddings fail to apply.
comfyanonymous Mar 6, 2025
a131258
ComfyUI version v0.3.24
comfyanonymous Mar 6, 2025
1650cda
Fixed: Incorrect guide message for missing frontend. (#7105)
ltdrdata Mar 6, 2025
e62d72e
Typo in node_typing.py (#7092)
JettHu Mar 6, 2025
e147415
Support fp8_scaled diffusion models that don't use fp8 matrix mult.
comfyanonymous Mar 7, 2025
70e15fd
No need for scale_input when fp8 matrix mult is disabled.
comfyanonymous Mar 7, 2025
11b1f27
Set WAN default compute dtype to fp16.
comfyanonymous Mar 7, 2025
4ab1875
Add .bat file to nightly package to run with fp16 accumulation.
comfyanonymous Mar 7, 2025
5dbd250
Update nightly instructions in readme.
comfyanonymous Mar 7, 2025
d60fe0a
Reduce size of nightly package.
comfyanonymous Mar 7, 2025
ebbb920
Add back taesd to nightly package.
comfyanonymous Mar 7, 2025
84cc9cb
Update frontend to 1.11.8 (#7119)
huchenlei Mar 8, 2025
c3d9cc4
Print the frontend version in the log.
comfyanonymous Mar 8, 2025
be4e760
Add an image_interleave option to the Hunyuan image to video encode n…
comfyanonymous Mar 8, 2025
29832b3
Warn if frontend package is older than the one in requirements.txt
comfyanonymous Mar 8, 2025
0952569
Fix stable cascade VAE on some lowvram machines.
comfyanonymous Mar 9, 2025
7395b0c
Support new hunyuan video i2v model.
comfyanonymous Mar 9, 2025
2bc4b59
ComfyUI version v0.3.25
comfyanonymous Mar 9, 2025
528d1b3
When cached_hook_patches contain weights for hooks, only use hook_bac…
Kosinkadink Mar 9, 2025
9aac21f
Fix issues with new hunyuan img2vid model and bumb version to v0.3.26
comfyanonymous Mar 9, 2025
a73410a
remove overrides
christian-byrne Mar 9, 2025
e1da98a
remove unused params (#6931)
jtydhr88 Mar 9, 2025
6f8e766
Prevent custom nodes from accidentally overwriting global modules.
comfyanonymous Mar 10, 2025
67c7184
ltxv: relax frame_idx divisibility for single frames. (#7146)
kvochko Mar 10, 2025
35e2dcf
Hack to fix broken manager.
comfyanonymous Mar 10, 2025
b779349
Temporarily revert fix to give time for people to update their nodes.
comfyanonymous Mar 10, 2025
1f138dd
Only check frontend package if using default frontend
huchenlei Mar 10, 2025
6f6349b
nit
huchenlei Mar 10, 2025
7946049
nit
huchenlei Mar 10, 2025
db9f2a3
Fix unit test
huchenlei Mar 10, 2025
65ea778
nit
huchenlei Mar 10, 2025
ca8efab
Support control loras on Wan.
comfyanonymous Mar 10, 2025
cfbe4b4
Access package version
huchenlei Mar 11, 2025
9468976
Merge pull request #7179 from comfyanonymous/ignore_fe_package
comfyanonymous Mar 11, 2025
bc219a6
Merge pull request #7143 from christian-byrne/fix-remote-widget-node
comfyanonymous Mar 11, 2025
2330754
Fix error saving some latents.
comfyanonymous Mar 11, 2025
115 changes: 98 additions & 17 deletions .ci/update_windows/update.py
@@ -1,6 +1,9 @@
 import pygit2
 from datetime import datetime
 import sys
+import os
+import shutil
+import filecmp

 def pull(repo, remote_name='origin', branch='master'):
     for remote in repo.remotes:
@@ -25,41 +28,119 @@ def pull(repo, remote_name='origin', branch='master'):

                 if repo.index.conflicts is not None:
                     for conflict in repo.index.conflicts:
-                        print('Conflicts found in:', conflict[0].path)
+                        print('Conflicts found in:', conflict[0].path) # noqa: T201
                     raise AssertionError('Conflicts, ahhhhh!!')

                 user = repo.default_signature
                 tree = repo.index.write_tree()
-                commit = repo.create_commit('HEAD',
-                                            user,
-                                            user,
-                                            'Merge!',
-                                            tree,
-                                            [repo.head.target, remote_master_id])
+                repo.create_commit('HEAD',
+                                   user,
+                                   user,
+                                   'Merge!',
+                                   tree,
+                                   [repo.head.target, remote_master_id])
                 # We need to do this or git CLI will think we are still merging.
                 repo.state_cleanup()
             else:
                 raise AssertionError('Unknown merge analysis result')

 pygit2.option(pygit2.GIT_OPT_SET_OWNER_VALIDATION, 0)
-repo = pygit2.Repository(str(sys.argv[1]))
+repo_path = str(sys.argv[1])
+repo = pygit2.Repository(repo_path)
 ident = pygit2.Signature('comfyui', 'comfy@ui')
 try:
-    print("stashing current changes")
+    print("stashing current changes") # noqa: T201
     repo.stash(ident)
 except KeyError:
-    print("nothing to stash")
+    print("nothing to stash") # noqa: T201
 backup_branch_name = 'backup_branch_{}'.format(datetime.today().strftime('%Y-%m-%d_%H_%M_%S'))
-print("creating backup branch: {}".format(backup_branch_name))
-repo.branches.local.create(backup_branch_name, repo.head.peel())
+print("creating backup branch: {}".format(backup_branch_name)) # noqa: T201
+try:
+    repo.branches.local.create(backup_branch_name, repo.head.peel())
+except:
+    pass

-print("checking out master branch")
+print("checking out master branch") # noqa: T201
 branch = repo.lookup_branch('master')
-ref = repo.lookup_reference(branch.name)
-repo.checkout(ref)
+if branch is None:
+    ref = repo.lookup_reference('refs/remotes/origin/master')
+    repo.checkout(ref)
+    branch = repo.lookup_branch('master')
+    if branch is None:
+        repo.create_branch('master', repo.get(ref.target))
+else:
+    ref = repo.lookup_reference(branch.name)
+    repo.checkout(ref)

-print("pulling latest changes")
+print("pulling latest changes") # noqa: T201
 pull(repo)

-print("Done!")
+if "--stable" in sys.argv:
+    def latest_tag(repo):
+        versions = []
+        for k in repo.references:
+            try:
+                prefix = "refs/tags/v"
+                if k.startswith(prefix):
+                    version = list(map(int, k[len(prefix):].split(".")))
+                    versions.append((version[0] * 10000000000 + version[1] * 100000 + version[2], k))
+            except:
+                pass
+        versions.sort()
+        if len(versions) > 0:
+            return versions[-1][1]
+        return None
+
+    latest_tag = latest_tag(repo)
+    if latest_tag is not None:
+        repo.checkout(latest_tag)

+print("Done!") # noqa: T201
+
+self_update = True
+if len(sys.argv) > 2:
+    self_update = '--skip_self_update' not in sys.argv
+
+update_py_path = os.path.realpath(__file__)
+repo_update_py_path = os.path.join(repo_path, ".ci/update_windows/update.py")
+
+cur_path = os.path.dirname(update_py_path)
+
+
+req_path = os.path.join(cur_path, "current_requirements.txt")
+repo_req_path = os.path.join(repo_path, "requirements.txt")
+
+
+def files_equal(file1, file2):
+    try:
+        return filecmp.cmp(file1, file2, shallow=False)
+    except:
+        return False
+
+def file_size(f):
+    try:
+        return os.path.getsize(f)
+    except:
+        return 0
+
+
+if self_update and not files_equal(update_py_path, repo_update_py_path) and file_size(repo_update_py_path) > 10:
+    shutil.copy(repo_update_py_path, os.path.join(cur_path, "update_new.py"))
+    exit()
+
+if not os.path.exists(req_path) or not files_equal(repo_req_path, req_path):
+    import subprocess
+    try:
+        subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', '-r', repo_req_path])
+        shutil.copy(repo_req_path, req_path)
+    except:
+        pass
+
+
+stable_update_script = os.path.join(repo_path, ".ci/update_windows/update_comfyui_stable.bat")
+stable_update_script_to = os.path.join(cur_path, "update_comfyui_stable.bat")
+
+try:
+    if not file_size(stable_update_script_to) > 10:
+        shutil.copy(stable_update_script, stable_update_script_to)
+except:
+    pass
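The `--stable` path in the update.py diff above picks the newest release tag by packing major/minor/patch into a single sortable integer. A standalone sketch of that ordering, using the same packing formula as the diff (the tag names below are hypothetical examples):

```python
# Sketch of the release-tag ordering used by latest_tag() in the update.py
# diff above. Tag names below are hypothetical examples.
def pack(version):
    # major/minor/patch packed into one integer so tuples sort numerically:
    # major dominates minor, minor dominates patch.
    major, minor, patch = version
    return major * 10000000000 + minor * 100000 + patch

def latest_tag(refs):
    versions = []
    for k in refs:
        prefix = "refs/tags/v"
        if k.startswith(prefix):
            try:
                version = list(map(int, k[len(prefix):].split(".")))
                versions.append((pack(version), k))
            except ValueError:
                pass  # skip tags that are not plain vX.Y.Z
    versions.sort()
    return versions[-1][1] if versions else None

tags = ["refs/tags/v0.3.9", "refs/tags/v0.3.26", "refs/heads/master", "refs/tags/v0.2.7"]
print(latest_tag(tags))  # refs/tags/v0.3.26
```

Note that v0.3.26 beats v0.3.9 because the comparison is numeric per component, not lexicographic on the tag string.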
8 changes: 7 additions & 1 deletion .ci/update_windows/update_comfyui.bat
@@ -1,2 +1,8 @@
 @echo off
 ..\python_embeded\python.exe .\update.py ..\ComfyUI\
-pause
+if exist update_new.py (
+move /y update_new.py update.py
+echo Running updater again since it got updated.
+..\python_embeded\python.exe .\update.py ..\ComfyUI\ --skip_self_update
+)
+if "%~1"=="" pause
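The .bat change above is one half of a two-stage self-update: update.py stages its newer copy as update_new.py and exits, then the wrapper swaps it in and reruns with `--skip_self_update`. A minimal sketch of the staging check under those assumptions (helper names mirror `files_equal`/`file_size` from the update.py diff; the paths below are hypothetical temp files, not the real layout):

```python
import filecmp
import os
import shutil
import tempfile

def files_equal(file1, file2):
    # Compare file contents (shallow=False), as the updater does
    try:
        return filecmp.cmp(file1, file2, shallow=False)
    except OSError:
        return False

def stage_self_update(current_py, repo_py, staged_py):
    # If the repo ships a different, non-trivial updater, stage it for the
    # wrapper .bat to `move /y update_new.py update.py` and rerun.
    try:
        repo_size = os.path.getsize(repo_py)
    except OSError:
        repo_size = 0
    if not files_equal(current_py, repo_py) and repo_size > 10:
        shutil.copy(repo_py, staged_py)
        return True
    return False

# Demo with temp files standing in for the real paths
tmp = tempfile.mkdtemp()
cur = os.path.join(tmp, "update.py")
new = os.path.join(tmp, "repo_update.py")
staged = os.path.join(tmp, "update_new.py")
with open(cur, "w") as f:
    f.write("print('old updater')\n")
with open(new, "w") as f:
    f.write("print('new updater, with more logic')\n")
print(stage_self_update(cur, new, staged))  # True: differing updater got staged
```

The size check guards against copying a truncated or empty file over a working updater; staging rather than overwriting in place avoids replacing a script while it is still running.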
3 changes: 0 additions & 3 deletions .ci/update_windows/update_comfyui_and_python_dependencies.bat

This file was deleted.

8 changes: 8 additions & 0 deletions .ci/update_windows/update_comfyui_stable.bat
@@ -0,0 +1,8 @@
+@echo off
+..\python_embeded\python.exe .\update.py ..\ComfyUI\ --stable
+if exist update_new.py (
+move /y update_new.py update.py
+echo Running updater again since it got updated.
+..\python_embeded\python.exe .\update.py ..\ComfyUI\ --skip_self_update --stable
+)
+if "%~1"=="" pause


2 changes: 1 addition & 1 deletion .ci/windows_base_files/README_VERY_IMPORTANT.txt
@@ -14,7 +14,7 @@ run_cpu.bat

 IF YOU GET A RED ERROR IN THE UI MAKE SURE YOU HAVE A MODEL/CHECKPOINT IN: ComfyUI\models\checkpoints

-You can download the stable diffusion 1.5 one from: https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt
+You can download the stable diffusion 1.5 one from: https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/blob/main/v1-5-pruned-emaonly-fp16.safetensors


 RECOMMENDED WAY TO UPDATE:
@@ -1,2 +1,2 @@
-.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --use-pytorch-cross-attention
+.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fast
 pause
@@ -0,0 +1,2 @@
+.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fast fp16_accumulation
+pause
2 changes: 2 additions & 0 deletions .gitattributes
@@ -0,0 +1,2 @@
+/web/assets/** linguist-generated
+/web/** linguist-vendored
48 changes: 48 additions & 0 deletions .github/ISSUE_TEMPLATE/bug-report.yml
@@ -0,0 +1,48 @@
+name: Bug Report
+description: "Something is broken inside of ComfyUI. (Do not use this if you're just having issues and need help, or if the issue relates to a custom node)"
+labels: ["Potential Bug"]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Before submitting a **Bug Report**, please ensure the following:
+
+        - **1:** You are running the latest version of ComfyUI.
+        - **2:** You have looked at the existing bug reports and made sure this isn't already reported.
+        - **3:** You confirmed that the bug is not caused by a custom node. You can disable all custom nodes by passing
+        `--disable-all-custom-nodes` command line argument.
+        - **4:** This is an actual bug in ComfyUI, not just a support question. A bug is when you can specify exact
+        steps to replicate what went wrong and others will be able to repeat your steps and see the same issue happen.
+
+        If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
+  - type: textarea
+    attributes:
+      label: Expected Behavior
+      description: "What you expected to happen."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Actual Behavior
+      description: "What actually happened. Please include a screenshot of the issue if possible."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Steps to Reproduce
+      description: "Describe how to reproduce the issue. Please be sure to attach a workflow JSON or PNG, ideally one that doesn't require custom nodes to test. If the bug only happens when certain custom nodes are used, most likely that custom node is what has the bug rather than ComfyUI, in which case it should be reported to the node's author."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Debug Logs
+      description: "Please copy the output from your terminal logs here."
+      render: powershell
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Other
+      description: "Any other additional information you think might be helpful."
+    validations:
+      required: false
11 changes: 11 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,11 @@
+blank_issues_enabled: true
+contact_links:
+  - name: ComfyUI Frontend Issues
+    url: https://github.com/Comfy-Org/ComfyUI_frontend/issues
+    about: Issues related to the ComfyUI frontend (display issues, user interaction bugs), please go to the frontend repo to file the issue
+  - name: ComfyUI Matrix Space
+    url: https://app.element.io/#/room/%23comfyui_space%3Amatrix.org
+    about: The ComfyUI Matrix Space is available for support and general discussion related to ComfyUI (Matrix is like Discord but open source).
+  - name: Comfy Org Discord
+    url: https://discord.gg/comfyorg
+    about: The Comfy Org Discord is available for support and general discussion related to ComfyUI.
32 changes: 32 additions & 0 deletions .github/ISSUE_TEMPLATE/feature-request.yml
@@ -0,0 +1,32 @@
+name: Feature Request
+description: "You have an idea for something new you would like to see added to ComfyUI's core."
+labels: [ "Feature" ]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Before submitting a **Feature Request**, please ensure the following:
+
+        **1:** You are running the latest version of ComfyUI.
+        **2:** You have looked to make sure there is not already a feature that does what you need, and there is not already a Feature Request listed for the same idea.
+        **3:** This is something that makes sense to add to ComfyUI Core, and wouldn't make more sense as a custom node.
+
+        If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
+  - type: textarea
+    attributes:
+      label: Feature Idea
+      description: "Describe the feature you want to see."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Existing Solutions
+      description: "Please search through available custom nodes / extensions to see if there are existing custom solutions for this. If so, please link the options you found here as a reference."
+    validations:
+      required: false
+  - type: textarea
+    attributes:
+      label: Other
+      description: "Any other additional information you think might be helpful."
+    validations:
+      required: false
32 changes: 32 additions & 0 deletions .github/ISSUE_TEMPLATE/user-support.yml
@@ -0,0 +1,32 @@
+name: User Support
+description: "Use this if you need help with something, or you're experiencing an issue."
+labels: [ "User Support" ]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Before submitting a **User Report** issue, please ensure the following:
+
+        **1:** You are running the latest version of ComfyUI.
+        **2:** You have made an effort to find public answers to your question before asking here. In other words, you googled it first, and scrolled through recent help topics.
+
+        If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
+  - type: textarea
+    attributes:
+      label: Your question
+      description: "Post your question here. Please be as detailed as possible."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Logs
+      description: "If your question relates to an issue you're experiencing, please go to `Server` -> `Logs` -> potentially set `View Type` to `Debug` as well, then copypaste all the text into here."
+      render: powershell
+    validations:
+      required: false
+  - type: textarea
+    attributes:
+      label: Other
+      description: "Any other additional information you think might be helpful."
+    validations:
+      required: false
53 changes: 53 additions & 0 deletions .github/workflows/pullrequest-ci-run.yml
@@ -0,0 +1,53 @@
+# This is the GitHub Workflow that drives full-GPU-enabled tests of pull requests to ComfyUI, when the 'Run-CI-Test' label is added
+# Results are reported as checkmarks on the commits, as well as onto https://ci.comfy.org/
+name: Pull Request CI Workflow Runs
+on:
+  pull_request_target:
+    types: [labeled]
+
+jobs:
+  pr-test-stable:
+    if: ${{ github.event.label.name == 'Run-CI-Test' }}
+    strategy:
+      fail-fast: false
+      matrix:
+        os: [macos, linux, windows]
+        python_version: ["3.9", "3.10", "3.11", "3.12"]
+        cuda_version: ["12.1"]
+        torch_version: ["stable"]
+        include:
+          - os: macos
+            runner_label: [self-hosted, macOS]
+            flags: "--use-pytorch-cross-attention"
+          - os: linux
+            runner_label: [self-hosted, Linux]
+            flags: ""
+          - os: windows
+            runner_label: [self-hosted, Windows]
+            flags: ""
+    runs-on: ${{ matrix.runner_label }}
+    steps:
+      - name: Test Workflows
+        uses: comfy-org/comfy-action@main
+        with:
+          os: ${{ matrix.os }}
+          python_version: ${{ matrix.python_version }}
+          torch_version: ${{ matrix.torch_version }}
+          google_credentials: ${{ secrets.GCS_SERVICE_ACCOUNT_JSON }}
+          comfyui_flags: ${{ matrix.flags }}
+          use_prior_commit: 'true'
+  comment:
+    if: ${{ github.event.label.name == 'Run-CI-Test' }}
+    runs-on: ubuntu-latest
+    permissions:
+      pull-requests: write
+    steps:
+      - uses: actions/github-script@v6
+        with:
+          script: |
+            github.rest.issues.createComment({
+              issue_number: context.issue.number,
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              body: '(Automated Bot Message) CI Tests are running, you can view the results at https://ci.comfy.org/?branch=${{ github.event.pull_request.number }}%2Fmerge'
+            })