Expected Behavior
When I finish a task and start another one, I want to still be able to see the previous result. Instead, I can only see the current result (as a latent preview), and after the second task is done, I can't see both results in the history.

After it is done:
Actual Behavior
The whole workflow:
Steps to Reproduce
None
Debug Logs
(comfyui) huishi@huishi:~/ComfyUI$ CUDA_VISIBLE_DEVICES=1 python main.py --listen 192.168.2.109 --port 7862
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-11-22 14:52:43.116344
** Platform: Linux
** Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0]
** Python executable: /home/huishi/anaconda3/envs/comfyui/bin/python
** ComfyUI Path: /home/huishi/ComfyUI
** Log path: /home/huishi/ComfyUI/comfyui.log
Prestartup times for custom nodes:
0.0 seconds: /home/huishi/ComfyUI/custom_nodes/rgthree-comfy
0.5 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Manager
Total VRAM 24268 MB, total RAM 128639 MB
pytorch version: 2.3.1+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using pytorch cross attention
[PromptServer] web root: /home/huishi/ComfyUI/web
Adding extra search path checkpoints /home/huishi/sd-master/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path configs /home/huishi/sd-master/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path vae /home/huishi/sd-master/stable-diffusion-webui/models/VAE
Adding extra search path loras /home/huishi/sd-master/stable-diffusion-webui/models/Lora
Adding extra search path loras /home/huishi/sd-master/stable-diffusion-webui/models/LyCORIS
Adding extra search path upscale_models /home/huishi/sd-master/stable-diffusion-webui/models/ESRGAN
Adding extra search path upscale_models /home/huishi/sd-master/stable-diffusion-webui/models/RealESRGAN
Adding extra search path upscale_models /home/huishi/sd-master/stable-diffusion-webui/models/SwinIR
Adding extra search path embeddings /home/huishi/sd-master/stable-diffusion-webui/embeddings
Adding extra search path hypernetworks /home/huishi/sd-master/stable-diffusion-webui/models/hypernetworks
Adding extra search path controlnet /home/huishi/sd-master/stable-diffusion-webui/models/ControlNet
A new version of Albumentations is available: 1.4.21 (you have 1.4.8). Upgrade using: pip install --upgrade albumentations
Please 'pip install xformers'
Nvidia APEX normalization not installed, using PyTorch LayerNorm
[comfyui_controlnet_aux] | INFO -> Using ckpts path: /home/huishi/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
/home/huishi/ComfyUI/custom_nodes/comfyui_controlnet_aux/node_wrappers/dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
### Loading: ComfyUI-Inspire-Pack (V1.6)
Total VRAM 24268 MB, total RAM 128639 MB
pytorch version: 2.3.1+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
### Loading: ComfyUI-Manager (V2.51.8)
### ComfyUI Revision: 2848 [dfe32bc8] | Released on '2024-11-22'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
Web extensions folder found at /home/huishi/ComfyUI/web/extensions/ComfyLiterals
[rgthree-comfy] Loaded 42 magnificent nodes.
Please 'pip install xformers'
Nvidia APEX normalization not installed, using PyTorch LayerNorm
------------------------------------------
Comfyroll Studio v1.76 : 175 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------
Import times for custom nodes:
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/websocket_image_save.py
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Detail-Daemon
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyLiterals
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_SLK_joy_caption_two
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-eesahesNodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_JPS-Nodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/comfyui-inpaint-nodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/rgthree-comfy
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-GGUF
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-KJNodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_essentials
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/comfyui_controlnet_aux
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Manager
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Inspire-Pack
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/x-flux-comfyui
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/PuLID_ComfyUI
   0.4 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper
   1.4 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux-Enhanced
Starting server
To see the GUI go to: http://192.168.2.109:7862
FETCH DATA from: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
['Hyper-FLUX.1-dev-8steps-lora.safetensors', 'anime_lora_comfy_converted.safetensors']
['Hyper-FLUX.1-dev-8steps-lora.safetensors', 'anime_lora_comfy_converted.safetensors']
FETCH DATA from: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
['Hyper-FLUX.1-dev-8steps-lora.safetensors', 'anime_lora_comfy_converted.safetensors']
['Hyper-FLUX.1-dev-8steps-lora.safetensors', 'anime_lora_comfy_converted.safetensors']
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
/home/huishi/anaconda3/envs/comfyui/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
clip missing: ['text_projection.weight']
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9555.075202941895 True
Requested to load AutoencodingEngine
Loading 1 new model
loaded completely 0.0 159.87335777282715 True
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLUX
Requested to load Flux
Loading 1 new model
loaded completely 0.0 11351.004943847656 True
/home/huishi/ComfyUI/comfy/samplers.py:712: UserWarning: backend:cudaMallocAsync ignores max_split_size_mb, roundup_power2_divisions, and garbage_collect_threshold. (Triggered internally at ../c10/cuda/CUDAAllocatorConfig.cpp:309.)
  if latent_image is not None and torch.count_nonzero(latent_image) > 0: #Don't shift the empty latent image.
torch.Size([1, 1, 1296, 1080])
torch.Size([1, 1, 1296, 1080])
100%|██████████| 20/20 [01:05<00:00,  3.27s/it]
Prompt executed in 80.99 seconds
got prompt
torch.Size([1, 1, 872, 744])
torch.Size([1, 1, 872, 744])
100%|██████████| 20/20 [00:29<00:00,  1.50s/it]
Prompt executed in 31.28 seconds
got prompt
torch.Size([1, 1, 1112, 1072])
torch.Size([1, 1, 1112, 1072])
100%|██████████| 20/20 [00:55<00:00,  2.77s/it]
Prompt executed in 57.64 seconds
got prompt
torch.Size([1, 1, 1112, 1072])
torch.Size([1, 1, 1112, 1072])
100%|██████████| 20/20 [00:55<00:00,  2.80s/it]
Prompt executed in 58.18 seconds
got prompt
torch.Size([1, 1, 1112, 1072])
torch.Size([1, 1, 1112, 1072])
100%|██████████| 20/20 [00:57<00:00,  2.87s/it]
Prompt executed in 59.70 seconds
Other
No response
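One detail that may help triage: the log shows every prompt executing ("Prompt executed in ... seconds"), so the question is whether the results are missing from the server's execution history or only from the UI. A minimal sketch for checking the server side by querying ComfyUI's `/history` HTTP endpoint (the address comes from the startup log above; `summarize_history` is a hypothetical helper, and the exact shape of the returned JSON may differ between ComfyUI versions):

```python
import json
from urllib.request import urlopen

# Address taken from the startup log; adjust to your server.
SERVER = "http://192.168.2.109:7862"

def fetch_history(server: str = SERVER) -> dict:
    """GET /history: returns a JSON object keyed by prompt_id,
    one entry per executed prompt (field names assumed here)."""
    with urlopen(f"{server}/history") as resp:
        return json.load(resp)

def summarize_history(history: dict) -> list:
    """Reduce a /history payload to (prompt_id, number_of_output_nodes) pairs."""
    return [(pid, len(entry.get("outputs", {}))) for pid, entry in history.items()]

# Usage (requires the server to be running):
#   for pid, n_outputs in summarize_history(fetch_history()):
#       print(pid, n_outputs)
```

If every executed prompt shows up here with non-empty outputs, the results are being recorded and the problem is limited to the frontend's history view.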