
see history result #5712

Closed
Nomination-NRB opened this issue Nov 22, 2024 · 3 comments
Labels
Potential Bug User is reporting a bug. This should be tested.

Comments

@Nomination-NRB

Expected Behavior

When I finish a task and start another one, I want to see the previous result, but I can only see the current one (as a latent preview). After it's done, I can't see either of them in the history.

image
After it's done:
image

Actual Behavior

The whole workflow:
image

Steps to Reproduce

None

Debug Logs

(comfyui) huishi@huishi:~/ComfyUI$ CUDA_VISIBLE_DEVICES=1 python main.py --listen 192.168.2.109 --port 7862
[START] Security scan
[DONE] Security scan
## ComfyUI-Manager: installing dependencies done.
** ComfyUI startup time: 2024-11-22 14:52:43.116344
** Platform: Linux
** Python version: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0]
** Python executable: /home/huishi/anaconda3/envs/comfyui/bin/python
** ComfyUI Path: /home/huishi/ComfyUI
** Log path: /home/huishi/ComfyUI/comfyui.log

Prestartup times for custom nodes:
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/rgthree-comfy
   0.5 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 24268 MB, total RAM 128639 MB
pytorch version: 2.3.1+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using pytorch cross attention
[Prompt Server] web root: /home/huishi/ComfyUI/web
Adding extra search path checkpoints /home/huishi/sd-master/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path configs /home/huishi/sd-master/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path vae /home/huishi/sd-master/stable-diffusion-webui/models/VAE
Adding extra search path loras /home/huishi/sd-master/stable-diffusion-webui/models/Lora
Adding extra search path loras /home/huishi/sd-master/stable-diffusion-webui/models/LyCORIS
Adding extra search path upscale_models /home/huishi/sd-master/stable-diffusion-webui/models/ESRGAN
Adding extra search path upscale_models /home/huishi/sd-master/stable-diffusion-webui/models/RealESRGAN
Adding extra search path upscale_models /home/huishi/sd-master/stable-diffusion-webui/models/SwinIR
Adding extra search path embeddings /home/huishi/sd-master/stable-diffusion-webui/embeddings
Adding extra search path hypernetworks /home/huishi/sd-master/stable-diffusion-webui/models/hypernetworks
Adding extra search path controlnet /home/huishi/sd-master/stable-diffusion-webui/models/ControlNet
A new version of Albumentations is available: 1.4.21 (you have 1.4.8). Upgrade using: pip install --upgrade albumentations
Please 'pip install xformers'
Nvidia APEX normalization not installed, using PyTorch LayerNorm
[comfyui_controlnet_aux] | INFO -> Using ckpts path: /home/huishi/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
/home/huishi/ComfyUI/custom_nodes/comfyui_controlnet_aux/node_wrappers/dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
### Loading: ComfyUI-Inspire-Pack (V1.6)
Total VRAM 24268 MB, total RAM 128639 MB
pytorch version: 2.3.1+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
### Loading: ComfyUI-Manager (V2.51.8)
### ComfyUI Revision: 2848 [dfe32bc8] | Released on '2024-11-22'
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
Web extensions folder found at /home/huishi/ComfyUI/web/extensions/ComfyLiterals

[rgthree-comfy] Loaded 42 magnificent nodes.

Please 'pip install xformers'
Nvidia APEX normalization not installed, using PyTorch LayerNorm
------------------------------------------
Comfyroll Studio v1.76 :  175 Nodes Loaded
------------------------------------------
** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md
** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki
------------------------------------------

Import times for custom nodes:
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/websocket_image_save.py
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Detail-Daemon
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyLiterals
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_SLK_joy_caption_two
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-eesahesNodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_JPS-Nodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/comfyui-inpaint-nodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/rgthree-comfy
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-GGUF
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-KJNodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_essentials
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/comfyui_controlnet_aux
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Advanced-ControlNet
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Manager
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Inspire-Pack
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/x-flux-comfyui
   0.0 seconds: /home/huishi/ComfyUI/custom_nodes/PuLID_ComfyUI
   0.4 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-KwaiKolorsWrapper
   1.4 seconds: /home/huishi/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux-Enhanced

Starting server

To see the GUI go to: http://192.168.2.109:7862
FETCH DATA from: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
['Hyper-FLUX.1-dev-8steps-lora.safetensors', 'anime_lora_comfy_converted.safetensors']
['Hyper-FLUX.1-dev-8steps-lora.safetensors', 'anime_lora_comfy_converted.safetensors']
FETCH DATA from: /home/huishi/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
['Hyper-FLUX.1-dev-8steps-lora.safetensors', 'anime_lora_comfy_converted.safetensors']
['Hyper-FLUX.1-dev-8steps-lora.safetensors', 'anime_lora_comfy_converted.safetensors']
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
/home/huishi/anaconda3/envs/comfyui/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
clip missing: ['text_projection.weight']
Requested to load FluxClipModel_
Loading 1 new model
loaded completely 0.0 9555.075202941895 True
Requested to load AutoencodingEngine
Loading 1 new model
loaded completely 0.0 159.87335777282715 True
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLUX
Requested to load Flux
Loading 1 new model
loaded completely 0.0 11351.004943847656 True
/home/huishi/ComfyUI/comfy/samplers.py:712: UserWarning: backend:cudaMallocAsync ignores max_split_size_mb,roundup_power2_divisions, and garbage_collect_threshold. (Triggered internally at ../c10/cuda/CUDAAllocatorConfig.cpp:309.)
  if latent_image is not None and torch.count_nonzero(latent_image) > 0: #Don't shift the empty latent image.
torch.Size([1, 1, 1296, 1080])
torch.Size([1, 1, 1296, 1080])
100%|██████████| 20/20 [01:05<00:00,  3.27s/it]
Prompt executed in 80.99 seconds
got prompt
torch.Size([1, 1, 872, 744])
torch.Size([1, 1, 872, 744])
100%|██████████| 20/20 [00:29<00:00,  1.50s/it]
Prompt executed in 31.28 seconds
got prompt
torch.Size([1, 1, 1112, 1072])
torch.Size([1, 1, 1112, 1072])
100%|██████████| 20/20 [00:55<00:00,  2.77s/it]
Prompt executed in 57.64 seconds
got prompt
torch.Size([1, 1, 1112, 1072])
torch.Size([1, 1, 1112, 1072])
100%|██████████| 20/20 [00:55<00:00,  2.80s/it]
Prompt executed in 58.18 seconds
got prompt
torch.Size([1, 1, 1112, 1072])
torch.Size([1, 1, 1112, 1072])
100%|██████████| 20/20 [00:57<00:00,  2.87s/it]
Prompt executed in 59.70 seconds

Other

No response

Nomination-NRB added the Potential Bug label on Nov 22, 2024
@LukeG89

LukeG89 commented Nov 22, 2024

To see the result, you can either click on the number to open the task

task

Or click the "picture" icon to see all the results.

image

There are a couple of feature requests in the frontend repo about previews in the Queue; we'll need to wait for those to be implemented.

@Nomination-NRB
Author

> To see the result, you can either click on the number to open the task
>
> task
>
> Or click the "picture" icon to see all the results.
>
> image
>
> There are a couple of feature requests in the frontend repo about previews in the Queue; we'll need to wait for those to be implemented.

Thanks for your reply, but I still can't see the result after following your tips:
image

@Nomination-NRB
Author

I found the problem:
If I only use the Image Comparer (rgthree) node, the image doesn't appear in the history, but if I add a Save Image node, the image does show up.

image
image
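This matches how ComfyUI's history works: only output nodes such as Save Image write an `images` entry into a task's history record, so a workflow that ends in a preview-only node leaves nothing for the history view to display. As a hedged sketch (the `/history` payload shape shown here is my reading of ComfyUI's API, and `extract_history_images` plus the sample values are hypothetical), this is roughly how image filenames could be pulled out of a history response:

```python
def extract_history_images(history: dict) -> list[str]:
    """Collect image filenames recorded in a ComfyUI /history payload.

    Only output nodes (e.g. Save Image) contribute an "images" entry,
    which is why a workflow ending in a preview-only node like
    Image Comparer (rgthree) shows nothing in the history.
    """
    filenames = []
    for entry in history.values():          # one entry per prompt_id
        for node_output in entry.get("outputs", {}).values():
            for img in node_output.get("images", []):
                filenames.append(img["filename"])
    return filenames


# Illustrative payload shaped like a /history response (values made up):
sample = {
    "abc-123": {
        "outputs": {
            "9": {"images": [{"filename": "ComfyUI_00001_.png",
                              "subfolder": "", "type": "output"}]}
        }
    }
}
print(extract_history_images(sample))  # ['ComfyUI_00001_.png']
```

In practice the fix is what the author describes: wire the final image into a Save Image node so the task's history record actually contains an output.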
