
An error occurred and I can't figure out the cause; please take a look, thanks #5751

Open
xieyao2 opened this issue Nov 24, 2024 · 0 comments
Labels
Potential Bug User is reporting a bug. This should be tested.

Comments


xieyao2 commented Nov 24, 2024

Expected Behavior

The workflow should run without errors.

Actual Behavior

An error occurred.
(Screenshot attachment did not finish uploading: FireShot Capture 014 - _inpainting局部重绘 - ComfyUI - 127.0.0.1.png)

Steps to Reproduce

The error happened while following this video tutorial: https://www.bilibili.com/video/BV1YxU9YvExe

Debug Logs

# ComfyUI Error Report
## Error Details
- **Node ID:** 156
- **Node Type:** ImageCropMerge
- **Exception Type:** RuntimeError
- **Exception Message:** The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]
## Stack Trace

  File "d:\ai\comfyui-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "d:\ai\comfyui-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "d:\ai\comfyui-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "d:\ai\comfyui-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-InpaintEasy\image_crop_merge.py", line 31, in merge_images
    result[:, crop_y:crop_y+cropped_original_height, crop_x:crop_x+cropped_original_width] = resized_image
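For context, the shape mismatch above can be reproduced with a minimal PyTorch sketch (sizes taken from the error message; variable names are illustrative, not the actual node code in image_crop_merge.py): assigning a 504x504 patch into a 367x240 slice raises exactly this RuntimeError, and a typical fix is to resize the patch to the slice's height and width before pasting it back.

```python
import torch
import torch.nn.functional as F

# Minimal reproduction of the ImageCropMerge error (sizes from the report):
# the inpainted patch came back as 504x504, but the paste region is 367x240.
result = torch.zeros(1, 367, 240, 3)        # canvas in [B, H, W, C] layout
resized_image = torch.rand(504, 504, 3)     # patch returned from inpainting

try:
    result[:, 0:367, 0:240] = resized_image  # shapes disagree -> RuntimeError
except RuntimeError as e:
    print(e)

# A shape-safe paste: interpolate the patch to the slice's H and W first.
patch = resized_image.permute(2, 0, 1).unsqueeze(0)           # [1, C, H, W]
patch = F.interpolate(patch, size=(367, 240), mode="bilinear")
result[:, 0:367, 0:240] = patch.squeeze(0).permute(1, 2, 0)   # back to [H, W, C]
```

This suggests the node's crop dimensions and the resized image got out of sync somewhere upstream in the workflow.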

System Information

  • ComfyUI Version: v0.3.4-2-gab885b33
  • Arguments: d:\ai\comfyui-aki-v1.4\main.py --auto-launch --preview-method auto --normalvram --disable-smart-memory --disable-cuda-malloc --fast
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.5.0+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4060 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 17175019520
    • VRAM Free: 14105808440
    • Torch VRAM Total: 2684354560
    • Torch VRAM Free: 894799416

Logs

2024-11-24T06:58:21.000111 - loaded completely 0.0 11350.067443847656 True
2024-11-24T06:58:21.027111 - loaded partially 64.0 62.365234375 0
2024-11-24T06:59:03.434945 - Requested to load AutoencodingEngine
2024-11-24T06:59:03.434945 - Loading 1 new model
2024-11-24T06:59:05.980480 - loaded completely 0.0 159.87335777282715 True
2024-11-24T06:59:06.558497 - Prompt executed in 72.13 seconds
2024-11-24T07:26:22.449659 - got prompt
2024-11-24T07:26:23.890745 - Requested to load FluxClipModel_
2024-11-24T07:26:23.891745 - Loading 1 new model
2024-11-24T07:26:28.212311 - loaded completely 0.0 9319.23095703125 True
2024-11-24T07:26:28.798316 - Requested to load AutoencodingEngine
2024-11-24T07:26:28.798316 - Loading 1 new model
2024-11-24T07:26:30.604608 - loaded completely 0.0 159.87335777282715 True
2024-11-24T07:26:31.298895 - Requested to load Flux
2024-11-24T07:26:31.298895 - Loading 1 new model
2024-11-24T07:26:34.681117 - loaded completely 0.0 11350.067443847656 True
2024-11-24T07:27:18.686554 - Requested to load AutoencodingEngine
2024-11-24T07:27:18.687554 - Loading 1 new model
2024-11-24T07:27:20.461729 - loaded completely 0.0 159.87335777282715 True
2024-11-24T07:27:21.258522 - Prompt executed in 58.76 seconds
2024-11-24T07:28:53.807652 - got prompt
2024-11-24T07:28:53.849653 - Requested to load Flux
2024-11-24T07:28:53.849653 - Loading 1 new model
2024-11-24T07:28:55.953422 - loaded completely 0.0 11350.067443847656 True
2024-11-24T07:29:27.386862 - Requested to load AutoencodingEngine
2024-11-24T07:29:27.386862 - Loading 1 new model
2024-11-24T07:29:29.090437 - loaded completely 0.0 159.87335777282715 True
2024-11-24T07:29:29.808382 - Prompt executed in 35.98 seconds
2024-11-24T07:29:36.986516 - got prompt
2024-11-24T07:29:37.029516 - Requested to load Flux
2024-11-24T07:29:37.029516 - Loading 1 new model
2024-11-24T07:29:39.174681 - loaded completely 0.0 11350.067443847656 True
2024-11-24T07:30:12.100866 - Requested to load AutoencodingEngine
2024-11-24T07:30:12.100866 - Loading 1 new model
2024-11-24T07:30:13.811705 - loaded completely 0.0 159.87335777282715 True
2024-11-24T07:30:14.516073 - Prompt executed in 37.51 seconds
2024-11-24T07:30:32.837094 - got prompt
2024-11-24T07:30:32.881973 - Requested to load Flux
2024-11-24T07:30:32.881973 - Loading 1 new model
2024-11-24T07:30:34.954412 - loaded completely 0.0 11350.067443847656 True
2024-11-24T07:31:06.939061 - Requested to load AutoencodingEngine
2024-11-24T07:31:06.939061 - Loading 1 new model
2024-11-24T07:31:08.666981 - loaded completely 0.0 159.87335777282715 True
2024-11-24T07:31:09.376465 - Prompt executed in 36.52 seconds
2024-11-24T07:31:21.669691 - got prompt
2024-11-24T07:31:21.714690 - Requested to load Flux
2024-11-24T07:31:21.714690 - Loading 1 new model
2024-11-24T07:31:23.706425 - loaded completely 0.0 11350.067443847656 True
2024-11-24T07:31:47.244652 - Requested to load AutoencodingEngine
2024-11-24T07:31:47.244652 - Loading 1 new model
2024-11-24T07:31:49.037418 - loaded completely 0.0 159.87335777282715 True
2024-11-24T07:31:49.783360 - Prompt executed in 28.09 seconds
2024-11-24T07:32:12.167103 - got prompt
2024-11-24T07:32:12.210103 - Requested to load FluxClipModel_
2024-11-24T07:32:12.211103 - Loading 1 new model
2024-11-24T07:32:13.921174 - loaded completely 0.0 9319.23095703125 True
2024-11-24T07:32:14.272759 - Requested to load AutoencodingEngine
2024-11-24T07:32:14.272759 - Loading 1 new model
2024-11-24T07:32:15.706712 - loaded completely 0.0 159.87335777282715 True
2024-11-24T07:32:16.419126 - Requested to load Flux
2024-11-24T07:32:16.419126 - Loading 1 new model
2024-11-24T07:32:18.584859 - loaded completely 0.0 11350.067443847656 True
2024-11-24T07:32:40.984015 - Requested to load AutoencodingEngine
2024-11-24T07:32:40.984015 - Loading 1 new model
2024-11-24T07:32:42.754985 - loaded completely 0.0 159.87335777282715 True
2024-11-24T07:32:43.522406 - Prompt executed in 31.33 seconds
2024-11-24T07:40:12.999724 - got prompt
2024-11-24T07:40:16.184691 - # 😺dzNodes: LayerStyle -> BiRefNetUltraV2 Processed 1 image(s).
2024-11-24T07:40:16.262690 - Prompt executed in 3.25 seconds
2024-11-24T07:40:39.896292 - got prompt
2024-11-24T07:40:39.988293 - Prompt executed in 0.08 seconds
2024-11-24T07:42:03.830638 - got prompt
2024-11-24T07:42:04.719108 - # 😺dzNodes: LayerStyle -> BiRefNetUltraV2 Processed 1 image(s).
2024-11-24T07:42:04.796219 - Prompt executed in 0.95 seconds
2024-11-24T07:43:04.994085 - got prompt
2024-11-24T07:43:05.740793 - # 😺dzNodes: LayerStyle -> BiRefNetUltraV2 Processed 1 image(s).
2024-11-24T07:43:05.793793 - Prompt executed in 0.79 seconds
2024-11-24T07:51:31.340756 - got prompt
2024-11-24T07:51:31.995503 - # 😺dzNodes: LayerStyle -> BiRefNetUltraV2 Processed 1 image(s).
2024-11-24T07:51:32.018503 - Prompt executed in 0.66 seconds
2024-11-24T07:51:56.894157 - got prompt
2024-11-24T07:51:56.918156 - Prompt executed in 0.01 seconds
2024-11-24T07:52:00.452842 - got prompt
2024-11-24T07:52:00.474842 - Prompt executed in 0.01 seconds
2024-11-24T07:52:05.428682 - got prompt
2024-11-24T07:52:07.028798 - # 😺dzNodes: LayerStyle -> BiRefNetUltraV2 Processed 1 image(s).
2024-11-24T07:52:07.087755 - Prompt executed in 1.65 seconds
2024-11-24T07:52:29.690577 - got prompt
2024-11-24T07:52:30.257192 - # 😺dzNodes: LayerStyle -> BiRefNetUltraV2 Processed 1 image(s).
2024-11-24T07:52:30.292192 - Prompt executed in 0.59 seconds
2024-11-24T07:59:45.677662 - got prompt
2024-11-24T07:59:45.701663 - [{'startX': -5.1830754056977675, 'startY': 51.830754056977675, 'endX': 544.2229175982656, 'endY': 666.8890355331127}]
2024-11-24T07:59:45.707662 - model_path: D:\AI\ComfyUI-aki-v1.4\models\sam2\sam2.1_hiera_large-fp16.safetensors
2024-11-24T07:59:45.707662 - Using model config: D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-segment-anything-2\sam2_configs\sam2.1_hiera_l.yaml
2024-11-24T07:59:46.963236 - combined labels: [1 1 1 1 1 1 1 0]
2024-11-24T07:59:46.963236 - combined labels shape: (8,)
2024-11-24T07:59:47.022998 - !!! Exception during processing !!! The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
    def forward(self, input):
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torchvision\transforms\transforms.py", line 354, in forward
            PIL Image or Tensor: Rescaled image.
        """
        return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias)
               ~~~~~~~~ <--- HERE
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torchvision\transforms\functional.py", line 479, in resize
        return F_pil.resize(img, size=output_size, interpolation=pil_interpolation)

    return F_t.resize(img, size=output_size, interpolation=interpolation.value, antialias=antialias)
           ~~~~~~~~~~ <--- HERE
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torchvision\transforms\_functional_tensor.py", line 467, in resize
    align_corners = False if interpolation in ["bilinear", "bicubic"] else None

    img = interpolate(img, size=size, mode=interpolation, align_corners=align_corners, antialias=antialias)
          ~~~~~~~~~~~ <--- HERE

    if interpolation == "bicubic" and out_dtype == torch.uint8:
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\functional.py", line 4565, in interpolate
        assert align_corners is not None
        if antialias:
            return torch._C._nn._upsample_bilinear2d_aa(
                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
                input, output_size, align_corners, scale_factors
            )
RuntimeError: Input and output sizes should be greater than 0, but got input (H: 615, W: 0) output (H: 1024, W: 1024)

2024-11-24T07:59:47.024321 - Traceback (most recent call last):
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-segment-anything-2\nodes.py", line 314, in segment
    model.set_image(image_np[i])
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-segment-anything-2\sam2\sam2_image_predictor.py", line 90, in set_image
    input_image = self._transforms(image)
  File "D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-segment-anything-2\sam2\utils\transforms.py", line 44, in __call__
    return self.transforms(x)
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
    def forward(self, input):
        for module in self:
            input = module(input)
                    ~~~~~~ <--- HERE
        return input
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torchvision\transforms\transforms.py", line 354, in forward
            PIL Image or Tensor: Rescaled image.
        """
        return F.resize(img, self.size, self.interpolation, self.max_size, self.antialias)
               ~~~~~~~~ <--- HERE
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torchvision\transforms\functional.py", line 479, in resize
        return F_pil.resize(img, size=output_size, interpolation=pil_interpolation)

    return F_t.resize(img, size=output_size, interpolation=interpolation.value, antialias=antialias)
           ~~~~~~~~~~ <--- HERE
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torchvision\transforms\_functional_tensor.py", line 467, in resize
    align_corners = False if interpolation in ["bilinear", "bicubic"] else None

    img = interpolate(img, size=size, mode=interpolation, align_corners=align_corners, antialias=antialias)
          ~~~~~~~~~~~ <--- HERE

    if interpolation == "bicubic" and out_dtype == torch.uint8:
  File "D:\AI\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\functional.py", line 4565, in interpolate
        assert align_corners is not None
        if antialias:
            return torch._C._nn._upsample_bilinear2d_aa(
                   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
                input, output_size, align_corners, scale_factors
            )
RuntimeError: Input and output sizes should be greater than 0, but got input (H: 615, W: 0) output (H: 1024, W: 1024)
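For reference, this TorchScript failure comes from torchvision's Resize receiving an image with zero width (note the box coordinates logged just before it include a negative startX, which can produce an empty crop). A minimal sketch of the failure mode and a guard against it, using made-up sizes and a hypothetical `safe_crop` helper that is not part of the SAM2 node code:

```python
import torch
import torch.nn.functional as F

# An empty crop (W = 0) makes any resize to 1024x1024 impossible,
# matching the "input (H: 615, W: 0)" message in the log.
img = torch.rand(1, 3, 615, 0)  # [B, C, H, W] with zero width

try:
    F.interpolate(img, size=(1024, 1024), mode="bilinear",
                  align_corners=False, antialias=True)
except RuntimeError as e:
    print(e)

# Clamping box coordinates to the image bounds before cropping avoids the crash.
def safe_crop(image, x0, y0, x1, y1):
    """Clamp box coordinates to the image bounds and reject empty regions."""
    h, w = image.shape[-2:]
    x0, x1 = max(0, int(x0)), min(w, int(x1))
    y0, y1 = max(0, int(y0)), min(h, int(y1))
    if x1 <= x0 or y1 <= y0:
        raise ValueError("box selects an empty region")
    return image[..., y0:y1, x0:x1]
```

So the likely trigger here is the drawn selection box extending past the image edge; redrawing the box fully inside the image would be a quick workaround.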


2024-11-24T07:59:47.024321 - Prompt executed in 1.33 seconds
2024-11-24T08:00:04.440825 - got prompt
2024-11-24T08:00:04.496825 - [{'startX': 97.35303166224205, 'startY': 9.086282955142591, 'endX': 122.01579968334336, 'endY': 18.172565910285183}]
2024-11-24T08:00:04.500825 - combined labels: [1 1 1 1 1 1 1 0]
2024-11-24T08:00:04.500825 - combined labels shape: (8,)
2024-11-24T08:00:06.556186 - Prompt executed in 2.10 seconds
2024-11-24T08:02:28.324799 - got prompt
2024-11-24T08:02:28.341799 - [{}]
2024-11-24T08:02:28.346799 - combined labels: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0]
2024-11-24T08:02:28.346799 - combined labels shape: (19,)
2024-11-24T08:02:28.682799 - Prompt executed in 0.35 seconds
2024-11-24T08:02:41.342076 - got prompt
2024-11-24T08:02:41.360075 - [{}]
2024-11-24T08:02:41.365075 - combined labels: [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0]
2024-11-24T08:02:41.365075 - combined labels shape: (21,)
2024-11-24T08:02:41.818693 - Prompt executed in 0.47 seconds
2024-11-24T08:07:23.760184 - got prompt
2024-11-24T08:07:23.797183 - Prompt executed in 0.02 seconds
2024-11-24T08:27:15.288851 - got prompt
2024-11-24T08:27:15.311851 - Failed to validate prompt for output 160:
2024-11-24T08:27:15.311851 - * DepthAnythingV2Preprocessor 155:
2024-11-24T08:27:15.311851 -   - Required input is missing: image
2024-11-24T08:27:15.311851 - * InpaintEasyModel 152:
2024-11-24T08:27:15.311851 -   - Required input is missing: vae
2024-11-24T08:27:15.311851 - Output will be ignored
2024-11-24T08:27:15.311851 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2024-11-24T08:27:49.746306 - got prompt
2024-11-24T08:27:50.052011 - Using xformers attention in VAE
2024-11-24T08:27:50.053011 - Using xformers attention in VAE
2024-11-24T08:27:50.208508 - model_path is D:\AI\ComfyUI-aki-v1.4\custom_nodes\comfyui_controlnet_aux\ckpts\depth-anything/Depth-Anything-V2-Large\depth_anything_v2_vitl.pth
2024-11-24T08:27:50.209932 - using MLP layer as FFN
2024-11-24T08:27:54.271430 - clip missing: ['text_projection.weight']
2024-11-24T08:27:55.851365 - Requested to load FluxClipModel_
2024-11-24T08:27:55.851365 - Loading 1 new model
2024-11-24T08:27:57.588025 - loaded completely 0.0 9319.23095703125 True
2024-11-24T08:27:58.318717 - Requested to load CLIPVisionModelProjection
2024-11-24T08:27:58.318717 - Loading 1 new model
2024-11-24T08:27:59.778899 - loaded completely 0.0 787.7150573730469 True
2024-11-24T08:27:59.912873 - Requested to load FluxClipModel_
2024-11-24T08:27:59.912873 - Loading 1 new model
2024-11-24T08:28:01.588192 - loaded completely 0.0 9319.23095703125 True
2024-11-24T08:28:02.004986 - Requested to load AutoencodingEngine
2024-11-24T08:28:02.004986 - Loading 1 new model
2024-11-24T08:28:03.452951 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:28:03.626905 - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-11-24T08:28:03.627905 - model_type FLUX
2024-11-24T08:28:07.660930 - Requested to load Flux
2024-11-24T08:28:07.660930 - Requested to load ControlNetFlux
2024-11-24T08:28:07.660930 - Loading 2 new models
2024-11-24T08:28:10.066006 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:28:10.102006 - loaded partially 64.0 62.365234375 0
2024-11-24T08:28:10.118170 - Requested to load AutoencodingEngine
2024-11-24T08:28:10.118170 - Loading 1 new model
2024-11-24T08:28:12.519824 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:28:12.586824 - Requested to load Flux
2024-11-24T08:28:12.586824 - Requested to load ControlNetFlux
2024-11-24T08:28:12.586824 - Loading 2 new models
2024-11-24T08:28:14.707122 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:28:14.721122 - loaded partially 64.0 62.365234375 0
2024-11-24T08:28:49.910808 - Requested to load AutoencodingEngine
2024-11-24T08:28:49.910808 - Loading 1 new model
2024-11-24T08:28:51.824658 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:28:51.958664 - !!! Exception during processing !!! The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]
2024-11-24T08:28:51.959663 - Traceback (most recent call last):
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-InpaintEasy\image_crop_merge.py", line 31, in merge_images
    result[:, crop_y:crop_y+cropped_original_height, crop_x:crop_x+cropped_original_width] = resized_image
RuntimeError: The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]

2024-11-24T08:28:51.994658 - Prompt executed in 62.22 seconds
2024-11-24T08:29:30.155882 - got prompt
2024-11-24T08:29:30.201881 - model_path is D:\AI\ComfyUI-aki-v1.4\custom_nodes\comfyui_controlnet_aux\ckpts\depth-anything/Depth-Anything-V2-Large\depth_anything_v2_vitl.pth
2024-11-24T08:29:30.203881 - using MLP layer as FFN
2024-11-24T08:29:33.127347 - Requested to load AutoencodingEngine
2024-11-24T08:29:33.127347 - Loading 1 new model
2024-11-24T08:29:33.158347 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:29:33.285460 - Requested to load Flux
2024-11-24T08:29:33.285460 - Requested to load ControlNetFlux
2024-11-24T08:29:33.285460 - Loading 2 new models
2024-11-24T08:29:35.528689 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:29:35.542689 - loaded partially 64.0 62.365234375 0
2024-11-24T08:29:35.547689 - Requested to load AutoencodingEngine
2024-11-24T08:29:35.547689 - Loading 1 new model
2024-11-24T08:29:37.293697 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:29:37.362698 - Requested to load Flux
2024-11-24T08:29:37.362698 - Requested to load ControlNetFlux
2024-11-24T08:29:37.362698 - Loading 2 new models
2024-11-24T08:29:39.355454 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:29:39.370454 - loaded partially 64.0 62.365234375 0
2024-11-24T08:30:05.222156 - Requested to load AutoencodingEngine
2024-11-24T08:30:05.222156 - Loading 1 new model
2024-11-24T08:30:06.904621 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:30:07.027674 - !!! Exception during processing !!! The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]
2024-11-24T08:30:07.027674 - Traceback (most recent call last):
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-InpaintEasy\image_crop_merge.py", line 31, in merge_images
    result[:, crop_y:crop_y+cropped_original_height, crop_x:crop_x+cropped_original_width] = resized_image
RuntimeError: The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]

2024-11-24T08:30:07.065263 - Prompt executed in 36.89 seconds
2024-11-24T08:33:24.019133 - got prompt
2024-11-24T08:33:24.062645 - Loading model from: D:\AI\ComfyUI-aki-v1.4\models\depthanything\depth_anything_v2_vitl_fp16.safetensors
2024-11-24T08:33:24.064646 - using MLP layer as FFN
2024-11-24T08:33:25.101392 - Requested to load AutoencodingEngine
2024-11-24T08:33:25.101392 - Loading 1 new model
2024-11-24T08:33:25.131392 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:33:25.255392 - Requested to load Flux
2024-11-24T08:33:25.255392 - Requested to load ControlNetFlux
2024-11-24T08:33:25.255392 - Loading 2 new models
2024-11-24T08:33:27.490463 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:33:27.507463 - loaded partially 64.0 62.365234375 0
2024-11-24T08:33:27.512203 - Requested to load AutoencodingEngine
2024-11-24T08:33:27.512203 - Loading 1 new model
2024-11-24T08:33:29.233266 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:33:29.302265 - Requested to load Flux
2024-11-24T08:33:29.302265 - Requested to load ControlNetFlux
2024-11-24T08:33:29.302265 - Loading 2 new models
2024-11-24T08:33:31.235244 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:33:31.250244 - loaded partially 64.0 62.365234375 0
2024-11-24T08:33:56.670201 - Requested to load AutoencodingEngine
2024-11-24T08:33:56.670201 - Loading 1 new model
2024-11-24T08:33:58.343183 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:33:58.464243 - !!! Exception during processing !!! The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]
2024-11-24T08:33:58.464243 - Traceback (most recent call last):
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-InpaintEasy\image_crop_merge.py", line 31, in merge_images
    result[:, crop_y:crop_y+cropped_original_height, crop_x:crop_x+cropped_original_width] = resized_image
RuntimeError: The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]

2024-11-24T08:33:58.498244 - Prompt executed in 34.45 seconds
2024-11-24T08:36:01.079710 - got prompt
2024-11-24T08:36:01.123671 - Requested to load CLIPVisionModelProjection
2024-11-24T08:36:01.123671 - Loading 1 new model
2024-11-24T08:36:01.274988 - loaded completely 0.0 787.7150573730469 True
2024-11-24T08:36:01.493715 - Requested to load AutoencodingEngine
2024-11-24T08:36:01.493715 - Loading 1 new model
2024-11-24T08:36:01.620371 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:36:01.742491 - Requested to load Flux
2024-11-24T08:36:01.742491 - Requested to load ControlNetFlux
2024-11-24T08:36:01.742491 - Loading 2 new models
2024-11-24T08:36:03.761068 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:36:03.776068 - loaded partially 64.0 62.365234375 0
2024-11-24T08:36:03.780893 - Requested to load AutoencodingEngine
2024-11-24T08:36:03.780893 - Loading 1 new model
2024-11-24T08:36:05.413431 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:36:05.481959 - Requested to load Flux
2024-11-24T08:36:05.481959 - Requested to load ControlNetFlux
2024-11-24T08:36:05.481959 - Loading 2 new models
2024-11-24T08:36:07.417086 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:36:07.430989 - loaded partially 64.0 62.365234375 0
2024-11-24T08:36:32.606988 - Requested to load AutoencodingEngine
2024-11-24T08:36:32.606988 - Loading 1 new model
2024-11-24T08:36:34.347150 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:36:34.472165 - !!! Exception during processing !!! The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]
2024-11-24T08:36:34.473165 - Traceback (most recent call last):
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-InpaintEasy\image_crop_merge.py", line 31, in merge_images
    result[:, crop_y:crop_y+cropped_original_height, crop_x:crop_x+cropped_original_width] = resized_image
RuntimeError: The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]

2024-11-24T08:36:34.510165 - Prompt executed in 33.41 seconds
2024-11-24T08:37:45.180435 - got prompt
2024-11-24T08:37:45.224351 - Requested to load Flux
2024-11-24T08:37:45.224351 - Requested to load ControlNetFlux
2024-11-24T08:37:45.224351 - Loading 2 new models
2024-11-24T08:37:47.212601 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:37:47.227601 - loaded partially 64.0 62.365234375 0
2024-11-24T08:37:47.232601 - Requested to load AutoencodingEngine
2024-11-24T08:37:47.232601 - Loading 1 new model
2024-11-24T08:37:48.998541 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:37:49.065541 - Requested to load Flux
2024-11-24T08:37:49.065541 - Requested to load ControlNetFlux
2024-11-24T08:37:49.065541 - Loading 2 new models
2024-11-24T08:37:51.037473 - loaded completely 0.0 11350.067443847656 True
2024-11-24T08:37:51.051872 - loaded partially 64.0 62.365234375 0
2024-11-24T08:38:16.353822 - Requested to load AutoencodingEngine
2024-11-24T08:38:16.353822 - Loading 1 new model
2024-11-24T08:38:18.076179 - loaded completely 0.0 159.87335777282715 True
2024-11-24T08:38:18.200747 - !!! Exception during processing !!! The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]
2024-11-24T08:38:18.200747 - Traceback (most recent call last):
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "d:\ai\comfyui-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\AI\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-InpaintEasy\image_crop_merge.py", line 31, in merge_images
    result[:, crop_y:crop_y+cropped_original_height, crop_x:crop_x+cropped_original_width] = resized_image
RuntimeError: The expanded size of the tensor (240) must match the existing size (504) at non-singleton dimension 2.  Target sizes: [1, 367, 240, 3].  Tensor sizes: [504, 504, 3]

2024-11-24T08:38:18.235747 - Prompt executed in 33.03 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

Workflow too large. Please manually upload the workflow from local file system.

Additional Context

(Please add any additional context or steps to reproduce the error here)



### Other

_No response_
@xieyao2 xieyao2 added the Potential Bug User is reporting a bug. This should be tested. label Nov 24, 2024