Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 591, in doit
    enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask, cnet_pil_list = FaceDetailer.enhance_face(
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 547, in enhance_face
    DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for_bbox, max_size, seed, steps, cfg,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 323, in do_detail
    enhanced_image, cnet_pils = core.enhance_detail(cropped_image, model, clip, vae, guide_size, guide_size_for_bbox, max_size,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 351, in enhance_detail
    refined_latent = impact_sampling.ksampler_wrapper(model2, seed2, steps2, cfg2, sampler_name2, scheduler2, positive2, negative2,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_sampling.py", line 241, in ksampler_wrapper
    refined_latent = separated_sample(model, True, seed, advanced_steps, cfg, sampler_name, scheduler,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_sampling.py", line 214, in separated_sample
    res = sample_with_custom_noise(model, add_noise, seed, cfg, positive, negative, impact_sampler, sigmas, latent_image, noise=noise, callback=callback)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_sampling.py", line 158, in sample_with_custom_noise
    samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 48, in sample_custom
    samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device,
        sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 753, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 144, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\flux\model.py", line 181, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)
TypeError: forward_orig() takes from 7 to 9 positional arguments but 10 were given
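The traceback ends with Flux's forward calling self.forward_orig(...) with ten positional arguments, which the log reports as "forward_orig() takes from 7 to 9 positional arguments but 10 were given". This is the typical symptom of a version mismatch between ComfyUI core and one of the sampler-wrapping custom nodes: the caller was built for a newer signature that takes an extra positional argument. A minimal, self-contained sketch (the parameter names mirror comfy/ldm/flux/model.py, but this is illustrative, not ComfyUI's actual code) shows how Python produces exactly this message:

```python
# Minimal reproduction of the arity mismatch behind the error above.

def forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance,
                 control=None, transformer_options=None):
    # Older signature: 7 required parameters plus 2 optional ones,
    # i.e. at most 9 positional arguments are accepted.
    return "ok"

# A caller built against a newer signature passes a 10th positional
# argument that the installed (older) function cannot accept.
try:
    forward_orig(*range(10))
    msg = None
except TypeError as e:
    msg = str(e)

print(msg)
```

Updating ComfyUI and the custom nodes in the call chain (Impact Pack, Advanced-ControlNet) to matching versions is the usual fix, since the extra argument only exists on one side of the call.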
2024-11-18T14:14:53.745466 - set det-size: (640, 640)
2024-11-18T14:14:55.939981 - 0: 640x448 1 face, 6.0ms
2024-11-18T14:14:55.939981 - Speed: 9.7ms preprocess, 6.0ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 448)
2024-11-18T14:14:56.795626 - Detailer: segment upscale for ((167.31683, 237.2572)) | crop region (501, 711) x 1.4406915848584727 -> (721, 1024)
2024-11-18T14:14:56.812033 - Requested to load AutoencodingEngine
2024-11-18T14:14:56.812033 - Loading 1 new model
2024-11-18T14:14:57.469410 - loaded completely 0.0 319.7467155456543 True
2024-11-18T14:14:57.808004 - Requested to load Flux
2024-11-18T14:14:57.808004 - Loading 1 new model
2024-11-18T14:14:58.631223 - loaded partially 6958.787728118897 7750.20703125 0
2024-11-18T14:14:58.631223 - 0%| | 0/8 [00:00<?, ?it/s]
2024-11-18T14:14:58.647299 - !!! Exception during processing !!! forward_orig() takes from 7 to 9 positional arguments but 10 were given
2024-11-18T14:14:58.647299 - Traceback (most recent call last): [identical to the traceback above, ending in]
TypeError: forward_orig() takes from 7 to 9 positional arguments but 10 were given
2024-11-18T14:14:58.647299 - Prompt executed in 7.76 seconds
2024-11-18T14:15:30.797112 - got prompt
2024-11-18T14:15:32.463096 - model weight dtype torch.float16, manual cast: None
2024-11-18T14:15:32.463096 - model_type EPS
2024-11-18T14:15:34.215795 - Using pytorch attention in VAE
2024-11-18T14:15:34.215795 - Using pytorch attention in VAE
2024-11-18T14:15:36.317552 - WAS Node Suite: Face found with: lbpcascade_animeface.xml
2024-11-18T14:15:36.317552 - !!! Exception during processing !!!
OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:4062: error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
2024-11-18T14:15:36.354842 - Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 3121, in image_crop_face
    return self.crop_face(tensor2pil(image), cascade_xml, crop_padding_factor)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\was-node-suite-comfyui\WAS_Node_Suite.py", line 3219, in crop_face
    face_img = cv2.resize(face_img, (size, size))
cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\resize.cpp:4062: error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'
2024-11-18T14:15:36.354842 - Prompt executed in 4.89 seconds
2024-11-18T14:15:44.826135 - got prompt
2024-11-18T14:15:46.481865 - WAS Node Suite: Face found with: haarcascade_frontalface_default.xml
2024-11-18T14:15:46.542625 - Requested to load SDXLClipModel
2024-11-18T14:15:46.542625 - Loading 1 new model
2024-11-18T14:15:48.047442 - loaded completely 0.0 1560.802734375 True
2024-11-18T14:15:49.808602 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-11-18T14:15:49.866149 - find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
2024-11-18T14:15:49.915789 - find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
2024-11-18T14:15:49.960322 - find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
2024-11-18T14:15:50.465642 - find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
2024-11-18T14:15:50.545093 - find model: D:\ComfyUI_windows_portable\ComfyUI\models\insightface\models\antelopev2\scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
2024-11-18T14:15:50.545093 - set det-size: (640, 640)
2024-11-18T14:15:50.545093 - Loaded EVA02-CLIP-L-14-336 model config.
2024-11-18T14:15:50.557558 - Shape of rope freq: torch.Size([576, 64])
2024-11-18T14:15:55.748402 - Loading pretrained EVA02-CLIP-L-14-336 weights (eva_clip).
2024-11-18T14:15:56.072506 - incompatible_keys.missing_keys: visual.rope.freqs_cos/freqs_sin and visual.blocks.{0..23}.attn.rope.freqs_cos/freqs_sin (RoPE frequency buffers for the visual backbone and all 24 attention blocks)
2024-11-18T14:15:58.403227 - INFO: InsightFace detection resolution lowered to (512, 512).
2024-11-18T14:16:02.333873 - 0: 640x448 1 face, 5.5ms
2024-11-18T14:16:02.333873 - Speed: 3.2ms preprocess, 5.5ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 448)
2024-11-18T14:16:03.052164 - Detailer: segment upscale for ((162.74744, 235.1896)) | crop region (488, 705) x 1.4530818156157967 -> (709, 1024)
2024-11-18T14:16:03.071371 - Requested to load AutoencoderKL
2024-11-18T14:16:03.071371 - Loading 1 new model
2024-11-18T14:16:03.367789 - loaded completely 0.0 319.11416244506836 True
2024-11-18T14:16:03.689690 - Requested to load SDXL
2024-11-18T14:16:03.689690 - Requested to load ControlNet
2024-11-18T14:16:03.689690 - Loading 2 new models
2024-11-18T14:16:04.968702 - loaded completely 0.0 4897.0483474731445 True
2024-11-18T14:16:05.016169 - loaded partially 64.0 63.9996337890625 0
2024-11-18T14:16:22.799238 - 100%|██████████| 25/25 [00:17<00:00, 1.44it/s]
2024-11-18T14:16:22.799238 - Requested to load AutoencoderKL
2024-11-18T14:16:22.799238 - Loading 1 new model
2024-11-18T14:16:23.695148 - loaded completely 0.0 319.11416244506836 True
2024-11-18T14:16:24.641332 - Prompt executed in 39.18 seconds
2024-11-18T14:16:42.811178 - got prompt
2024-11-18T14:16:43.957132 - 0: 640x448 1 face, 47.8ms
2024-11-18T14:16:43.957132 - Speed: 0.0ms preprocess, 47.8ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 448)
2024-11-18T14:16:44.811265 - Detailer: segment upscale for ((162.74744, 235.1896)) | crop region (488, 705) x 1.4530818156157967 -> (709, 1024)
2024-11-18T14:16:44.850561 - Unloading models for lowram load.
2024-11-18T14:16:44.875000 - 0 models unloaded.
2024-11-18T14:16:45.226177 - Requested to load SDXL
2024-11-18T14:16:45.226177 - Requested to load ControlNet
2024-11-18T14:16:45.226177 - Loading 2 new models
2024-11-18T14:16:46.253747 - loaded partially 4282.283631134033 4282.2827224731445 0
2024-11-18T14:16:46.269846 - loaded partially 64.0 63.9996337890625 0
2024-11-18T14:17:06.998016 - 100%|██████████| 25/25 [00:20<00:00, 1.18it/s]
2024-11-18T14:17:06.998016 - Requested to load AutoencoderKL
2024-11-18T14:17:06.998016 - Loading 1 new model
2024-11-18T14:17:08.053601 - loaded completely 0.0 319.11416244506836 True
2024-11-18T14:17:09.035732 - Prompt executed in 25.56 seconds
2024-11-18T14:20:01.786516 - got prompt
2024-11-18T14:20:04.993642 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2024-11-18T14:20:05.359158 - [Deep Translator] Service: "GoogleTranslator"
2024-11-18T14:20:05.359158 - [Deep Translator] Proxy disabled or input field is empty!
2024-11-18T14:20:05.359158 - [Deep Translator] Authorization input field is empty!
2024-11-18T14:20:05.359158 - [Deep Translator] Service detect language disabled! Services support: DeeplTranslator, QcriTranslator, LingueeTranslator, PonsTranslator, PapagoTranslator, BaiduTranslator, MyMemoryTranslator. The selected service has its own way of detecting the language. Property "detect_lang_api_key" in Authorization data is empty or incorrect!
2024-11-18T14:20:06.366231 - HTTP Request: POST https://translate.google.com/_/TranslateWebserverUi/data/batchexecute?rpcids=MkEWBc&bl=boq_translate-webserver_20201207.13_p0&soc-app=1&soc-platform=1&soc-device=1&rt=c "HTTP/2 200 OK"
2024-11-18T14:20:08.461526 - model weight dtype torch.float8_e4m3fn, manual cast: torch.float16
2024-11-18T14:20:08.463535 - model_type FLUX
2024-11-18T14:20:13.609814 - Using pytorch attention in VAE
2024-11-18T14:20:13.611821 - Using pytorch attention in VAE
2024-11-18T14:20:18.312422 - model_path is D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\depth-anything/Depth-Anything-V2-Large\depth_anything_v2_vitl.pth
2024-11-18T14:20:18.312422 - using MLP layer as FFN
2024-11-18T14:20:23.136281 - using sdpa for attention
2024-11-18T14:20:29.910924 - [caption output] a high-resolution photograph featuring a young asian woman with light skin and long, straight brown hair, she is seated on a beige couch, with soft, diffused light streaming through sheer white curtains in the background, creating a serene and intimate atmosphere, the woman is topless and is wearing only a pair of black lace panties, her expression is neutral, with a slight hint of a smile, and she is holding a gray knitted scarf around her neck, the lighting is soft and diffused, highlighting the smooth texture of her skin and the softness of her hair, the overall mood is intimate, with the subject's natural beauty and the serene surroundings adding to the tranquil ambiance
2024-11-18T14:20:29.910924 - Offloading model...
2024-11-18T14:20:30.281120 - Requested to load SDXLClipModel
2024-11-18T14:20:30.282868 - Loading 1 new model
2024-11-18T14:20:30.616565 - loaded completely 0.0 1560.802734375 True
2024-11-18T14:20:30.783683 - ----------------------------------------
2024-11-18T14:20:30.783683 - Efficient Loader Models Cache:
2024-11-18T14:20:30.783683 - Ckpt: [1] PornMasterPro_unrealPonyV1VAE
2024-11-18T14:20:30.918571 - Requested to load FluxClipModel_
2024-11-18T14:20:30.918571 - Loading 1 new model
2024-11-18T14:20:31.834781 - loaded completely 0.0 4777.53759765625 True
2024-11-18T14:20:34.967318 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-11-18T14:20:35.047959 - find model: [antelopev2 models 1k3d68, 2d106det, genderage, glintr100, scrfd_10g_bnkps loaded again, same parameters as above]
2024-11-18T14:20:35.749765 - set det-size: (640, 640)
2024-11-18T14:20:35.749765 - Loaded EVA02-CLIP-L-14-336 model config.
2024-11-18T14:20:35.749765 - Shape of rope freq: torch.Size([576, 64])
2024-11-18T14:20:40.915537 - Loading pretrained EVA02-CLIP-L-14-336 weights (eva_clip).
2024-11-18T14:20:41.273499 - incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin', 'visual.blocks.0.attn.rope.freqs_cos', 'visual.blocks.0.attn.rope.freqs_sin', 'visual.blocks.1.attn.rope.freqs_cos', 'visual.blocks.1.attn.rope.freqs_sin', 'visual.blocks.2.attn.rope.freqs_cos', 'visual.blocks.2.attn.rope.freqs_sin', 'visual.blocks.3.attn.rope.freqs_cos', 'visual.blocks.3.attn.rope.freqs_sin', 'visual.blocks.4.attn.rope.freqs_cos', 'visual.blocks.4.attn.rope.freqs_sin', 'visual.blocks.5.attn.rope.freqs_cos', 'visual.blocks.5.attn.rope.freqs_sin', 'visual.blocks.6.attn.rope.freqs_cos', 'visual.blocks.6.attn.rope.freqs_sin', 'visual.blocks.7.attn.rope.freqs_cos', 'visual.blocks.7.attn.rope.freqs_sin', 'visual.blocks.8.attn.rope.freqs_cos', 'visual.blocks.8.attn.rope.freqs_sin', 'visual.blocks.9.attn.rope.freqs_cos', 'visual.blocks.9.attn.rope.freqs_sin', 'visual.blocks.10.attn.rope.freqs_cos', 'visual.blocks.10.attn.rope.freqs_sin', 'visual.blocks.11.attn.rope.freqs_cos', 'visual.blocks.11.attn.rope.freqs_sin', 'visual.blocks.12.attn.rope.freqs_cos', 'visual.blocks.12.attn.rope.freqs_sin', 'visual.blocks.13.attn.rope.freqs_cos', 'visual.blocks.13.attn.rope.freqs_sin', 'visual.blocks.14.attn.rope.freqs_cos', 'visual.blocks.14.attn.rope.freqs_sin', 'visual.blocks.15.attn.rope.freqs_cos', 'visual.blocks.15.attn.rope.freqs_sin', 'visual.blocks.16.attn.rope.freqs_cos', 'visual.blocks.16.attn.rope.freqs_sin', 'visual.blocks.17.attn.rope.freqs_cos', 'visual.blocks.17.attn.rope.freqs_sin', 'visual.blocks.18.attn.rope.freqs_cos', 'visual.blocks.18.attn.rope.freqs_sin', 'visual.blocks.19.attn.rope.freqs_cos', 'visual.blocks.19.attn.rope.freqs_sin', 'visual.blocks.20.attn.rope.freqs_cos', 'visual.blocks.20.attn.rope.freqs_sin', 'visual.blocks.21.attn.rope.freqs_cos', 'visual.blocks.21.attn.rope.freqs_sin', 'visual.blocks.22.attn.rope.freqs_cos', 'visual.blocks.22.attn.rope.freqs_sin', 'visual.blocks.23.attn.rope.freqs_cos', 'visual.blocks.23.attn.rope.freqs_sin']
2024-11-18T14:20:43.529885 - Loading PuLID-Flux model.
2024-11-18T14:20:47.642813 - 0: 640x448 1 face, 10.8ms
2024-11-18T14:20:47.642813 - Speed: 4.9ms preprocess, 10.8ms inference, 0.0ms postprocess per image at shape (1, 3, 640, 448)
2024-11-18T14:20:47.736982 - !!! Exception during processing !!! Allocation on device
2024-11-18T14:20:47.738997 - Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 591, in doit
    enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask, cnet_pil_list = FaceDetailer.enhance_face(
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 530, in enhance_face
    sam_mask = core.make_sam_mask(sam_model_opt, segs, image, sam_detection_hint, sam_dilation,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 712, in make_sam_mask
    detected_masks = sam_obj.predict(image, points, plabs, dilated_bbox, threshold)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 583, in predict
    predictor.set_image(image, "RGB")
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\predictor.py", line 60, in set_image
    self.set_torch_image(input_image_torch, image.shape[:2])
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\predictor.py", line 89, in set_torch_image
    self.features = self.model.image_encoder(input_image)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\modeling\image_encoder.py", line 112, in forward
    x = blk(x)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\modeling\image_encoder.py", line 174, in forward
    x = self.attn(x)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\segment_anything\modeling\image_encoder.py", line 231, in forward
    attn = (q * self.scale) @ k.transpose(-2, -1)
torch.cuda.OutOfMemoryError: Allocation on device
2024-11-18T14:20:47.741009 - Got an OOM, unloading all loaded models.
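The allocation that fails is SAM's attention score matrix at image_encoder.py line 231. Back-of-envelope arithmetic (all assumptions, since the report does not say which SAM checkpoint was loaded: a ViT-H encoder with 16 heads, 1024-px input, 16-px patches, float32 scores) suggests each un-windowed global-attention block needs on the order of a gibibyte for the q·kᵀ product alone, which is easy to hit on top of an already partially loaded FLUX model:

```python
# Rough estimate (assumption: SAM ViT-H, float32) of the attention score
# tensor allocated by: attn = (q * self.scale) @ k.transpose(-2, -1)
tokens = 64 * 64          # 1024-px input with 16-px patches -> 64x64 token grid
heads = 16                # assumed ViT-H head count
bytes_per_el = 4          # float32
attn_bytes = heads * tokens * tokens * bytes_per_el
print(f"{attn_bytes / 2**20:.0f} MiB per global-attention score tensor")  # -> 1024 MiB
```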
2024-11-18T14:20:48.988681 - Prompt executed in 45.64 seconds
2024-11-18T14:20:54.365229 - got prompt
2024-11-18T14:20:57.650551 - WARNING: PlaySound.IS_CHANGED() missing 1 required positional argument: 'self'
2024-11-18T14:20:57.775926 - 0: 640x448 1 face, 59.0ms
2024-11-18T14:20:57.777931 - Speed: 5.0ms preprocess, 59.0ms inference, 10.0ms postprocess per image at shape (1, 3, 640, 448)
2024-11-18T14:20:58.468444 - Detailer: segment upscale for ((167.31683, 237.2572)) | crop region (501, 711) x 1.4406915848584727 -> (721, 1024)
2024-11-18T14:20:58.497738 - Requested to load AutoencodingEngine
2024-11-18T14:20:58.497738 - Loading 1 new model
2024-11-18T14:20:58.570243 - loaded completely 0.0 319.7467155456543 True
2024-11-18T14:20:58.924470 - Requested to load Flux
2024-11-18T14:20:58.924470 - Loading 1 new model
2024-11-18T14:21:00.269300 - loaded partially 5403.568978118897 6192.8994140625 0
2024-11-18T14:21:00.272192 - 0%| | 0/8 [00:00<?, ?it/s]
2024-11-18T14:21:00.287888 - !!! Exception during processing !!! forward_orig() takes from 7 to 9 positional arguments but 10 were given
2024-11-18T14:21:00.291075 - Traceback (most recent call last):
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 591, in doit
    enhanced_img, cropped_enhanced, cropped_enhanced_alpha, mask, cnet_pil_list = FaceDetailer.enhance_face(
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 547, in enhance_face
    DetailerForEach.do_detail(image, segs, model, clip, vae, guide_size, guide_size_for_bbox, max_size, seed, steps, cfg,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_pack.py", line 323, in do_detail
    enhanced_image, cnet_pils = core.enhance_detail(cropped_image, model, clip, vae, guide_size, guide_size_for_bbox, max_size,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\core.py", line 351, in enhance_detail
    refined_latent = impact_sampling.ksampler_wrapper(model2, seed2, steps2, cfg2, sampler_name2, scheduler2, positive2, negative2,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_sampling.py", line 241, in ksampler_wrapper
    refined_latent = separated_sample(model, True, seed, advanced_steps, cfg, sampler_name, scheduler,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_sampling.py", line 214, in separated_sample
    res = sample_with_custom_noise(model, add_noise, seed, cfg, positive, negative, impact_sampler, sigmas, latent_image, noise=noise, callback=callback)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\impact_sampling.py", line 158, in sample_with_custom_noise
    samples = comfy.sample.sample_custom(model, noise, cfg, sampler, sigmas, positive, negative, latent_image,
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 48, in sample_custom
    samples = comfy.samplers.sample(model, noise, positive, negative, cfg, model.load_device, sampler, sigmas, model_options=model.model_options, latent_image=latent_image, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 753, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 740, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 719, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 624, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 155, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 706, in __call__
    return self.predict_noise(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 709, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 144, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ComfyUI_windows_portable\ComfyUI\comfy\ldm\flux\model.py", line 181, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control, transformer_options)
TypeError: forward_orig() takes from 7 to 9 positional arguments but 10 were given
2024-11-18T14:21:00.294816 - Prompt executed in 4.27 seconds
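The TypeError means ComfyUI core called forward_orig with more positional arguments than the method on the loaded model accepts; a plausible (but unconfirmed) cause is a custom node that replaces flux's forward_orig with a version from before an extra argument such as transformer_options was added, e.g. the PuLID-Flux node seen loading earlier in the log. This hypothetical stand-in (not ComfyUI code; all names are illustrative) reproduces the same error shape:

```python
# Hypothetical stand-in, NOT ComfyUI code: a method with an older, narrower
# signature being called by a newer caller that passes one extra positional arg.
class FluxStandIn:
    # 7 required positionals counting self, 9 maximum.
    def forward_orig(self, img, img_ids, context, txt_ids, timestep, y,
                     guidance=None, control=None):
        return img

model = FluxStandIn()
try:
    # Newer caller passes 9 args after self -> 10 positionals total.
    model.forward_orig("img", "img_ids", "context", "txt_ids", "timestep", "y",
                       "guidance", "control", "transformer_options")
except TypeError as err:
    # Message ends with: "takes from 7 to 9 positional arguments but 10 were given"
    print(err)
```

Updating both ComfyUI and the custom node that patches forward_orig so their signatures agree is the usual way out of this class of mismatch.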
Workflow too large. Please manually upload the workflow from local file system.
ComfyUI Error Report
Error Details
Stack Trace
System Information
Devices
Logs
Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Additional Context
(Please add any additional context or steps to reproduce the error here)