Your current environment

Error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] File "/root/miniconda3/envs/minicpmv/lib/python3.10/site-packages/transformers/models/idefics2/modeling_idefics2.py", line 617, in forward
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] hidden_states = self.embeddings(pixel_values=pixel_values, patch_attention_mask=patch_attention_mask)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] File "/root/miniconda3/envs/minicpmv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] File "/root/miniconda3/envs/minicpmv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] File "/root/miniconda3/envs/minicpmv/lib/python3.10/site-packages/transformers/models/idefics2/modeling_idefics2.py", line 162, in forward
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] patch_embeds = self.patch_embedding(pixel_values)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] File "/root/miniconda3/envs/minicpmv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] File "/root/miniconda3/envs/minicpmv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] File "/root/miniconda3/envs/minicpmv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 460, in forward
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] return self._conv_forward(input, self.weight, self.bias)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] File "/root/miniconda3/envs/minicpmv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] return F.conv2d(input, weight, bias, self.stride,
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226] RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper_CUDA__cudnn_convolution)
(VllmWorkerProcess pid=1470366) ERROR 07-26 13:31:20 multiproc_worker_utils.py:226]
^C^C^C^C^C/root/miniconda3/envs/minicpmv/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Killed
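For context, here is a minimal sketch of the kind of launch that matches this traceback. It is not the reporter's actual code (none was included): the two workers and the cuda:0/cuda:1 devices in the error suggest tensor_parallel_size=2, and the conda env name "minicpmv" suggests a MiniCPM-V checkpoint, so the model path, prompt, and image below are placeholders.

```python
# Hypothetical reproduction sketch, assuming 2 GPUs with tensor parallelism and a
# MiniCPM-V checkpoint served through vLLM's offline LLM API. Paths and prompt
# formatting are placeholders, not taken from the report.
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/MiniCPM-V",  # placeholder checkpoint path
    trust_remote_code=True,
    tensor_parallel_size=2,      # two workers -> the cuda:0 / cuda:1 pair in the traceback
)

image = Image.open("example.jpg")  # placeholder image
sampling_params = SamplingParams(temperature=0.0, max_tokens=128)

# vLLM's multimodal input format: a prompt plus multi_modal_data for the image.
outputs = llm.generate(
    {
        "prompt": "Describe this image.",
        "multi_modal_data": {"image": image},
    },
    sampling_params,
)
print(outputs[0].outputs[0].text)
```

Under this kind of setup, the failure occurs inside the Idefics2 vision embedding (patch_embedding conv2d) when the image tensor and the convolution weight end up on different GPUs.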
How would you like to use vllm
No response