module 'torch' has no attribute 'compiler' #24
(venv) D:\talkingface\V-Express>python inference.py --reference_image_path "./test_samples/short_case/tys/ref.jpg" --audio_path "./test_samples/short_case/tys/aud.mp3" --output_path "./output/short_case/talk_tys_fix_face.mp4" --retarget_strategy "fix_face" --num_inference_steps 25
(venv) D:\talkingface\V-Express>
I see in your previous log there is this command:
python inference.py \
--reference_image_path "./test_samples/short_case/AOC/ref.jpg" \
--audio_path "./test_samples/short_case/AOC/chattts.mp3" \
--output_path "./output/short_case/talk_AOC_chattts_fix_face.mp4" \
--retarget_strategy "fix_face" \
--num_inference_steps 25 \
--device "cpu"
I followed your instructions about pip-installing the packages.
(venv) D:\talkingface\V-Express>python inference.py --reference_image_path "./test_samples/short_case/tys/ref.jpg" --audio_path "./test_samples/short_case/tys/aud.mp3" --output_path "./output/short_case/talk_tys_fix_face.mp4" --retarget_strategy "fix_face" --num_inference_steps 25 --device "cpu"
(venv) D:\talkingface\V-Express>
At least it will work; then I will know the only thing I need is PyTorch, not torch.
What is the GPU version of torch?
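As a rough answer to the question above: "GPU torch" means a PyTorch wheel built with CUDA support (version suffix like `+cu118` or `+cu121`), while the `2.0.1+cpu` seen later in this thread is a CPU-only wheel. A minimal sketch for checking which build you have (the function name `torch_build_info` is my own, not part of any library):

```python
import importlib.util


def torch_build_info():
    """Describe the installed torch build: CUDA-enabled or CPU-only.

    A "+cpu" suffix in torch.__version__ (as in the log in this thread,
    2.0.1+cpu) indicates a CPU-only wheel; a "+cu118"/"+cu121" suffix
    indicates a CUDA wheel.
    """
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed"
    import torch

    cuda = getattr(torch.version, "cuda", None)  # e.g. "11.8"; None on CPU-only builds
    if cuda:
        return f"torch {torch.__version__} built for CUDA {cuda}"
    return f"torch {torch.__version__} (CPU-only build)"


print(torch_build_info())
```

Note that a CUDA build can still fail to use the GPU at runtime (driver issues, etc.); `torch.cuda.is_available()` is the usual runtime check for that.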
(venv) D:\Talking Pictures\V-Express>python inference.py --reference_image_path "./test_samples/short_case/10/ref.jpg" --audio_path "./test_samples/short_case/10/aud.mp3" --output_path "./output/short_case/talk_AOC_chattts_fix_face.mp4" --retarget_strategy "fix_face" --num_inference_steps 25 --device "cpu" --dtype fp32
(venv) D:\Talking Pictures\V-Express>
I made a full tutorial in case you still couldn't make it work with Python 3.10, CUDA 11.8, and a venv.
You can find information about it here.
It is a completely free tutorial for Windows. Let me know if you are still facing issues.
(venv) D:\talkingface\V-Express>python inference.py --reference_image_path "./test_samples/short_case/10/ref.jpg" --audio_path "./test_samples/short_case/10/aud.mp3" --kps_path "./test_samples/short_case/10/kps.pth" --output_path "./output/short_case/talk_10_no_retarget.mp4" --retarget_strategy "no_retarget" --num_inference_steps 25
D:\talkingface\V-Express\venv\lib\site-packages\torchaudio\backend\utils.py:74: UserWarning: No audio backend is available.
warnings.warn("No audio backend is available.")
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.3.0+cu121 with CUDA 1201 (you have 2.0.1+cpu)
Python 3.10.11 (you have 3.10.9)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Traceback (most recent call last):
File "D:\talkingface\V-Express\venv\lib\site-packages\diffusers\utils\import_utils.py", line 710, in get_module
return importlib.import_module("." + module_name, self.name)
File "D:\talkingface\V-Express\python\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\talkingface\V-Express\venv\lib\site-packages\diffusers\models\autoencoder_kl.py", line 22, in <module>
from .attention_processor import (
File "D:\talkingface\V-Express\venv\lib\site-packages\diffusers\models\attention_processor.py", line 31, in <module>
import xformers
File "D:\talkingface\V-Express\venv\lib\site-packages\xformers\__init__.py", line 12, in <module>
from .checkpoint import (  # noqa: E402, F401
File "D:\talkingface\V-Express\venv\lib\site-packages\xformers\checkpoint.py", line 464, in <module>
class SelectiveCheckpointWrapper(ActivationWrapper):
File "D:\talkingface\V-Express\venv\lib\site-packages\xformers\checkpoint.py", line 481, in SelectiveCheckpointWrapper
@torch.compiler.disable
AttributeError: module 'torch' has no attribute 'compiler'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\talkingface\V-Express\inference.py", line 10, in <module>
from diffusers import AutoencoderKL, DDIMScheduler
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "D:\talkingface\V-Express\venv\lib\site-packages\diffusers\utils\import_utils.py", line 701, in __getattr__
value = getattr(module, name)
File "D:\talkingface\V-Express\venv\lib\site-packages\diffusers\utils\import_utils.py", line 700, in __getattr__
module = self._get_module(self._class_to_module[name])
File "D:\talkingface\V-Express\venv\lib\site-packages\diffusers\utils\import_utils.py", line 712, in _get_module
raise RuntimeError(
RuntimeError: Failed to import diffusers.models.autoencoder_kl because of the following error (look up to see its traceback):
module 'torch' has no attribute 'compiler'
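The traceback and the xformers warning above point to a version mismatch: the installed xformers wheel was built against PyTorch 2.3.0 and uses `@torch.compiler.disable`, but the venv has torch 2.0.1+cpu, which (as far as I can tell from the PyTorch release history) predates the `torch.compiler` namespace introduced around 2.1.0. A stdlib-only sketch of that comparison (the `2.1.0` threshold is my assumption, not something stated in this thread):

```python
def parse_version(v: str) -> tuple:
    """Turn a PEP 440-style version like '2.0.1+cpu' into a comparable tuple.

    The local build tag after '+' ("cpu", "cu121", ...) does not affect
    which APIs exist, so it is stripped before comparing.
    """
    base = v.split("+")[0]
    return tuple(int(part) for part in base.split("."))


installed = "2.0.1+cpu"  # the torch version reported in the xformers warning above
required = "2.1.0"       # assumed first release shipping the torch.compiler namespace

needs_upgrade = parse_version(installed) < parse_version(required)
print(needs_upgrade)  # True: this torch build predates torch.compiler
```

So upgrading torch to the version xformers was built for (2.3.0, with a matching CUDA wheel) should resolve the `AttributeError`, rather than downgrading or reinstalling xformers alone.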