Please note that our code is built on top of [TalkSHOW], [SHOW], and [LSP].
Clone the repo:

```bash
git clone https://github.com/LIRENDA621/2nd-place-Solution-for-AAAI2024-Workshop-AI-for-Digital-Human-Task4.git
```
### For TalkSHOW
Create the conda environment:

```bash
conda create --name talkshow python=3.7
conda activate talkshow
```
Please install PyTorch (v1.10.1).
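For example, the official v1.10.1 wheels can be installed with pip. The CUDA 11.3 build below is an assumption; pick the `+cuXXX` variant that matches your CUDA toolkit:

```bash
# PyTorch 1.10.1 with CUDA 11.3 (adjust the +cu113 suffix to your setup)
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 \
    -f https://download.pytorch.org/whl/cu113/torch_stable.html
```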
Then install the remaining dependencies:

```bash
pip install -r requirements.txt
```
Please install MPI-Mesh.
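MPI-Mesh here refers to the MPI-IS mesh-processing library. A minimal sketch of one common install route, assuming the Boost headers are already available on your system:

```bash
# MPI-IS/mesh requires the Boost headers (e.g. on Ubuntu: sudo apt-get install libboost-dev)
pip install git+https://github.com/MPI-IS/mesh.git
```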
### For LSP

```bash
pip install -r requirements.txt
```
### For SHOW & OpenPose

The environment dependencies are complex; please refer to [SHOW] for setup instructions.
### Steps for Training and Evaluation
Generate data for TalkSHOW and LSP:

```bash
cd SHOW/SHOW
sh multi_demo.sh
```
Train the TalkSHOW models:

```bash
# 1. Train the VQ-VAEs.
bash my_train_body_vq.sh

# 2. Train PixelCNN. Modify "Model:vq_path" in config/body_pixel.json
#    to point to the VQ-VAE checkpoint trained in step 1.
bash my_train_body_pixel.sh

# 3. Train the face generator.
bash my_train_face.sh
```
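For step 2, the `"Model:vq_path"` entry in `config/body_pixel.json` should point at the checkpoint produced by step 1. The fragment below is only an illustrative sketch; the placeholder path and surrounding keys in your config may differ:

```json
"Model": {
    "vq_path": "./experiments/body_vq/ckpt-best.pth"
}
```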
Infer (visualization / export mesh .obj files):

```bash
python scripts/diversity4single_wav.py
```
Please note that, due to company copyright issues, we cannot provide the training code.
Infer:

```bash
sh run_test_tmp.sh
```