
2nd-place-Solution-for-AAAI2024-Workshop-AI-for-Digital-Human-Task4

[Challenge]

Please note that our code is built on top of [TalkSHOW], [SHOW], and [LSP].

Setup environment

Clone the repo:

git clone https://github.com/LIRENDA621/2nd-place-Solution-for-AAAI2024-Workshop-AI-for-Digital-Human-Task4.git

For TalkSHOW

Create conda environment:

conda create --name talkshow python=3.7
conda activate talkshow

Please install PyTorch (v1.10.1).

pip install -r requirements.txt

Please install MPI-Mesh.

For LSP

pip install -r requirements.txt

For SHOW & OpenPose

The environment dependencies are complex; please refer to [SHOW].

Usage

Steps for Training and Evaluation:

Data preprocess

Generate data for TalkSHOW and LSP

cd SHOW/SHOW
sh multi_demo.sh
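multi_demo.sh batches SHOW's fitting over a directory of clips. For readers who prefer Python, an equivalent batch driver could be sketched as follows; the per-clip command line is a placeholder assumption, not SHOW's actual interface, so check multi_demo.sh for the real invocation:

```python
import subprocess
from pathlib import Path

def run_per_clip(video_dir, cmd_template):
    """Run one command per .mp4 clip, substituting {clip} with the file path.

    This mirrors what a batch script such as multi_demo.sh typically does;
    the real SHOW command line is an assumption -- check the script itself.
    """
    processed = []
    for clip in sorted(Path(video_dir).glob("*.mp4")):
        cmd = [arg.format(clip=clip) for arg in cmd_template]
        subprocess.run(cmd, check=True)  # stop on the first failing clip
        processed.append(clip.name)
    return processed
```

A hypothetical call would look like `run_per_clip("data/videos", ["python", "demo.py", "--all_top_dir", "{clip}"])`, where the script name and flag are illustrative only.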

Audio2gesture

# 1. Train VQ-VAEs.
bash my_train_body_vq.sh
# 2. Train PixelCNN. Please modify "Model:vq_path" in config/body_pixel.json to the path of the trained VQ-VAEs.
bash my_train_body_pixel.sh
# 3. Train the face generator.
bash my_train_face.sh
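Step 2 requires pointing config/body_pixel.json at the VQ-VAE checkpoint produced in step 1. You can edit the file by hand, or do it programmatically with a small helper; the `Model` -> `vq_path` nesting below is inferred from the comment above and may need adjusting if your config differs:

```python
import json
from pathlib import Path

def set_vq_path(config_file, vq_ckpt):
    """Rewrite "Model": {"vq_path": ...} in a TalkSHOW-style JSON config."""
    path = Path(config_file)
    cfg = json.loads(path.read_text())
    cfg.setdefault("Model", {})["vq_path"] = vq_ckpt
    path.write_text(json.dumps(cfg, indent=2))
```

For example, `set_vq_path("config/body_pixel.json", "ckpt/body_vq.pth")` (checkpoint path is illustrative).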

Inference (visualization / exporting mesh .obj files)
python scripts/diversity4single_wav.py
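The inference script exports meshes as .obj files. For reference, the Wavefront OBJ format it produces is simple enough to write by hand; the following is a generic sketch of such an exporter, not the repo's actual code:

```python
def write_obj(path, vertices, faces):
    """Write a triangle mesh as a Wavefront .obj file.

    vertices: iterable of (x, y, z) floats
    faces: iterable of (i, j, k) 0-based vertex indices
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for i, j, k in faces:
            # OBJ face indices are 1-based
            f.write(f"f {i + 1} {j + 1} {k + 1}\n")
```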

gesture2video

Please note that, due to company copyright issues, we cannot provide the training code.

Inference
sh run_test_tmp.sh
