
Face Image Animation testing with our own pictures #71

Open · ms-ufad opened this issue Jan 20, 2021 · 7 comments

ms-ufad commented Jan 20, 2021

For the face image animation task, when testing with our own pictures and videos, we put the pictures in ./demo_results/face/, but the face in the output video did not change.

@RenYurui (Owner)

Hi,
Currently, our model cannot be used for arbitrary face image animation.
One way to improve it is to retrain the model on a larger face image dataset.
Meanwhile, removing the background edges from the input instruction may help improve performance on the cross-identity animation task.
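As an illustration, here is a minimal sketch of one way to suppress background edges before feeding the edge maps to the model. It assumes single-channel edge maps and uses OpenCV's Haar cascade face detector (not part of this repo) to keep only edges near the detected face; the file paths are illustrative.

```python
# Hypothetical helper: zero out edges outside the detected face region,
# so only facial edges drive the cross-identity animation.
import cv2
import numpy as np

def mask_background_edges(edge_map: np.ndarray, face_img: np.ndarray) -> np.ndarray:
    """Keep only the edges inside a slightly padded detected face box."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return edge_map  # no face found; leave the edge map unchanged
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detected face
    pad = int(0.15 * max(w, h))                         # small margin around the box
    mask = np.zeros_like(edge_map)
    mask[max(0, y - pad):y + h + pad, max(0, x - pad):x + w + pad] = 1
    return edge_map * mask

edge = cv2.imread("edge_frame.png", cv2.IMREAD_GRAYSCALE)  # illustrative path
frame = cv2.imread("driving_frame.png")                    # illustrative path
cv2.imwrite("edge_frame_masked.png", mask_background_edges(edge, frame))
```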

@RenYurui (Owner)

Datasets such as VoxCeleb can be used for this task.


ms-ufad commented Jan 20, 2021

Thank you very much for your reply. Maybe I didn't describe it clearly. Can I input an image and a video to replace the face in the video with the face in the image?


ms-ufad commented Jan 20, 2021

I see this sentence: “Given a source face and a sequence of edge images, our model generates the result video with specific motions.”
Where is the implementation code?

@RenYurui (Owner)

Hi, we generate new videos by extracting motions from the driving videos.
I think you may want something like face swapping; however, our model is a tool for face reenactment.
The difference is discussed in many papers, such as https://arxiv.org/abs/2008.02793.


ms-ufad commented Jan 28, 2021

```bash
python demo.py \
    --name=face_checkpoints \
    --model=face \
    --attn_layer=2,3 \
    --kernel_size=2=5,3=3 \
    --gpu_id=0 \
    --dataset_mode=face \
    --dataroot=./dataset/FaceForensics \
    --results_dir=./demo_results/face
```

Hi, I ran the command above on a reading video I recorded myself and obtained a face motion sketch for each frame, plus a video.
My question now is: how do I use these motion images together with a different target face image to generate a video of the target face, the way a video sequence is generated from human-body motion images?
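For what it's worth, a rough sketch of what such a reenactment loop could look like, assuming a pretrained generator that takes a reference image plus one edge map per frame. The checkpoint file, the generator's call signature, and the directory layout below are hypothetical, not this repo's actual API:

```python
# Hypothetical reenactment loop: animate a target face using pre-extracted
# edge/motion sketches. Names marked "hypothetical" are illustrative only.
import glob
import torch
from PIL import Image
import torchvision.transforms as T

to_tensor = T.Compose([T.Resize((256, 256)), T.ToTensor()])

def load(path):
    return to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)

# Hypothetical: a scripted generator with signature G(reference, edge) -> frame.
G = torch.jit.load("face_generator.pt").eval()   # hypothetical checkpoint

reference = load("target_face.png")              # the face you want to animate
frames = []
with torch.no_grad():
    for edge_path in sorted(glob.glob("demo_results/face/edges/*.png")):
        edge = load(edge_path)                   # one motion sketch per frame
        frames.append(G(reference, edge))        # generated frame of the target face
# The frames can then be written out with e.g. torchvision.io.write_video.
```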

@Zhenggen-deng

> (quoting @ms-ufad's command and question above)

Has your problem been solved?
