I used both SHOW and TALKSHOW.
I had a video that I split into 9-second segments and ran through SHOW to generate a mesh video for each segment. For each segment, I then fed the resulting final_all.pkl files and new audio (WAV) files into TALKSHOW and trained three models: body-pixel, face, and body-vq. However, the mesh videos generated after training don't match what I got from SHOW, and they don't accurately reflect how the person is positioned in the original video given to SHOW; because of this, the anchor output videos are distorted. What could the issue be, and how can I train TALKSHOW so it produces exactly the same mesh videos and mesh positions as SHOW, on which the UNet and ControlNet are trained?
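For context, here is a minimal sketch of the kind of check I mean: comparing the global positioning parameters in SHOW's final_all.pkl against TalkSHOW's generated output, since a mismatch there would explain the shifted mesh. The key names (global_orient, transl) and file paths are only guesses based on SMPL-X conventions, not verified against either repo:

```python
# Hypothetical diagnostic sketch: compare the global positioning parameters
# stored in SHOW's final_all.pkl with the parameters TalkSHOW generates.
# Key names ('global_orient', 'transl') are assumptions based on SMPL-X
# conventions -- check them against the actual keys in your pkl files.
import pickle

import numpy as np


def load_params(path):
    """Load a pickled parameter dict (final_all.pkl or TalkSHOW output)."""
    with open(path, "rb") as f:
        return pickle.load(f)


def compare_params(show_pkl, talkshow_pkl, keys=("global_orient", "transl")):
    show = load_params(show_pkl)
    gen = load_params(talkshow_pkl)
    for key in keys:
        if key not in show or key not in gen:
            print(f"{key}: missing in one of the files")
            continue
        a, b = np.asarray(show[key]), np.asarray(gen[key])
        n = min(len(a), len(b))  # frame counts may differ between the two
        diff = np.abs(a[:n] - b[:n]).mean()
        print(f"{key}: mean abs difference over {n} frames = {diff:.4f}")


# Placeholder paths -- substitute your actual SHOW and TalkSHOW outputs.
compare_params("show_out/final_all.pkl", "talkshow_out/generated.pkl")
```

If these parameters differ between the two outputs, that would suggest TalkSHOW is not preserving (or not trained on) the global orientation/translation from SHOW, which would have to be copied over at render time.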
Hello, how can I get mesh videos from TALKSHOW that resemble the pose and person used to train the UNet and ControlNet in this repo? When the mesh from TALKSHOW is used for inference, we get a distorted video output.