Bring portraits to life!
Text and image to video generation: CogVideoX (2024) and CogVideo (ICLR 2023)
HunyuanVideo: A Systematic Framework For Large Video Generative Models
VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance
A curated list of recent diffusion models for video generation, editing, and various other applications.
[ICCV 2023 Oral] Text-to-Image Diffusion Models are Zero-Shot Video Generators
InternGPT (iGPT) is an open-source demo platform where you can easily showcase your AI models. It now supports DragGAN, ChatGPT, ImageBind, multimodal chat like GPT-4, SAM, interactive image editing, etc. Try it at igpt.opengvlab.com (an online demo system supporting DragGAN, ChatGPT, ImageBind, and SAM)
[ICLR 2025] Pyramidal Flow Matching for Efficient Video Generative Modeling
[ECCV 2024, Oral] DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors
MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising
High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance
A Python tool that uses GPT-4, FFmpeg, and OpenCV to automatically analyze videos, extract the most interesting sections, and crop them for an improved viewing experience (a pipeline sketch follows after this list).
Official implementation of the paper "DreamTalk: When Expressive Talking Head Generation Meets Diffusion Probabilistic Models"
[AAAI 2024] Follow-Your-Pose: official implementation of "Follow-Your-Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos"
MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators
Implementation of Video Diffusion Models, Jonathan Ho's paper extending DDPMs to video generation, in PyTorch (the forward-noising idea is sketched after this list).
MiniSora: A community project that aims to explore the implementation path and future development direction of Sora.
Code for the paper "Motion Representations for Articulated Animation"
Quantized Attention that achieves speedups of 2.1-3.1x and 2.7-5.1x compared to FlashAttention2 and xformers, respectively, without losing end-to-end metrics across various models (a toy illustration of the quantization idea follows after this list).
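
For the GPT-4/FFmpeg/OpenCV clipping tool listed above, here is a minimal sketch of how such a pipeline can be structured: frames are sampled with OpenCV, windows are scored with a simple motion heuristic (a stand-in for the tool's GPT-4 analysis step), and the best window is cut with ffmpeg. All function names and parameters here are illustrative assumptions, not the tool's actual API.

```python
import subprocess
import cv2          # OpenCV for frame sampling
import numpy as np

def score_windows(path, window_sec=10, sample_fps=1):
    """Score fixed-length windows by average frame-to-frame difference.

    This motion heuristic is only a stand-in for the GPT-4 analysis step
    described above; the real tool scores content differently.
    """
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(int(round(fps / sample_fps)), 1)
    prev, scores, idx = None, {}, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(cv2.resize(frame, (160, 90)), cv2.COLOR_BGR2GRAY)
            if prev is not None:
                window = int((idx / fps) // window_sec)
                diff = float(np.mean(cv2.absdiff(gray, prev)))
                scores[window] = scores.get(window, 0.0) + diff
            prev = gray
        idx += 1
    cap.release()
    return scores

def cut_best_window(path, out_path, window_sec=10):
    """Re-encode the highest-scoring window into its own clip via ffmpeg."""
    scores = score_windows(path, window_sec)
    start = max(scores, key=scores.get) * window_sec
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-t", str(window_sec),
         "-i", path, "-c:v", "libx264", "-c:a", "aac", out_path],
        check=True,
    )

# cut_best_window("input.mp4", "highlight.mp4")
```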
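
For the Video Diffusion Models entry, the core idea of extending DDPMs to video is that the closed-form forward noising q(x_t | x_0) is unchanged; the data tensor simply gains a frame axis. Below is a generic sketch of that forward process with an assumed cosine schedule and assumed tensor shapes, not that repository's API.

```python
import torch

def cosine_alpha_bar(t, T, s=0.008):
    """Cumulative noise schedule (cosine schedule, Nichol & Dhariwal)."""
    f = lambda u: torch.cos((u / T + s) / (1 + s) * torch.pi / 2) ** 2
    return f(t) / f(torch.zeros_like(t))

def q_sample(x0, t, T=1000):
    """Sample x_t ~ q(x_t | x_0) for a batch of videos.

    x0: (batch, channels, frames, height, width). The only change from the
    image case is the extra `frames` axis; the noising math is identical.
    """
    alpha_bar = cosine_alpha_bar(t.float(), T).view(-1, 1, 1, 1, 1)
    noise = torch.randn_like(x0)
    xt = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
    return xt, noise  # a denoiser is trained to predict `noise` from (xt, t)

# Example: noise a batch of two 8-frame, 64x64 RGB clips at random timesteps.
videos = torch.randn(2, 3, 8, 64, 64)   # stand-in for real training clips
t = torch.randint(0, 1000, (2,))
noisy, eps = q_sample(videos, t)
```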
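
For the quantized-attention entry, the following is a didactic sketch of the general idea: quantize Q and K to INT8 with per-tensor scales, form the score matrix from the quantized values, then dequantize before the softmax. This is not that project's kernel and will not reproduce the quoted speedups, which come from fused low-precision GPU matmuls; it only demonstrates the numerics of the approximation.

```python
import torch

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: x ≈ scale * x_q."""
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    x_q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return x_q, scale

def int8_attention(q, k, v):
    """Attention with INT8-quantized Q/K scores (reference implementation).

    q, k, v: (batch, heads, seq, dim). The matmul is emulated in float here;
    real kernels perform it in INT8/INT32 on GPU for the speedup.
    """
    q_q, q_scale = quantize_int8(q)
    k_q, k_scale = quantize_int8(k)
    scores = q_q.float() @ k_q.float().transpose(-1, -2)
    scores = scores * (q_scale * k_scale) / (q.shape[-1] ** 0.5)
    return torch.softmax(scores, dim=-1) @ v

# Compare against full-precision attention on random inputs.
q, k, v = (torch.randn(1, 4, 128, 64) for _ in range(3))
ref = torch.softmax(q @ k.transpose(-1, -2) / 64 ** 0.5, dim=-1) @ v
approx = int8_attention(q, k, v)
print((ref - approx).abs().max())  # small error despite 8-bit Q/K
```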