MV-Adapter: Multi-view Consistent Image Generation Made Easy Paper • 2412.03632 • Published Dec 4, 2024 • 23
FitDiT: Advancing the Authentic Garment Details for High-fidelity Virtual Try-on Paper • 2411.10499 • Published Nov 15, 2024 • 13
AnimateAnything: Consistent and Controllable Animation for Video Generation Paper • 2411.10836 • Published Nov 16, 2024 • 23
GaussianAnything: Interactive Point Cloud Latent Diffusion for 3D Generation Paper • 2411.08033 • Published Nov 12, 2024 • 22
MagicQuill: An Intelligent Interactive Image Editing System Paper • 2411.09703 • Published Nov 14, 2024 • 62
LLaMA-Mesh: Unifying 3D Mesh Generation with Language Models Paper • 2411.09595 • Published Nov 14, 2024 • 71
SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models Paper • 2411.05007 • Published Nov 7, 2024 • 16
SG-I2V: Self-Guided Trajectory Control in Image-to-Video Generation Paper • 2411.04989 • Published Nov 7, 2024 • 14
Adaptive Caching for Faster Video Generation with Diffusion Transformers Paper • 2411.02397 • Published Nov 4, 2024 • 23
MVPaint: Synchronized Multi-View Diffusion for Painting Anything 3D Paper • 2411.02336 • Published Nov 4, 2024 • 23
Fashion-VDM: Video Diffusion Model for Virtual Try-On Paper • 2411.00225 • Published Oct 31, 2024 • 9
HelloMeme: Integrating Spatial Knitting Attentions to Embed High-Level and Fidelity-Rich Conditions in Diffusion Models Paper • 2410.22901 • Published Oct 30, 2024 • 8
TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis Paper • 2307.15042 • Published Jul 27, 2023 • 7