LLAVADI: What Matters For Multimodal Large Language Models Distillation Paper • 2407.19409 • Published Jul 28, 2024
Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos Paper • 2501.04001 • Published Jan 2025
DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation Paper • 2412.07589 • Published Dec 2024
Mamba or RWKV: Exploring High-Quality and High-Efficiency Segment Anything Model Paper • 2406.19369 • Published Jun 27, 2024
OMG-LLaVA: Bridging Image-level, Object-level, Pixel-level Reasoning and Understanding Paper • 2406.19389 • Published Jun 27, 2024