LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token
Abstract
The advent of real-time large multimodal models (LMMs) like GPT-4o has sparked considerable interest in efficient LMMs. LMM frameworks typically encode visual inputs into vision tokens (continuous representations) and integrate them, together with textual instructions, into the context of large language models (LLMs), where large-scale parameters and numerous context tokens (predominantly vision tokens) result in substantial computational overhead. Previous efforts toward efficient LMMs typically focus on replacing the LLM backbone with smaller models, while neglecting the crucial issue of token quantity. In this paper, we introduce LLaVA-Mini, an efficient LMM with minimal vision tokens. To achieve a high compression ratio of vision tokens while preserving visual information, we first analyze how LMMs understand vision tokens and find that most vision tokens play a crucial role only in the early layers of the LLM backbone, where they mainly fuse visual information into text tokens. Building on this finding, LLaVA-Mini introduces modality pre-fusion to fuse visual information into text tokens in advance, thereby enabling extreme compression of the vision tokens fed to the LLM backbone into a single token. LLaVA-Mini is a unified large multimodal model that supports efficient understanding of images, high-resolution images, and videos. Experiments across 11 image-based and 7 video-based benchmarks demonstrate that LLaVA-Mini outperforms LLaVA-v1.5 while using just 1 vision token instead of 576. Efficiency analyses reveal that LLaVA-Mini reduces FLOPs by 77%, delivers low-latency responses within 40 milliseconds, and can process over 10,000 frames of video on GPU hardware with 24 GB of memory.
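As a rough illustration of the architecture described in the abstract, the sketch below is not the authors' implementation: the module names, the use of cross-attention for pre-fusion, and the hyperparameters are my assumptions. It only shows the two ideas in shape: visual information is fused into text tokens before the LLM, and a learnable query compresses the 576 vision tokens down to one.

```python
import torch
import torch.nn as nn

class PreFusionAndCompression(nn.Module):
    """Illustrative sketch (not the authors' code) of (a) modality pre-fusion,
    which folds visual information into text tokens before the LLM, and
    (b) query-based compression of the vision tokens into a single token."""

    def __init__(self, dim: int, num_heads: int, num_queries: int = 1):
        super().__init__()
        # (a) pre-fusion: text tokens attend to vision tokens
        self.pre_fusion = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # (b) compression: a learnable query attends to vision tokens
        self.query = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.compress = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vision_tokens, text_tokens):
        # vision_tokens: (B, 576, dim), text_tokens: (B, T, dim)
        fused_text, _ = self.pre_fusion(text_tokens, vision_tokens, vision_tokens)
        fused_text = fused_text + text_tokens  # residual keeps the original text content
        queries = self.query.expand(vision_tokens.size(0), -1, -1)
        compressed, _ = self.compress(queries, vision_tokens, vision_tokens)
        # The LLM backbone then sees [compressed (1 token); fused text] instead of 576 vision tokens.
        return torch.cat([compressed, fused_text], dim=1)


# Toy usage with small dimensions: 576 patch tokens and 32 text tokens
x_v = torch.randn(2, 576, 512)
x_t = torch.randn(2, 32, 512)
out = PreFusionAndCompression(dim=512, num_heads=8)(x_v, x_t)
print(out.shape)  # torch.Size([2, 33, 512])
```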
Community
LLaVA-Mini is a unified large multimodal model that supports efficient understanding of images, high-resolution images, and videos. Guided by an interpretability analysis of the LMM, LLaVA-Mini significantly improves efficiency while preserving vision capabilities.
LLaVA-Mini requires only 1 token to represent each image, which improves the efficiency of image and video understanding:
- Computational effort: 77% FLOPs reduction
- Response latency: reduced from about 100 ms to about 40 ms
- VRAM usage: reduced from 360 MB/image to 0.6 MB/image, enabling 3-hour video processing (a rough estimate of this saving is sketched after this list)
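The per-image memory figure can be sanity-checked with a back-of-envelope estimate of the KV cache, shown below. The assumptions (an fp16 KV cache and a Vicuna-7B-like backbone with 32 layers and hidden size 4096) are mine, not the paper's, and the reported 360 MB / 0.6 MB likely include overheads not modeled here; the point is simply that per-image memory scales linearly with the number of vision tokens.

```python
# Rough back-of-envelope estimate, not the paper's measurement.
# Assumptions: fp16 KV cache, a 7B backbone with 32 layers and hidden size 4096.
def kv_cache_mb(num_tokens, num_layers=32, hidden=4096, bytes_per_elem=2):
    # each cached token stores one key and one value vector per layer
    return num_tokens * num_layers * 2 * hidden * bytes_per_elem / 1e6

print(kv_cache_mb(576))  # ~302 MB per image with 576 vision tokens (LLaVA-v1.5)
print(kv_cache_mb(1))    # ~0.5 MB per image with a single vision token (LLaVA-Mini)
```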
That’s a strange way to frame their approach. Is it actually compressing the visual tokens into a single token? I think the framing should be “you can drop the visual tokens after the first few attention blocks”. The visual info is likely “compressed” into the full set of language tokens rather than this single token. I wonder why they don’t report the performance with “0 vision tokens”.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Efficient Multi-modal Large Language Models via Visual Token Grouping (2024)
- LinVT: Empower Your Image-level Large Language Model to Understand Videos (2024)
- Multimodal Instruction Tuning with Hybrid State Space Models (2024)
- TimeMarker: A Versatile Video-LLM for Long and Short Video Understanding with Superior Temporal Localization Ability (2024)
- AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning (2024)
- Enhancing Instruction-Following Capability of Visual-Language Models by Reducing Image Redundancy (2024)
- ATP-LLaVA: Adaptive Token Pruning for Large Vision Language Models (2024)