---

license: llama2
datasets:
- liuhaotian/LLaVA-Instruct-150K
---


# Ada-LLaVA Model Card

__AdaLLaVA-H: This model is a variant of Ada-LLaVA-13B in which we drop attention heads and neurons.__

<!-- Provide a quick summary of what the model is/does. -->

Ada-LLaVA-13B-drop_heads (AdaLLaVA-H) is an open-source adaptive inference framework for multimodal Large Language Models (MLLMs) that dynamically adjusts its operations based on available computational resources and latency requirements. 
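To give a rough intuition for what "dropping attention heads under a latency budget" means, here is a toy sketch. It is **not** the AdaLLaVA implementation (AdaLLaVA uses a learned latency scheduler, described below); the greedy selection, the `head_importance`/`head_cost` inputs, and the `select_heads` helper are all illustrative assumptions.

```python
# Illustrative only: a toy sketch of masking attention heads to fit a latency budget.
# This is NOT the AdaLLaVA scheduler, which is learned end-to-end.
import torch

def select_heads(head_importance: torch.Tensor,
                 head_cost: torch.Tensor,
                 budget: float) -> torch.Tensor:
    """Greedily keep the most important heads whose total cost fits the budget.

    head_importance: (num_heads,) hypothetical importance scores.
    head_cost:       (num_heads,) hypothetical per-head latency cost.
    budget:          total latency budget for this layer.
    Returns a binary mask of shape (num_heads,), where 1 means keep the head.
    """
    mask = torch.zeros_like(head_importance)
    spent = 0.0
    for idx in torch.argsort(head_importance, descending=True):
        if spent + head_cost[idx].item() <= budget:
            mask[idx] = 1.0
            spent += head_cost[idx].item()
    return mask

# Toy usage: 8 heads, each costing 1 latency unit, with a budget of 5 units.
importance = torch.rand(8)
cost = torch.full((8,), 1.0)
mask = select_heads(importance, cost, budget=5.0)

# Dropped heads are zeroed out before the output projection, e.g.:
# per_head_out has shape (batch, num_heads, seq_len, head_dim)
per_head_out = torch.randn(2, 8, 16, 64)
masked_out = per_head_out * mask.view(1, -1, 1, 1)
```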







See the paper for more details: [Learning to Inference Adaptively for Multimodal Large Language Models]()



**Model details**: https://zhuoyan-xu.github.io/ada-llava/



## Model Details



<!-- Provide a longer summary of what this model is. -->



**Model Type:** Ada-LLaVA-13B follows the [LLaVA-v1.5](https://arxiv.org/abs/2310.03744) stage-2 training pipeline, with [CLIP-ViT-L-336px](https://huggingface.co/openai/clip-vit-large-patch14-336) as the visual encoder (336×336 image resolution), [Vicuna-v1.5-13B](https://huggingface.co/lmsys/vicuna-13b-v1.5) as the base LLM, a two-layer MLP as the vision-language connector, and a customized embedding model with an MLP as the latency scheduler.



It was trained with the same stage-2 pipeline as LLaVA:

Instruction tuning: freeze the vision encoder and train the remaining model with multimodal instruction-following data covering tabular and non-tabular tasks.



**Code Base:** We use the official code of [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) for model training and inference, and the saved model checkpoint is uploaded to this repository.
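As a minimal loading sketch, the checkpoint can be loaded with the official LLaVA-v1.5 codebase, assuming the LLaVA package is installed. The `model_path` below is a placeholder for this repository's id, and any extra arguments required by the latency scheduler are not shown here.

```python
# Minimal loading sketch using the official LLaVA-v1.5 codebase (pip install from the LLaVA repo).
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

# Placeholder: replace with the actual path or Hub id of this repository.
model_path = "path/to/Ada-LLaVA-13B-drop_heads"

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
```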



**Model Date:** Ada-LLaVA 13B was trained in Oct 2024.





## License



AdaLLaVA is based on LLaVA-1.5 and thus follows its license. Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.



## Intended use



**Primary intended uses:** The primary use of Ada-LLaVA is research on large multimodal models and chatbots, especially resource-constrained inference and deployment.



**Primary intended users:** The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.



## Training dataset

- 665K image-level instruction-tuning data from LLaVA-1.5 stage 2; see details in the original [LLaVA repo](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#visual-instruction-tuning).



## Limitations



While Ada-LLaVA is currently limited to processing one image at a time and only applies adaptive operations in the latter half of its layers, future work could explore multi-image input support and extend the adaptive mechanisms throughout the entire model architecture, including the vision encoder. These improvements would make the model more versatile and applicable to a broader range of real-world scenarios.