---
license: apache-2.0
---
# Model Card for Cerebras-ViT-L-336-patch14-llava7b-ShareGPT4V

The checkpoints here are for the vision encoder part of **cerebras/Cerebras-LLaVA-7B**.
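
As a quick orientation, the sketch below loads the vision encoder with Hugging Face `transformers` and inspects its configuration. It assumes the checkpoint is published under the repository ID `cerebras/Cerebras-ViT-L-336-patch14-llava7b-ShareGPT4V` and is compatible with the standard `CLIPVisionModel` class; both are assumptions based on this card, not guarantees.

```python
# Minimal sketch: load the vision encoder with Hugging Face transformers.
# The repo ID below is assumed from the model card title; adjust if it differs.
from transformers import CLIPImageProcessor, CLIPVisionModel

repo_id = "cerebras/Cerebras-ViT-L-336-patch14-llava7b-ShareGPT4V"  # assumed repo ID

model = CLIPVisionModel.from_pretrained(repo_id)
processor = CLIPImageProcessor.from_pretrained(repo_id)

# The config should reflect the CLIP ViT-L/14-336 backbone described below.
cfg = model.config
print(cfg.image_size, cfg.patch_size, cfg.hidden_size)  # expected: 336 14 1024
```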

**Note**: _ShareGPT4V_ is added to the model name to ensure the checkpoints load correctly through the [LLaVA source repo](https://github.com/haotian-liu/LLaVA/blob/main/llava/model/multimodal_encoder/builder.py#L8).
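
For context, the linked builder dispatches on the vision tower name. The snippet below is a simplified paraphrase of that check, not the upstream code verbatim (which may change); it only illustrates why the `ShareGPT4V` substring lets this checkpoint be picked up as a CLIP-style vision tower.

```python
# Simplified, illustrative paraphrase of LLaVA's vision-tower name check.
# See the linked builder.py for the authoritative logic.
def is_clip_vision_tower(name: str) -> bool:
    # Names starting with "openai"/"laion" or containing "ShareGPT4V" are
    # treated as CLIP-style vision towers, which is how this checkpoint loads.
    return name.startswith("openai") or name.startswith("laion") or "ShareGPT4V" in name

print(is_clip_vision_tower("cerebras/Cerebras-ViT-L-336-patch14-llava7b-ShareGPT4V"))  # True
```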

For full details of this model and its training, please read our paper and release blog post, **to be released shortly**.

# Model Architecture
Cerebras-ViT-L-336-patch14-llava7b-ShareGPT4V is a transformer model based on CLIP-VisionModel-Large (openai/clip-vit-large-patch14-336). It handles images of size 336 x 336 with a patch size of 14.
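
To make the geometry concrete, a 336 x 336 input split into 14 x 14 patches gives a 24 x 24 grid, i.e. 576 patch tokens (plus one class token in the CLIP encoder). The hedged sketch below runs a dummy image through the encoder and checks those shapes, again assuming the repository ID from the card title and standard `CLIPVisionModel` compatibility.

```python
# Sketch: verify the 336 / 14 -> 24 x 24 patch grid on a dummy image.
# Repo ID is assumed from the model card title.
import torch
from PIL import Image
from transformers import CLIPImageProcessor, CLIPVisionModel

repo_id = "cerebras/Cerebras-ViT-L-336-patch14-llava7b-ShareGPT4V"  # assumed
model = CLIPVisionModel.from_pretrained(repo_id)
processor = CLIPImageProcessor.from_pretrained(repo_id)

image = Image.new("RGB", (336, 336))                # dummy 336 x 336 input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

grid = 336 // 14                                    # 24 patches per side
print(grid * grid)                                  # 576 patch tokens
print(outputs.last_hidden_state.shape)              # (1, 577, 1024): 576 patches + 1 class token
```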