belkhale committed
Commit 3822264 · verified · 1 Parent(s): e6dcc7c

Create README.md

Files changed (1):
1. README.md +40 -0
README.md ADDED
---
library_name: transformers
tags:
- robotics
- vla
- image-text-to-text
- multimodal
- pretraining
license: mit
language:
- en
pipeline_tag: image-text-to-text
---

# MiniVLA 1B (Prismatic-Compatible Version)

<b>This checkpoint is in a format that is compatible with the training script from the original [Prismatic VLMs project codebase](https://github.com/TRI-ML/prismatic-vlms), which the OpenVLA team built on top of to develop the OpenVLA model.</b>

This Prismatic-compatible checkpoint may be useful if you wish to <b>fully fine-tune</b> MiniVLA (all 1 billion parameters) via native PyTorch Fully Sharded Data Parallel (FSDP) using the Prismatic VLMs training script. If you instead wish to do Parameter-Efficient Fine-Tuning via LoRA, you can use the standard MiniVLA checkpoint, which is compatible with the Hugging Face `transformers` library. We recommend fine-tuning via LoRA if you do not have sufficient compute to fully fine-tune a 1B-parameter model (e.g., multiple A100/H100 GPUs).
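
As a rough illustration of the LoRA route, below is a minimal sketch using the Hugging Face `peft` library. The repository id and the LoRA hyperparameters are illustrative placeholders, not values prescribed by this card:

```python
# Minimal LoRA fine-tuning setup for a transformers-compatible MiniVLA
# checkpoint. NOTE: the repo id below is a placeholder -- substitute the
# actual Hugging Face MiniVLA checkpoint you intend to fine-tune.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForVision2Seq

vla = AutoModelForVision2Seq.from_pretrained(
    "Stanford-ILIAD/minivla-1b",  # placeholder repo id
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Attach low-rank adapters to every linear layer; only the adapter
# weights (a small fraction of the 1B parameters) are trained.
lora_config = LoraConfig(
    r=32,  # example rank, not a prescribed value
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules="all-linear",
)
vla = get_peft_model(vla, lora_config)
vla.print_trainable_parameters()
```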

## Usage Instructions

See the [MiniVLA GitHub README](https://github.com/Stanford-ILIAD/openvla-mini/blob/main/README.md) for instructions on how to use this checkpoint for full fine-tuning.
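
For orientation, here is a hedged sketch of how Prismatic-format checkpoints are loaded in the OpenVLA family of codebases. It assumes openvla-mini keeps the `load_vla` helper inherited from OpenVLA, and the checkpoint path is a placeholder; defer to the linked README for the supported workflow:

```python
# Hedged sketch: loading a Prismatic-format VLA checkpoint for training.
# Assumes openvla-mini inherits OpenVLA's `load_vla` helper; the path
# below is a placeholder for wherever you downloaded this checkpoint.
import torch
from prismatic.models import load_vla

vla = load_vla(
    "path/to/minivla-1b-prismatic/checkpoints/latest-checkpoint.pt",
    load_for_training=True,  # leave weights unfrozen for full fine-tuning
)
vla = vla.to("cuda", dtype=torch.bfloat16)
```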

## Citation

**BibTeX:**

```bibtex
@article{belkhale24minivla,
    title={MiniVLA: A Better VLA with a Smaller Footprint},
    author={Suneel Belkhale and Dorsa Sadigh},
    url={https://github.com/Stanford-ILIAD/openvla-mini},
    year={2024}
}
```