|
--- |
|
pipeline_tag: image-text-to-text |
|
--- |
|
<br> |
|
<br> |
|
|
|
# Math-LLaVA-13B Model Card |
|
|
|
## Model details |
|
|
|
**Model type:** |
|
Math-LLaVA is an open-source multimodal large language model (MLLM) obtained by fine-tuning LLaVA-1.5-13B on selected and GPT-4V-assisted synthesized data from [MathV360K](https://huggingface.co/datasets/Zhiqiang007/MathV360K/tree/main). |
|
|
|
**Model date:** |
|
Math-LLaVA-13B was trained in June 2024. |
|
|
|
**Paper or resources for more information:** |
|
[[Paper](http://arxiv.org/abs/2406.17294)] [[Code](https://github.com/HZQ950419/Math-LLaVA)] |
|
|