---
license: apache-2.0
datasets:
  - AIDC-AI/Ovis-dataset
library_name: transformers
tags:
  - MLLM
pipeline_tag: image-text-to-text
---

## Introduction

Ovis is a novel Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings. For a comprehensive introduction, please refer to the [Ovis paper](https://arxiv.org/abs/2405.20797) and the [Ovis GitHub repository](https://github.com/AIDC-AI/Ovis).
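At a high level, Ovis mirrors the textual embedding lookup on the visual side: each image patch is mapped to a probability distribution over a learnable visual vocabulary, and the patch embedding is the probability-weighted combination of rows in a visual embedding table. The sketch below illustrates this idea only; the module and dimension names are placeholders and not the actual Ovis implementation (see the paper and repository for the real code).

```python
import torch
import torch.nn as nn

# Illustrative sketch of "structural embedding alignment".
# Names and dimensions are placeholders, not the actual Ovis implementation.
class ToyVisualEmbedding(nn.Module):
    def __init__(self, visual_vocab_size: int = 1024, embed_dim: int = 4096):
        super().__init__()
        # Visual embedding table, structurally analogous to a text token embedding table.
        self.visual_embedding_table = nn.Embedding(visual_vocab_size, embed_dim)

    def forward(self, patch_logits: torch.Tensor) -> torch.Tensor:
        # patch_logits: (num_patches, visual_vocab_size) scores over the visual vocabulary.
        probs = patch_logits.softmax(dim=-1)  # probabilistic visual "tokens"
        # Each patch embedding is a probability-weighted mix of table rows,
        # mirroring how a text token indexes one row of the textual embedding table.
        return probs @ self.visual_embedding_table.weight  # (num_patches, embed_dim)
```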

## Model

As always, Ovis1.5 remains fully open source: we release the training datasets, the training and inference code, and the model weights to support reproducibility and community collaboration.

|                      | MiniCPM-Llama3-V2.5 | Ovis1.5-Llama3-8B   |
|:---------------------|:-------------------:|:-------------------:|
| Training scripts     | -                   | [Github](https://github.com/AIDC-AI/Ovis) |
| ViT                  | Siglip-400M         | Siglip-400M         |
| LLM                  | Llama3-8B-Instruct  | Llama3-8B-Instruct  |
| MMTBench-VAL         | 57.6                | 60.7                |
| MMBench-EN-V1.1      | 74                  | 78.2                |
| MMBench-CN-V1.1      | 70.1                | 75.2                |
| MMStar               | 51.8                | 57.2                |
| MMMU-Val             | 45.8                | 48.6                |
| MathVista-Mini       | 54.3                | 62.4                |
| HallusionBench (avg) | 42.4                | 44.5                |
| AI2D                 | 78.4                | 82.5                |
| OCRBench             | 725                 | 743                 |
| MMVet                | 52.8                | 52.2                |
| RealWorldQA          | 63.5                | 64.6                |

## Usage

Below is a code snippet for running Ovis with multimodal inputs. For additional usage instructions, including the inference wrapper and the Gradio UI, please refer to the [Ovis GitHub repository](https://github.com/AIDC-AI/Ovis).

```bash
pip install torch==2.1.0 transformers==4.42.4 deepspeed==0.14.0 pillow==10.3.0
```
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM

# load model
model = AutoModelForCausalLM.from_pretrained("AIDC-AI/Ovis1.5-Llama3-8B",
                                             torch_dtype=torch.bfloat16,
                                             multimodal_max_length=8192,
                                             trust_remote_code=True).cuda()
text_tokenizer = model.get_text_tokenizer()
visual_tokenizer = model.get_visual_tokenizer()
conversation_formatter = model.get_conversation_formatter()

# enter image path and prompt
image_path = input("Enter image path: ")
image = Image.open(image_path)
text = input("Enter prompt: ")
query = f'<image>\n{text}'

# format the query and prepare model inputs
prompt, input_ids = conversation_formatter.format_query(query)
input_ids = torch.unsqueeze(input_ids, dim=0).to(device=model.device)
attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id).to(device=model.device)
pixel_values = [visual_tokenizer.preprocess_image(image).to(
    dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]

# generate output (greedy decoding)
with torch.inference_mode():
    gen_kwargs = dict(
        max_new_tokens=1024,
        do_sample=False,
        top_p=None,
        top_k=None,
        temperature=None,
        repetition_penalty=None,
        eos_token_id=model.generation_config.eos_token_id,
        pad_token_id=text_tokenizer.pad_token_id,
        use_cache=True
    )
    output_ids = model.generate(input_ids, pixel_values=pixel_values, attention_mask=attention_mask, **gen_kwargs)[0]
    output = text_tokenizer.decode(output_ids, skip_special_tokens=True)
    print(f'Output: {output}')
```
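The snippet above uses greedy decoding. If you prefer sampled outputs, the same `model.generate` call accepts the standard `transformers` sampling arguments. Below is a minimal sketch under that assumption; the temperature and top-p values are illustrative, not recommendations from the authors.

```python
# Minimal sketch: sampled decoding instead of greedy decoding.
# The temperature / top_p values are illustrative assumptions, not official defaults.
with torch.inference_mode():
    sample_kwargs = dict(
        max_new_tokens=1024,
        do_sample=True,      # enable stochastic sampling
        temperature=0.7,
        top_p=0.9,
        eos_token_id=model.generation_config.eos_token_id,
        pad_token_id=text_tokenizer.pad_token_id,
        use_cache=True
    )
    sampled_ids = model.generate(input_ids, pixel_values=pixel_values,
                                 attention_mask=attention_mask, **sample_kwargs)[0]
    print(text_tokenizer.decode(sampled_ids, skip_special_tokens=True))
```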

## Citation

If you find Ovis useful, please cite the paper:

```bibtex
@article{lu2024ovis,
  title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model},
  author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
  year={2024},
  journal={arXiv:2405.20797}
}
```

## License

The project is licensed under the Apache License 2.0 and is restricted to uses that comply with the license agreements of Qwen, Llama3, CLIP, and SigLIP.