Sri-Vigneshwar-DJ committed 22d2fea (verified) · Parent: 01a2ef6 · Create README.md
---
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
language:
- en
base_model:
- meta-llama/llama-3.3-70b
- google/siglip-so400m-patch14-384
---
# Llama3.3 70B VLM

Llama3.3 70B VLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks like image captioning, visual question answering, and storytelling based on visual content. The model does not support image generation.

To fine-tune Llama3.3 70B VLM on a specific task, you can follow the fine-tuning tutorial.
<!-- todo: add link to fine-tuning tutorial -->
## Technical Summary

Llama3.3 70B VLM leverages the powerful Llama-3.3-70B language model to provide a comprehensive multimodal experience. It introduces several changes compared to previous models:

- **Image compression:** We introduce more aggressive image compression so that the model infers faster and uses less RAM.
- **Visual token encoding:** Llama3.3 70B VLM uses 81 visual tokens to encode image patches of size 384×384. Larger images are divided into patches, each encoded separately, enhancing efficiency without compromising performance.

More details about the training and architecture are available in our technical report.
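The patch arithmetic above can be sketched as a back-of-the-envelope token-budget estimate. This assumes simple ceiling tiling into 384×384 patches with no extra global-view tokens; the model's exact tiling strategy may differ:

```python
import math

PATCH = 384            # patch side length handled by the vision encoder
TOKENS_PER_PATCH = 81  # visual tokens per 384x384 patch

def visual_token_count(width: int, height: int) -> int:
    """Rough visual-token budget, assuming ceiling tiling into 384x384 patches."""
    patches = math.ceil(width / PATCH) * math.ceil(height / PATCH)
    return patches * TOKENS_PER_PATCH

print(visual_token_count(384, 384))    # 81 (single patch)
print(visual_token_count(1536, 1536))  # 1296 (4x4 = 16 patches)
```

Under this estimate, a 1536×1536 input costs 16 patches, i.e. 1296 visual tokens in the context window.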
## How to get started

You can use transformers to load, run inference with, and fine-tune Llama3.3 70B VLM.

```python
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Load images
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://huggingface.co/spaces/merve/chameleon-7b/resolve/main/bee.jpg")

# Initialize processor and model
processor = AutoProcessor.from_pretrained("meta-llama/Llama3.3-70B-VLM-Instruct")
model = AutoModelForVision2Seq.from_pretrained(
    "meta-llama/Llama3.3-70B-VLM-Instruct",
    torch_dtype=torch.bfloat16,
    _attn_implementation="flash_attention_2" if DEVICE == "cuda" else "eager",
).to(DEVICE)

# Create input messages
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "Can you describe the two images?"}
        ]
    },
]

# Prepare inputs
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
inputs = inputs.to(DEVICE)

# Generate outputs
generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(
    generated_ids,
    skip_special_tokens=True,
)

print(generated_texts[0])
"""
Assistant: The first image shows a green statue of the Statue of Liberty standing on a stone pedestal in front of a body of water.
The statue is holding a torch in its right hand and a tablet in its left hand. The water is calm and there are no boats or other objects visible.
The sky is clear and there are no clouds. The second image shows a bee on a pink flower.
The bee is black and yellow and is collecting pollen from the flower. The flower is surrounded by green leaves.
"""
```
## Our Approach

### Instruct SAM
## Model optimizations

**Precision**: For better performance, load and run the model in half-precision (`torch.float16` or `torch.bfloat16`) if your hardware supports it.

```python
import torch
from transformers import AutoModelForVision2Seq

model = AutoModelForVision2Seq.from_pretrained(
    "meta-llama/Llama3.3-70B-VLM-Instruct",
    torch_dtype=torch.bfloat16
).to("cuda")
```
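To see why precision matters at this scale, a rough weight-memory estimate helps. This counts model weights only (activations and KV cache excluded) and assumes roughly 70B parameters:

```python
PARAMS = 70e9  # approximate parameter count of the 70B language model

def weight_gib(bytes_per_param: float) -> float:
    """Memory needed for the weights alone, in GiB."""
    return PARAMS * bytes_per_param / 2**30

print(f"bf16 (2 bytes/param):  {weight_gib(2):.0f} GiB")   # ~130 GiB
print(f"int8 (1 byte/param):   {weight_gib(1):.0f} GiB")   # ~65 GiB
print(f"nf4  (0.5 bytes/param): {weight_gib(0.5):.0f} GiB") # ~33 GiB
```

Even in half-precision the weights alone exceed a single 80 GB accelerator, which is why the quantization options below are often necessary.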
You can also load Llama3.3 70B VLM with 4-/8-bit quantization using bitsandbytes, torchao, or Quanto. Refer to this page for other options.

```python
import torch
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForVision2Seq.from_pretrained(
    "meta-llama/Llama3.3-70B-VLM-Instruct",
    quantization_config=quantization_config,
)
```
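The snippet above shows 8-bit loading; for the 4-bit path, a configuration along these lines is common with bitsandbytes (the NF4 settings below are illustrative choices, not requirements):

```python
import torch
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig

# Illustrative 4-bit (NF4) configuration; adjust for your hardware.
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForVision2Seq.from_pretrained(
    "meta-llama/Llama3.3-70B-VLM-Instruct",
    quantization_config=quantization_config,
)
```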
**Vision Encoder Efficiency**: Adjust the image resolution by setting `size={"longest_edge": N*384}` when initializing the processor, where `N` is your desired value. The default `N=4` works well, resulting in input images of size 1536×1536. For documents, `N=5` might be beneficial. Decreasing `N` can save GPU memory and is appropriate for lower-resolution images. This is also useful if you want to fine-tune on videos.
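As a concrete sketch, this knob is passed when creating the processor; `N=2` below is purely an example of a memory-saving value, not a recommendation:

```python
from transformers import AutoProcessor

N = 2  # smaller than the default N=4 to save GPU memory (illustrative)
processor = AutoProcessor.from_pretrained(
    "meta-llama/Llama3.3-70B-VLM-Instruct",
    size={"longest_edge": N * 384},
)
```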