JustinLin610 committed
Commit 85e71ad · verified · 1 Parent(s): 334a500

Update README.md

Files changed (1):
  1. README.md +2 -206
README.md CHANGED
@@ -8,11 +8,11 @@ tags:
  library_name: transformers
  ---

- # Qwen2-VL-7B-Base
+ # Qwen2-VL-7B

  ## Introduction

- We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation.
+ We're excited to unveil **Qwen2-VL**, the latest iteration of our Qwen-VL model, representing nearly a year of innovation. This is the base pretrained model of Qwen2-VL without instruction tuning.

  ### What’s New in Qwen2-VL?

@@ -47,210 +47,6 @@ The code of Qwen2-VL has been in the latest Hugging face transformers and we adv
  KeyError: 'qwen2_vl'
  ```

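The context truncated in the hunk header above advises installing `transformers` from source so that the `qwen2_vl` architecture is registered. A minimal sketch of that fix, assuming a pip-based environment:

```bash
# Install transformers from the main branch so the qwen2_vl model type is available
pip install git+https://github.com/huggingface/transformers
```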
- ## Quickstart
-
- Here we show a code snippet demonstrating how to use the chat model with `transformers`:
-
- ### Single Media Inference
-
- The model can accept both images and videos as input. Here's example code for inference.
-
- ```python
- from PIL import Image
- import requests
- import torch
- from torchvision import io
- from typing import Dict
- from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
-
- # Load the model in half-precision on the available device(s)
- model = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-7B-Base", torch_dtype="auto", device_map="auto")
- processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Base")
-
- # Image
- url = "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"
- image = Image.open(requests.get(url, stream=True).raw)
-
- conversation = [
-     {
-         "type": "image",
-     },
-     {
-         "type": "text",
-         "text": "In this image,"
-     }
- ]
-
-
- # Preprocess the inputs
- text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
- # Expected output: '<|vision_start|><|image_pad|><|vision_end|>In this image,'
-
- inputs = processor(text=[text_prompt], images=[image], padding=True, return_tensors="pt")
- inputs = inputs.to('cuda')
-
- # Inference: Generation of the output
- output_ids = model.generate(**inputs, max_new_tokens=128)
- generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)]
- output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
- print(output_text)
-
-
-
- # Video
- def fetch_video(ele: Dict, nframe_factor=2):
-     if isinstance(ele['video'], str):
-         def round_by_factor(number: int, factor: int) -> int:
-             return round(number / factor) * factor
-
-         video = ele["video"]
-         if video.startswith("file://"):
-             video = video[7:]
-
-         video, _, info = io.read_video(
-             video,
-             start_pts=ele.get("video_start", 0.0),
-             end_pts=ele.get("video_end", None),
-             pts_unit="sec",
-             output_format="TCHW",
-         )
-         assert not ("fps" in ele and "nframes" in ele), "Only accept either `fps` or `nframes`"
-         if "nframes" in ele:
-             nframes = round_by_factor(ele["nframes"], nframe_factor)
-         else:
-             fps = ele.get("fps", 1.0)
-             nframes = round_by_factor(video.size(0) / info["video_fps"] * fps, nframe_factor)
-         idx = torch.linspace(0, video.size(0) - 1, nframes, dtype=torch.int64)
-         return video[idx]
-
- video_info = {"type": "video", "video": "/path/to/video.mp4", "fps": 1.0}
- video = fetch_video(video_info)
- conversation = [
-     {"type": "video"},
-     {"type": "text", "text": "What happened in the video? Answer:"},
- ]
-
- # Preprocess the inputs
- text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
- # Expected output: '<|vision_start|><|video_pad|><|vision_end|>What happened in the video? Answer:'
-
- inputs = processor(text=[text_prompt], videos=[video], padding=True, return_tensors="pt")
- inputs = inputs.to('cuda')
-
- # Inference: Generation of the output
- output_ids = model.generate(**inputs, max_new_tokens=128)
- generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)]
- output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
- print(output_text)
- ```
-
- ### Batch Mixed Media Inference
-
- The model can batch inputs composed of mixed samples of various types such as images, videos, and text. Here is an example.
-
- ```python
- image1 = Image.open("/path/to/image1.jpg")
- image2 = Image.open("/path/to/image2.jpg")
- image3 = Image.open("/path/to/image3.jpg")
- image4 = Image.open("/path/to/image4.jpg")
- image5 = Image.open("/path/to/image5.jpg")
- video = fetch_video({
-     "type": "video",
-     "video": "/path/to/video.mp4",
-     "fps": 1.0
- })
-
- # Conversation for the first image
- conversation1 = [
-     {"type": "image"},
-     {"type": "text", "text": "In this image."}
- ]
-
- # Conversation with two images
- conversation2 = [
-     {"type": "image"},
-     {"type": "image"},
-     {"type": "text", "text": "What is written in the pictures?"}
- ]
-
- # Conversation with pure text
- conversation3 = "who are you?"
-
-
- # Conversation with mixed media
- conversation4 = [
-     {"type": "image"},
-     {"type": "image"},
-     {"type": "video"},
-     {"type": "text", "text": "What are the common elements in these media?"},
- ]
-
- conversations = [conversation1, conversation2, conversation3, conversation4]
- # Preparation for batch inference
- texts = [processor.apply_chat_template(msg, add_generation_prompt=True) for msg in conversations]
- inputs = processor(
-     text=texts,
-     images=[image1, image2, image3, image4, image5],
-     videos=[video],
-     padding=True,
-     return_tensors="pt",
- )
- inputs = inputs.to('cuda')
-
- # Batch Inference
- output_ids = model.generate(**inputs, max_new_tokens=128)
- generated_ids = [output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs.input_ids, output_ids)]
- output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
- print(output_text)
- ```
-
- #### Image Resolution for Performance Boost
-
- The model supports a wide range of resolution inputs. By default, it uses the native resolution for input, but higher resolutions can enhance performance at the cost of more computation. Users can set the minimum and maximum number of pixels to achieve an optimal configuration for their needs, such as a token count range of 256-1280, to balance speed and memory usage.
-
- ```python
- min_pixels = 256 * 28 * 28
- max_pixels = 1280 * 28 * 28
- processor = AutoProcessor.from_pretrained(
-     "Qwen/Qwen2-VL-7B-Base", min_pixels=min_pixels, max_pixels=max_pixels
- )
- ```
-
- #### Flash-Attention 2 to Speed Up Generation
-
- First, make sure to install the latest version of Flash Attention 2:
-
- ```bash
- pip install -U flash-attn --no-build-isolation
- ```
-
- Also, your hardware should be compatible with Flash-Attention 2. Read more about it in the official documentation of the [flash attention repository](https://github.com/Dao-AILab/flash-attention). FlashAttention-2 can only be used when a model is loaded in `torch.float16` or `torch.bfloat16`.
-
- To load and run the model using Flash Attention-2, simply add `attn_implementation="flash_attention_2"` when loading the model as follows:
-
- ```python
- import torch
- from transformers import Qwen2VLForConditionalGeneration
-
- model = Qwen2VLForConditionalGeneration.from_pretrained(
-     "Qwen/Qwen2-VL-7B-Base",
-     torch_dtype=torch.bfloat16,
-     attn_implementation="flash_attention_2",
- )
- ```
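Before opting in, it can help to confirm the GPU meets these requirements. A minimal sketch, assuming an NVIDIA GPU and the Ampere-or-newer (compute capability 8.0+) requirement stated in the flash-attention repository; `flash_attn2_likely_supported` is an illustrative helper, not part of the original snippet:

```python
import torch

def flash_attn2_likely_supported() -> bool:
    # FlashAttention-2 targets fp16/bf16 on Ampere-or-newer NVIDIA GPUs
    # (compute capability >= 8.0), per the flash-attention repository.
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability(0)
    # The snippet above loads the model in bfloat16, so also check bf16 support.
    return major >= 8 and torch.cuda.is_bf16_supported()

print(flash_attn2_likely_supported())
```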
-
- ## Limitations
-
- While Qwen2-VL is applicable to a wide range of visual tasks, it is equally important to understand its limitations. Here are some known restrictions:
-
- 1. Lack of Audio Support: The current model does **not comprehend audio information** within videos.
- 2. Data Timeliness: Our image dataset is **updated until June 2023**, and information subsequent to this date may not be covered.
- 3. Constraints in Individuals and Intellectual Property (IP): The model's capacity to recognize specific individuals or IPs is limited, and it may fail to comprehensively cover all well-known personalities or brands.
- 4. Limited Capacity for Complex Instructions: When faced with intricate multi-step instructions, the model's understanding and execution capabilities require enhancement.
- 5. Insufficient Counting Accuracy: Particularly in complex scenes, the accuracy of object counting is not high, necessitating further improvements.
- 6. Weak Spatial Reasoning Skills: Especially in 3D spaces, the model's inference of object positional relationships is inadequate, making it difficult to precisely judge the relative positions of objects.
-
- These limitations serve as ongoing directions for model optimization and improvement, and we are committed to continually enhancing the model's performance and scope of application.
-
  ## Citation

  If you find our work helpful, feel free to give us a cite.
 