---
library_name: transformers
pipeline_tag: image-text-to-text
license: apache-2.0
base_model:
- mistral-community/pixtral-12b
---

# Disclaimer and Requirements

This model is a clone of [**mistral-community/pixtral-12b**](https://huggingface.co/mistral-community/pixtral-12b) compressed using ZipNN. Losslessly compressed to 67% of its original size, it saves ~9GB of storage and potentially ~113TB of data transfer **monthly**.

### Requirement

To use this model, ZipNN must be installed:
```bash
pip install zipnn
```
### Use This Model
```python
# Load the compressed model directly from the Hub
from transformers import AutoProcessor, AutoModelForPreTraining
from zipnn import zipnn_hf

# Patch transformers so ZipNN-compressed weights are decompressed transparently on load
zipnn_hf()

processor = AutoProcessor.from_pretrained("royleibov/pixtral-12b-ZipNN-Compressed")
model = AutoModelForPreTraining.from_pretrained("royleibov/pixtral-12b-ZipNN-Compressed")
```
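
On a GPU machine you will likely want half precision and explicit device placement. Here is a minimal sketch, assuming PyTorch with a single CUDA device that has enough memory for the 12B weights in bfloat16:

```python
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
from zipnn import zipnn_hf

zipnn_hf()  # must run before from_pretrained so the compressed weights are handled

model_id = "royleibov/pixtral-12b-ZipNN-Compressed"
# Assumption: adjust torch_dtype/device_map for your hardware.
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="cuda"
)
processor = AutoProcessor.from_pretrained(model_id)
```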
### ZipNN
ZipNN also lets you seamlessly save local disk space by compressing the model in your cache after it is downloaded.

To compress the cached model, run (the `zipnn_compress_path.py` script ships with the [ZipNN repository](https://github.com/zipnn/zipnn)):
```bash
python zipnn_compress_path.py safetensors --model royleibov/pixtral-12b-ZipNN-Compressed --hf_cache
```

The model will be decompressed automatically and safely as long as `zipnn_hf()` is called at the top of your script, as in the [example above](#use-this-model).

To decompress manually, run:
```bash
python zipnn_decompress_path.py --model royleibov/pixtral-12b-ZipNN-Compressed --hf_cache
```

# Model Card for Pixtral-12B
Transformers-compatible Pixtral checkpoints. Make sure to install `transformers` from source, or wait for the v4.45 release!

```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration
from zipnn import zipnn_hf

zipnn_hf()

model_id = "royleibov/pixtral-12b-ZipNN-Compressed"
model = LlavaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

IMG_URLS = [
    "https://picsum.photos/id/237/400/300",
    "https://picsum.photos/id/231/200/300",
    "https://picsum.photos/id/27/500/500",
    "https://picsum.photos/id/17/150/600",
]
PROMPT = "<s>[INST]Describe the images.\n[IMG][IMG][IMG][IMG][/INST]"

# Keep inputs on the same device as the model (pass device_map="cuda" above to run on GPU)
inputs = processor(text=PROMPT, images=IMG_URLS, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=500)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
```

You should get an output similar to the following:
```
"""
Describe the images.
Sure, let's break down each image description:

1. **Image 1:**
   - **Description:** A black dog with a glossy coat is sitting on a wooden floor. The dog has a focused expression and is looking directly at the camera.
   - **Details:** The wooden floor has a rustic appearance with visible wood grain patterns. The dog's eyes are a striking color, possibly brown or amber, which contrasts with its black fur.

2. **Image 2:**
   - **Description:** A scenic view of a mountainous landscape with a winding road cutting through it. The road is surrounded by lush green vegetation and leads to a distant valley.
   - **Details:** The mountains are rugged with steep slopes, and the sky is clear, indicating good weather. The winding road adds a sense of depth and perspective to the image.

3. **Image 3:**
   - **Description:** A beach scene with waves crashing against the shore. There are several people in the water and on the beach, enjoying the waves and the sunset.
   - **Details:** The waves are powerful, creating a dynamic and lively atmosphere. The sky is painted with hues of orange and pink from the setting sun, adding a warm glow to the scene.

4. **Image 4:**
   - **Description:** A garden path leading to a large tree with a bench underneath it. The path is bordered by well-maintained grass and flowers.
   - **Details:** The path is made of small stones or gravel, and the tree provides a shaded area with the bench invitingly placed beneath it. The surrounding area is lush and green, suggesting a well-kept garden.

Each image captures a different scene, from a close-up of a dog to expansive natural landscapes, showcasing various elements of nature and human interaction with it.
"""
```

You can also use a chat template to format your chat history for Pixtral. Make sure that the `images` argument to the `processor` contains the images in the order
that they appear in the chat, so that the model understands where each image is supposed to go.

Here's an example with text and multiple images interleaved in the same message:

```python
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration
from zipnn import zipnn_hf

zipnn_hf()

model_id = "royleibov/pixtral-12b-ZipNN-Compressed"
model = LlavaForConditionalGeneration.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

url_dog = "https://picsum.photos/id/237/200/300"
url_mountain = "https://picsum.photos/seed/picsum/200/300"

chat = [
    {
      "role": "user", "content": [
        {"type": "text", "content": "Can this animal"}, 
        {"type": "image"}, 
        {"type": "text", "content": "live here?"}, 
        {"type": "image"}
      ]
    }
]

prompt = processor.apply_chat_template(chat)
inputs = processor(text=prompt, images=[url_dog, url_mountain], return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=500)
output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
print(output)
```

You should get something like this:

```
Can this animallive here?Certainly! Here are some details about the images you provided:

### First Image
- **Description**: The image shows a black dog lying on a wooden surface. The dog has a curious expression with its head tilted slightly to one side.
- **Details**: The dog appears to be a young puppy with soft, shiny fur. Its eyes are wide and alert, and it has a playful demeanor.
- **Context**: This image could be used to illustrate a pet-friendly environment or to showcase the dog's personality.

### Second Image
- **Description**: The image depicts a serene landscape with a snow-covered hill in the foreground. The sky is painted with soft hues of pink, orange, and purple, indicating a sunrise or sunset.
- **Details**: The hill is covered in a blanket of pristine white snow, and the horizon meets the sky in a gentle curve. The scene is calm and peaceful.
- **Context**: This image could be used to represent tranquility, natural beauty, or a winter wonderland.

### Combined Context
If you're asking whether the dog can "live here," referring to the snowy landscape, it would depend on the breed and its tolerance to cold weather. Some breeds, like Huskies or Saint Bernards, are well-adapted to cold environments, while others might struggle. The dog in the first image appears to be a breed that might prefer warmer climates.

Would you like more information on any specific aspect?
```

While the spacing in the echoed input may look disrupted, this is only because special tokens are skipped for display; "Can this animal" and "live here?" are in fact correctly separated by image tokens. Try decoding with special tokens included to see exactly what the model sees!
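
For example, a minimal sketch reusing `generate_ids` and `processor` from the example above:

```python
# Decode without skipping special tokens to inspect the raw sequence,
# including the [IMG] placeholders that separate the text spans.
raw_output = processor.batch_decode(generate_ids, skip_special_tokens=False)[0]
print(raw_output)
```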