Sri-Vigneshwar-DJ committed on
Commit 75a0239 · verified · 1 Parent(s): 22d2fea

Update README.md

Files changed (1): README.md (+51 −81)

README.md CHANGED
@@ -1,96 +1,66 @@
 
 
  library_name: transformers
- license: apache-2.0
- pipeline_tag: image-text-to-text
- language:
- - en
- base_model:
- - meta-llama/llama-3.3-70b
- - google/siglip-so400m-patch14-384
 
 
 
- # Llama3.3 70B VLM
- Llama3.3 70B VLM can be used for inference on multimodal (image + text) tasks where the input comprises text queries along with one or more images. Text and images can be interleaved arbitrarily, enabling tasks such as image captioning, visual question answering, and storytelling grounded in visual content. The model does not support image generation.
- To fine-tune Llama3.3 70B VLM on a specific task, you can follow the fine-tuning tutorial.
- <!-- todo: add link to fine-tuning tutorial -->
- ## Technical Summary
- Llama3.3 70B VLM leverages the powerful Llama-3.3-70B language model to provide a comprehensive multimodal experience. It introduces several changes compared to previous models:
- - Image compression: We introduce more radical image compression to enable the model to infer faster and use less RAM.
- - Visual Token Encoding: Llama3.3 70B VLM uses 81 visual tokens to encode image patches of size 384×384. Larger images are divided into patches, each encoded separately, enhancing efficiency without compromising performance.
- More details about the training and architecture are available in our technical report.
- ## How to get started
- You can use transformers to load, run inference with, and fine-tune Llama3.3 70B VLM.
- ```python
- import torch
- from PIL import Image
- from transformers import AutoProcessor, AutoModelForVision2Seq
- from transformers.image_utils import load_image
- 
- DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
- 
- # Load images
- image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
- image2 = load_image("https://huggingface.co/spaces/merve/chameleon-7b/resolve/main/bee.jpg")
- 
- # Initialize processor and model
- processor = AutoProcessor.from_pretrained("meta-llama/Llama3.3-70B-VLM-Instruct")
- model = AutoModelForVision2Seq.from_pretrained(
-     "meta-llama/Llama3.3-70B-VLM-Instruct",
-     torch_dtype=torch.bfloat16,
-     _attn_implementation="flash_attention_2" if DEVICE == "cuda" else "eager",
- ).to(DEVICE)
- 
- # Create input messages
- messages = [
-     {
-         "role": "user",
-         "content": [
-             {"type": "image"},
-             {"type": "image"},
-             {"type": "text", "text": "Can you describe the two images?"}
-         ]
-     },
- ]
- 
- # Prepare inputs
- prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
- inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
- inputs = inputs.to(DEVICE)
- 
- # Generate outputs
- generated_ids = model.generate(**inputs, max_new_tokens=500)
- generated_texts = processor.batch_decode(
-     generated_ids,
-     skip_special_tokens=True,
- )
- 
- print(generated_texts[0])
- """
- Assistant: The first image shows a green statue of the Statue of Liberty standing on a stone pedestal in front of a body of water.
- The statue is holding a torch in its right hand and a tablet in its left hand. The water is calm and there are no boats or other objects visible.
- The sky is clear and there are no clouds. The second image shows a bee on a pink flower.
- The bee is black and yellow and is collecting pollen from the flower. The flower is surrounded by green leaves.
- """
- ```
- ## Our Approach
- ### Instruct SAM
- ### Model optimizations
- Precision: For better performance, load and run the model in half-precision (torch.float16 or torch.bfloat16) if your hardware supports it.
- ```python
- from transformers import AutoModelForVision2Seq
- import torch
- 
- model = AutoModelForVision2Seq.from_pretrained(
-     "meta-llama/Llama3.3-70B-VLM-Instruct",
-     torch_dtype=torch.bfloat16,
- ).to("cuda")
- ```
- You can also load Llama3.3 70B VLM with 4/8-bit quantization using bitsandbytes, torchao, or Quanto. Refer to this page for other options.
- ```python
- from transformers import AutoModelForVision2Seq, BitsAndBytesConfig
- import torch
- 
- quantization_config = BitsAndBytesConfig(load_in_8bit=True)
- model = AutoModelForVision2Seq.from_pretrained(
-     "meta-llama/Llama3.3-70B-VLM-Instruct",
-     quantization_config=quantization_config,
- )
- ```
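As a rule of thumb (an editor's sketch, not a figure from this card), the choice between half-precision and 8-/4-bit loading mostly changes the resident weight memory, which can be estimated as parameters × bits per parameter:

```python
# Back-of-the-envelope weight-memory estimate for a 70B-parameter model.
# Assumption: memory ~= n_params * bits_per_param / 8; activations, the
# KV cache, and framework overhead are ignored.
N_PARAMS = 70e9

def weight_memory_gb(bits_per_param: int) -> float:
    return N_PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("bf16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: ~{weight_memory_gb(bits):.0f} GB")
```

By this estimate, bf16 needs on the order of 140 GB for the weights alone, while 8-bit loading halves that and 4-bit halves it again, which is why the quantized loading path above matters on smaller GPUs.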
- Vision Encoder Efficiency: Adjust the image resolution by setting size={"longest_edge": N*384} when initializing the processor, where N is your desired value. The default N=4 works well, resulting in input images of size 1536×1536. For documents, N=5 might be beneficial. Decreasing N saves GPU memory and is appropriate for lower-resolution images; it is also useful if you want to fine-tune on videos.
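The patch arithmetic behind this setting can be sketched as follows (a simplified calculation based on the 81-tokens-per-384×384-patch figure above; it ignores any extra downscaled global image the processor may add):

```python
# Approximate visual-token count for a square image whose longest edge is
# resized to N * 384 and which is tiled into 384x384 patches (81 tokens each).
TOKENS_PER_PATCH = 81
PATCH_EDGE = 384

def visual_tokens(n: int) -> int:
    n_patches = n * n  # an (N*384) x (N*384) image tiles into N x N patches
    return n_patches * TOKENS_PER_PATCH

print(visual_tokens(4))  # default N=4: 1536x1536 -> 16 patches -> 1296 tokens
print(visual_tokens(2))  # N=2: 768x768 -> 4 patches -> 324 tokens
```

Halving N roughly quarters the visual-token count, which is where the memory savings for low-resolution images and video frames come from.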
 
 
+ ---
+ base_model: meta-llama/Llama-3.3-70B-Instruct
  library_name: transformers
+ license: other
+ tags:
+ - llama-cpp
+ - Llama-3.3
+ - Llama-3.3-70B
+ - Llama
+ - Llama-3.3-70B-Instruct
+ - 4Bit
+ - GGUF
+ datasets: hawky_market_research_prompts
+ ---
+ # Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit
+ This model was converted to GGUF format from [`meta-llama/Llama-3.3-70B-Instruct`](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) using llama.cpp.
+ Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) for more details on the model.
 
+ ## Use with llama.cpp
+ Install llama.cpp through brew (works on Mac and Linux), or clone and build it from source:
+ 
+ ```bash
+ brew install llama.cpp
+ # or clone the source instead:
+ # git clone https://github.com/ggerganov/llama.cpp.git
+ ```
+ Invoke the llama.cpp server or the CLI.
+ 
+ ### CLI:
+ ```bash
+ # from a local build (the leading "!" is only needed in a notebook):
+ /content/llama.cpp/llama-cli -m ./Llama-3.3-70B-4bit -n 90 --repeat_penalty 1.0 --color -i -r "User:" -f /content/llama.cpp/prompts/chat-with-bob.txt
+ 
+ # or fetch the file directly from the Hub:
+ llama-cli --hf-repo Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit --hf-file FP8.gguf -p "Create Meta Ads Templates"
+ ```
 
+ ### Server:
+ ```bash
+ llama-server --hf-repo Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit --hf-file FP8.gguf -c 2048
+ ```
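The `-c 2048` flag sets the context length, which mainly costs KV-cache memory. A rough estimate (assuming the usual Llama-70B shape of 80 layers, 8 KV heads of dimension 128, and 16-bit cache entries; these numbers are assumptions, not read from this GGUF):

```python
# Rough fp16 KV-cache size for a llama.cpp context of n_ctx tokens.
# Assumed architecture (typical Llama-70B; not taken from this model card):
N_LAYERS = 80
N_KV_HEADS = 8
HEAD_DIM = 128
BYTES_PER_VALUE = 2  # fp16

def kv_cache_bytes(n_ctx: int) -> int:
    # 2x for the K and V tensors, per layer, per cached token
    return 2 * N_LAYERS * n_ctx * N_KV_HEADS * HEAD_DIM * BYTES_PER_VALUE

print(f"{kv_cache_bytes(2048) / 2**30:.3f} GiB")
```

Under these assumptions the cache at `-c 2048` is well under 1 GiB, small next to the quantized weights, so raising the context is comparatively cheap in memory.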
 
 
 
+ Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
 
 
 
 
 
 
 
 
 
 
+ Step 1: Clone llama.cpp from GitHub.
+ ```
+ git clone https://github.com/ggerganov/llama.cpp
+ ```
+ 
+ Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, or with `make GGML_OPENBLAS=1`, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
+ ```
+ cd llama.cpp && LLAMA_CURL=1 make
+ 
+ # or, with OpenBLAS (drop the leading "!" outside a notebook):
+ make GGML_OPENBLAS=1
+ ```
 
 
 
 
 
+ Step 3: Run inference through the main binary.
+ ```
+ ./llama-cli --hf-repo Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit --hf-file FP8.gguf -p "The meaning to life and the universe is"
+ ```
+ or
+ ```
+ ./llama-server --hf-repo Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit --hf-file FP8.gguf -c 2048
+ ```