---
base_model: meta-llama/Llama-3.3-70B-Instruct
library_name: transformers
license: other
tags:
- llama-cpp
- Llama-3.3
- Llama-3.3-70B
- Llama
- Llama-3.3-70B-Instruct
- 4Bit
- GGUF
datasets: hawky_market_research_prompts
---
# Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit
This model was converted to GGUF format from [`meta-llama/Llama-3.3-70B-Instruct`](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), via the [`unsloth/Llama-3.3-70B-Instruct-bnb-4bit`](https://huggingface.co/unsloth/Llama-3.3-70B-Instruct-bnb-4bit) checkpoint, using llama.cpp.
Refer to the [original model card](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) for more details on the model.
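For reference, a typical llama.cpp conversion and quantization workflow looks like the sketch below. The exact commands and quantization type used for this checkpoint are not documented here, so the output file names and the `Q4_K_M` type are illustrative only.
```bash
# Convert the Hugging Face checkpoint to a full-precision GGUF file.
python llama.cpp/convert_hf_to_gguf.py ./Llama-3.3-70B-Instruct \
  --outfile Llama-3.3-70B-Instruct-F16.gguf --outtype f16

# Quantize the GGUF file down to a 4-bit variant (Q4_K_M shown as an example).
./llama.cpp/llama-quantize Llama-3.3-70B-Instruct-F16.gguf Llama-3.3-70B-4bit.gguf Q4_K_M
```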
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux), or clone and build it from the [llama.cpp GitHub repository](https://github.com/ggerganov/llama.cpp):
```bash
brew install llama.cpp
# or clone the repository and build from source:
git clone https://github.com/ggerganov/llama.cpp.git
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
# Run interactively with a local GGUF file:
./llama-cli -m ./Llama-3.3-70B-4bit -n 90 --repeat-penalty 1.0 --color -i -r "User:" -f ./prompts/chat-with-bob.txt

# Or pull the model directly from the Hugging Face Hub:
llama-cli --hf-repo Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit --hf-file FP8.gguf -p "Create Meta Ads Templates"
```
### Server:
```bash
llama-server --hf-repo Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit --hf-file FP8.gguf -c 2048
```
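Once the server is running, you can query its OpenAI-compatible chat endpoint. The sketch below assumes the default host and port (`127.0.0.1:8080`); adjust them if you started the server with `--host`/`--port`.
```bash
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Create Meta Ads Templates"}
        ],
        "max_tokens": 128
      }'
```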
Note: You can also use this checkpoint directly by following the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag or `GGML_OPENBLAS=1`, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
# or build with OpenBLAS support:
make GGML_OPENBLAS=1
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit --hf-file FP8.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Sri-Vigneshwar-DJ/Llama-3.3-70B-4bit --hf-file FP8.gguf -c 2048
```
Step 4: Run the GGUF file with Ollama.
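A minimal sketch for this step, assuming the GGUF file has already been downloaded locally (the `FP8.gguf` file name follows the examples above, and the model tag `llama-3.3-70b-4bit` is just an illustrative choice):
```bash
# Create a Modelfile that points Ollama at the locally downloaded GGUF weights.
cat > Modelfile <<'EOF'
FROM ./FP8.gguf
EOF

# Register the model with Ollama, then run it.
ollama create llama-3.3-70b-4bit -f Modelfile
ollama run llama-3.3-70b-4bit "Create Meta Ads Templates"
```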