This model was converted to GGUF format from [`Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated`](https://huggingface.co/Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated) for more details on the model.
---

Model details:

- Small but smart
- Fine-tuned on a vast dataset of conversations
- Able to generate human-like text with high performance for its size
- Very versatile for its size and parameter count, offering capability close to that of Llama 3.1 8B Instruct

Feel free to check it out!

[This model was trained for 5 hrs on a T4 GPU with 15 GB VRAM]

Developed by: Meta AI
Fine-tuned by: Devarui379
Model type: Transformers
Language(s) (NLP): English
License: cc-by-4.0

Model Sources

Base model: meta-llama/Llama-3.2-3B-Instruct
Repository: Devarui379/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated
Demo: use LM Studio with the quantized version

Uses

Use your desired system prompt when running the model in LM Studio. The optimal chat template appears to be Jinja, but feel free to experiment.

Technical Specifications

Model Architecture and Objective

Llama 3.2

Hardware

NVIDIA Tesla T4 GPU, 15 GB VRAM

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
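A minimal sketch of the steps above. The `--hf-repo` name and the quantized `.gguf` filename below are assumptions for illustration; check this repository's Files tab for the actual repo ID and quantization filename:

```shell
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# Run inference directly from the Hugging Face Hub.
# NOTE: repo ID and --hf-file value are placeholders; substitute the
# actual GGUF repo and quantization file listed in this repository.
llama-cli --hf-repo Triangle104/VersatiLlama-Llama-3.2-3B-Instruct-Abliterated-GGUF \
  --hf-file versatillama-llama-3.2-3b-instruct-abliterated-q4_k_m.gguf \
  -p "Write a short haiku about autumn."
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` flags if you prefer an OpenAI-compatible HTTP endpoint over a one-shot CLI run.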