Triangle104
committed on
Upload README.md with huggingface_hub
README.md
CHANGED
```diff
@@ -1,11 +1,11 @@
 ---
-base_model: arcee-ai/Llama-3.1-SuperNova-Lite
-datasets:
-- arcee-ai/EvolKit-20k
 language:
 - en
-library_name: transformers
 license: llama3
+library_name: transformers
+base_model: arcee-ai/Llama-3.1-SuperNova-Lite
+datasets:
+- arcee-ai/EvolKit-20k
 tags:
 - llama-cpp
 - gguf-my-repo
@@ -110,17 +110,6 @@ model-index:
 This model was converted to GGUF format from [`arcee-ai/Llama-3.1-SuperNova-Lite`](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) for more details on the model.
 
----
-Model details:
--
-Llama-3.1-SuperNova-Lite is an 8B parameter model developed by Arcee.ai, based on the Llama-3.1-8B-Instruct architecture. It is a distilled version of the larger Llama-3.1-405B-Instruct model, leveraging offline logits extracted from the 405B parameter variant. This 8B variation of Llama-3.1-SuperNova maintains high performance while offering exceptional instruction-following capabilities and domain-specific adaptability.
-
-The model was trained using a state-of-the-art distillation pipeline and an instruction dataset generated with EvolKit, ensuring accuracy and efficiency across a wide range of tasks. For more information on its training, visit blog.arcee.ai.
-
-Llama-3.1-SuperNova-Lite excels in both benchmark performance and real-world applications, providing the power of large-scale models in a more compact, efficient form ideal for organizations seeking high performance with reduced resource requirements.
-Open LLM Leaderboard Evaluation Results
-
----
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
 
```
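The card above notes that the GGUF file was produced with llama.cpp through the hosted GGUF-my-repo space. As a rough local sketch of the same conversion path (not part of this commit; the output filenames and the Q4_K_M quantization type are illustrative assumptions, and it presumes a llama.cpp checkout with its Python requirements plus git-lfs):

```sh
# Illustrative only: local equivalent of what the GGUF-my-repo space automates.
# Assumes a llama.cpp checkout (for convert_hf_to_gguf.py), the llama-quantize
# binary on PATH, and git-lfs for pulling the model weights.
git clone https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite

# Convert the Hugging Face checkpoint to a 16-bit GGUF file.
python llama.cpp/convert_hf_to_gguf.py Llama-3.1-SuperNova-Lite \
  --outfile supernova-lite-f16.gguf --outtype f16

# Optionally quantize it; Q4_K_M is shown purely as a common example.
llama-quantize supernova-lite-f16.gguf supernova-lite-q4_k_m.gguf Q4_K_M
```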
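The excerpt ends at the brew note; the rest of the standard GGUF-my-repo card (the install command and a llama-cli / llama-server invocation) is not shown here. A hedged sketch of that usage, with the repo id and .gguf filename left as placeholders because they do not appear in this excerpt:

```sh
# Install llama.cpp via Homebrew (macOS and Linux).
brew install llama.cpp

# Run inference straight from a Hugging Face repo. <user>/<gguf-repo> and the
# .gguf filename are placeholders; use the ones published in this repository.
llama-cli --hf-repo <user>/<gguf-repo> \
  --hf-file <quantized-model>.gguf \
  -p "The meaning to life and the universe is"
```

llama-server accepts the same --hf-repo/--hf-file flags if an OpenAI-compatible HTTP endpoint is preferred over the CLI.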