This model was converted to GGUF format from [`P0x0/Epos-8b`](https://huggingface.co/P0x0/Epos-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/P0x0/Epos-8b) for more details on the model.
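
For context, the conversion performed by the GGUF-my-repo space can also be reproduced locally with llama.cpp's conversion script. The sketch below is only a rough outline of that workflow; the script name comes from the llama.cpp repository, while the local directory and output filename are placeholders:

```bash
# Rough local equivalent of what the GGUF-my-repo space automates.
# Directory and output filenames are placeholders.
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Fetch the original weights, then convert them to a GGUF file.
huggingface-cli download P0x0/Epos-8b --local-dir Epos-8b
python llama.cpp/convert_hf_to_gguf.py Epos-8b --outfile Epos-8b-F16.gguf --outtype f16
```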

---

Model details:

Epos-8B is a fine-tuned version of the base model Llama-3.1-8B from Meta, optimized for storytelling, dialogue generation, and creative writing. The model specializes in generating rich narratives, immersive prose, and dynamic character interactions, making it ideal for creative tasks.

## Model Details

### Model Description

Epos-8B is an 8 billion parameter language model fine-tuned for storytelling and narrative tasks. Inspired by the grandeur of epic tales, it is designed to produce high-quality, engaging content that evokes the depth and imagination of ancient myths and modern storytelling traditions.

- Developed by: P0x0
- Funded by: P0x0
- Shared by: P0x0
- Model type: Transformer-based Language Model
- Language(s) (NLP): Primarily English
- License: Apache 2.0
- Finetuned from model: meta-llama/Llama-3.1-8B

### Model Sources

- Repository: Epos-8B on Hugging Face
- GGUF Repository: Epos-8B-GGUF (TO BE ADDED)

## Uses

### Direct Use

Epos-8B is ideal for:

- Storytelling: Generate detailed, immersive, and engaging narratives.
- Dialogue Creation: Create realistic and dynamic character interactions for stories or games.

## How to Get Started with the Model

To run the quantized version of the model, you can use KoboldCPP, which allows you to run quantized GGUF models locally.

Steps:

1. Download KoboldCPP.
2. Follow the setup instructions provided in the repository.
3. Download the GGUF variant of Epos-8B from Epos-8B-GGUF.
4. Load the model in KoboldCPP and start generating!
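
As a rough illustration of the last step, KoboldCPP can also be started from the command line and pointed at the downloaded GGUF file. The flags below are commonly used KoboldCPP options, but the filename, context size, and port are placeholders to adapt to your setup:

```bash
# Launch KoboldCPP with a downloaded Epos-8B GGUF file (filename is a placeholder).
python koboldcpp.py --model ./epos-8b-q4_k_m.gguf --contextsize 8192 --port 5001
```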

Alternatively, integrate the model directly into your code with the following snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the full-precision model from the original repository
tokenizer = AutoTokenizer.from_pretrained("P0x0/Epos-8B")
model = AutoModelForCausalLM.from_pretrained("P0x0/Epos-8B")

# Encode a story prompt and generate a continuation
input_text = "Once upon a time in a distant land..."
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)  # cap the continuation length

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
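
For storytelling, sampled decoding usually reads better than the greedy default used above. Here is a minimal variation on the same snippet; the sampling values are illustrative starting points, not settings recommended by the model author:

```python
# Sampled generation tends to produce livelier prose than greedy decoding.
# Temperature and top_p are illustrative, not tuned recommendations.
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```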

---

  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)