---
license: cc-by-nc-4.0
pipeline_tag: text-generation
---

Proctora is a MoE model made of:

- OpenPipe/mistral-ft-optimized-1227 as the base model
- SanjiWatsuki/Kunoichi-7B as a first expert dedicated to RP tasks
- samir-fama/SamirGPT-v1 as a second expert for factual answers
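A composition like the one above maps naturally onto a mergekit-moe configuration. The sketch below is purely illustrative: the `gate_mode`, `dtype`, and `positive_prompts` values are assumptions, not the actual recipe used to build Proctora.

```yaml
# Hypothetical mergekit-moe config mirroring Proctora's layout.
# gate_mode, dtype, and the positive_prompts are illustrative guesses,
# not the settings actually used for this model.
base_model: OpenPipe/mistral-ft-optimized-1227
gate_mode: hidden          # route tokens by hidden-state similarity to the prompts
dtype: bfloat16
experts:
  - source_model: SanjiWatsuki/Kunoichi-7B
    positive_prompts:      # steer RP-flavored inputs to this expert
      - "roleplay"
      - "act as a character"
  - source_model: samir-fama/SamirGPT-v1
    positive_prompts:      # steer factual questions to this expert
      - "factual question"
      - "explain"
```

A file like this would typically be passed to the `mergekit-moe` command together with an output directory.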

I do not yet have any metrics for this model, only subjective impressions. It was made at first out of curiosity and experimentation.

My goal is to produce a model that excels at being a game master for RPG sessions. However, being dissatisfied with the existing evaluation tool-sets, I decided to create my own (still a WIP as of 01/16/24). Among my collection of small/medium models, Proctora gave me the best results when evaluating the answers produced by other LLMs, so I surprisingly settled on it and gave it a name appropriate to the task.

TL;DR: Proctora is a tool for a tool!

I doubt this model will be useful to the community; I publish it for the sake of transparency about my creative process.