Update README.md
README.md
CHANGED
@@ -9,24 +9,18 @@ Arabic language domain. This is the repository for the 13B-chat pretrained model
---
## Model Details
We have released the AceGPT family of large language models, a collection of fully fine-tuned generative text models based on Llama-2, ranging from 7B to 13B parameters. Our models come in two main categories: AceGPT and AceGPT-chat, where AceGPT-chat is an optimized version specifically designed for dialogue applications. Our models have demonstrated superior performance to all currently available open-source Arabic dialogue models on multiple benchmarks, and in our human evaluations they have shown satisfaction levels comparable to some closed-source models, such as ChatGPT, in Arabic.
## Model Developers
We are from the School of Data Science, the Chinese University of Hong Kong, Shenzhen (CUHKSZ), and the Shenzhen Research Institute of Big Data (SRIBD).
## Variations
The AceGPT family comes in two parameter sizes, 7B and 13B; each size is available as a base model and a -chat model.
## Input
Models input text only.
## Output
Models output text only.
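Since the models take text in and produce text out, a standard causal-LM generation loop applies. Below is a minimal sketch using Hugging Face transformers; the Hub ID and the plain-text prompt are assumptions on our part, and the -chat variants may expect a dedicated chat template, so check the official AceGPT repository for the exact format.

```python
# Minimal generation sketch for a text-in / text-out model.
# Assumptions: the Hub ID below and the plain-text prompt; the -chat
# variants may expect a dedicated chat template (see the official repo).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/AceGPT-13B-chat"  # assumed Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# An Arabic prompt: "What is the capital of Saudi Arabia?"
prompt = "ما هي عاصمة المملكة العربية السعودية؟"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```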
## Model Evaluation Results
Experiments are run on Arabic Vicuna-80 and Arabic AlpacaEval. Numbers are the average performance ratio relative to ChatGPT over three runs. We do not report results for raw Llama-2 models since they cannot properly generate Arabic text.
| Model                        | Arabic Vicuna-80   | Arabic AlpacaEval   |
|------------------------------|--------------------|---------------------|
| Phoenix (Chen et al., 2023a) | 71.92% ± 0.2%      | 65.62% ± 0.3%       |
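For reference, each entry can be read as the mean ± standard deviation of the per-run ratio between the evaluated model's benchmark score and ChatGPT's. A minimal sketch of that computation follows; the per-run scores are hypothetical placeholders, not real results.

```python
# Sketch of the reported metric: per run, divide the model's benchmark
# score by ChatGPT's score on the same run, then average over three runs.
# The scores below are hypothetical placeholders, not real results.
from statistics import mean, stdev

model_scores = [7.10, 7.15, 7.05]    # hypothetical per-run model scores
chatgpt_scores = [9.80, 9.85, 9.75]  # hypothetical per-run ChatGPT scores

ratios = [m / c for m, c in zip(model_scores, chatgpt_scores)]
print(f"{mean(ratios):.2%} ± {stdev(ratios):.2%}")
```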