---
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
language:
- ja
- en
library_name: transformers
license: apache-2.0
---
# Qwen2.5-1.5B-Instruct-JaMagpie-Preview

A Japanese/English instruction-following model fine-tuned from [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
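## Usage

A minimal inference sketch with Transformers (assuming the model inherits the Qwen2.5 chat template from its base model; the prompt and generation parameters below are illustrative, not tuned):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jaeyong2/Qwen2.5-1.5B-Instruct-JaMagpie-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; torch_dtype="auto" also works
    device_map="auto",
)

messages = [{"role": "user", "content": "日本の首都はどこですか？"}]  # "What is the capital of Japan?"
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```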
## Evaluation
### llm-jp-eval script (Colab)
```
# Install llm-jp-eval from source
!git clone https://github.com/llm-jp/llm-jp-eval.git
!cd llm-jp-eval && pip install -e .
# Preprocess all evaluation datasets into ./dataset_dir
!cd llm-jp-eval && python scripts/preprocess_dataset.py --dataset-name all --output-dir ./dataset_dir
# Evaluate the fine-tuned model on the preprocessed test split
!cd llm-jp-eval && python scripts/evaluate_llm.py -cn config.yaml model.pretrained_model_name_or_path=jaeyong2/Qwen2.5-1.5B-Instruct-JaMagpie-Preview tokenizer.pretrained_model_name_or_path=jaeyong2/Qwen2.5-1.5B-Instruct-JaMagpie-Preview dataset_dir=./dataset_dir/1.4.1/evaluation/test
```
| Task category | Qwen/Qwen2.5-1.5B-Instruct | google/gemma-2-2b-jpn-it | fine-tuned model (this repo) |
|:--------------|---------------------------:|-------------------------:|-----------------------------:|
| AVG | 0.4343 | 0.4315 | 0.4540 |
| CG  | 0.0600 | 0.0000 | 0.1500 |
| EL  | 0.3952 | 0.3222 | 0.4106 |
| FA  | 0.0690 | 0.0846 | 0.0000 |
| HE  | 0.4400 | 0.4350 | 0.4300 |
| MC  | 0.6800 | 0.6000 | 0.6400 |
| MR  | 0.4700 | 0.4900 | 0.5800 |
| MT  | 0.6137 | 0.7666 | 0.7915 |
| NLI | 0.5500 | 0.5260 | 0.4440 |
| QA  | 0.2443 | 0.2813 | 0.3054 |
| RC  | 0.8208 | 0.8097 | 0.7881 |

Task categories follow llm-jp-eval: CG = code generation, EL = entity linking, FA = fundamental analysis, HE = human examination, MC = multiple-choice QA, MR = mathematical reasoning, MT = machine translation, NLI = natural language inference, QA = question answering, RC = reading comprehension.
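The AVG row is consistent with an unweighted mean of the ten category scores; a quick check in Python (values copied from the table above):

```python
# Verify that AVG equals the unweighted mean of the ten category scores.
# Order: CG, EL, FA, HE, MC, MR, MT, NLI, QA, RC (as in the table above).
scores = {
    "Qwen/Qwen2.5-1.5B-Instruct": [0.0600, 0.3952, 0.0690, 0.4400, 0.6800,
                                   0.4700, 0.6137, 0.5500, 0.2443, 0.8208],
    "google/gemma-2-2b-jpn-it":   [0.0000, 0.3222, 0.0846, 0.4350, 0.6000,
                                   0.4900, 0.7666, 0.5260, 0.2813, 0.8097],
    "fine-tuned model":           [0.1500, 0.4106, 0.0000, 0.4300, 0.6400,
                                   0.5800, 0.7915, 0.4440, 0.3054, 0.7881],
}
for name, vals in scores.items():
    print(f"{name}: AVG = {sum(vals) / len(vals):.4f}")
# Qwen/Qwen2.5-1.5B-Instruct: AVG = 0.4343
# google/gemma-2-2b-jpn-it: AVG = 0.4315
# fine-tuned model: AVG = 0.4540
```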
### License
The base model, [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct), is released under the Apache License 2.0: https://choosealicense.com/licenses/apache-2.0/
### Acknowledgement
This research was supported by the TPU Research Cloud (TRC) program.