---
language:
- ko
- en
license: mit
---

# Model Card for free-evo-qwen72b-v0.8

## Developed by: [Freewheelin](https://freewheelin-recruit.oopy.io/) AI Technical Team

## 1st place: 4 May 2024 - avg. 81.28 on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

However, the entry was later removed from the leaderboard; perhaps our explanation of the method was not sufficient.

## Method

- We were inspired by the [Evolutionary Model Merge](https://sakana.ai/evolutionary-model-merge/) project from Sakana AI.

## Process

You need two models with the same architecture.

- Choose one model and fine-tune it to create a gap between the original model and the fine-tuned one. It does not matter whether the evaluation score goes up or down.

- Merge the two models (a minimal merge sketch follows this list).

- Evaluate the merged model.

- If you need to raise the score on a specific part of the evaluation, fine-tune the merged model for that part. (It is unlikely to work exactly as you expect, but you can try.)

- Merge the models again.

- Evaluate again.

- Keep going until the average evaluation score is higher than that of the original model.
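
To make the merge step concrete, here is a minimal sketch of one possible approach: a plain linear interpolation of the two checkpoints' weights. The merge recipe actually used for free-evo-qwen72b-v0.8 is not specified here, and the model paths and mixing ratio `ALPHA` below are placeholders.

```python
# Minimal sketch of a linear weight-space merge between two checkpoints that
# share the same architecture. Paths and the mixing ratio are placeholders;
# the actual merge recipe used for this model is not specified here.
import torch
from transformers import AutoModelForCausalLM

BASE = "path/to/base-model"         # original model (placeholder)
TUNED = "path/to/fine-tuned-model"  # fine-tuned variant (placeholder)
ALPHA = 0.5                         # interpolation ratio between the two

base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)
tuned = AutoModelForCausalLM.from_pretrained(TUNED, torch_dtype=torch.bfloat16)

# Interpolate every parameter tensor: (1 - ALPHA) * base + ALPHA * tuned
tuned_sd = tuned.state_dict()
merged_sd = {
    name: (1.0 - ALPHA) * param + ALPHA * tuned_sd[name]
    for name, param in base.state_dict().items()
}

base.load_state_dict(merged_sd)
base.save_pretrained("merged-model")
```

For a 72B model this naive in-memory merge requires a very large amount of RAM; in practice you would merge shard by shard or use a dedicated merging tool.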

That's it. Simple.

You can create a framework to automate this process.
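
As an illustration only, such a loop could be structured like the sketch below. The `merge_fn`, `eval_fn`, and `tune_fn` callables are placeholders for whatever merging code, benchmark harness, and fine-tuning step you use; this is not the authors' actual pipeline.

```python
# Minimal sketch of an automated merge-evaluate loop. All callables are
# user-supplied placeholders (e.g. the merge sketch above plus a benchmark
# harness); the real framework behind this model is not described here.
from typing import Callable, TypeVar

Model = TypeVar("Model")

def evolve(
    base: Model,
    candidate: Model,
    merge_fn: Callable[[Model, Model], Model],  # combines two checkpoints
    eval_fn: Callable[[Model], float],          # returns an average benchmark score
    tune_fn: Callable[[Model], Model],          # targeted fine-tuning step
    max_rounds: int = 10,
) -> Model:
    original_score = eval_fn(base)
    best, best_score = base, original_score
    for _ in range(max_rounds):
        merged = merge_fn(best, candidate)
        score = eval_fn(merged)
        if score > best_score:              # keep the merge only if it improves the average
            best, best_score = merged, score
        if best_score > original_score:     # stop once the original average is beaten
            break
        candidate = tune_fn(merged)         # optionally fine-tune a weak area first
    return best
```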

## Base Architecture

- QWEN2

## Base Models

- several QWEN2-based models