|
--- |
|
license: apache-2.0 |
|
widget: |
|
- text: My name is El Microondas the Wise, and |
|
example_title: El Microondas |
|
- text: Kennesaw State University is a public |
|
example_title: Kennesaw State University |
|
- text: Bungie Studios is an American video game developer. They are most famous for |
|
developing the award winning Halo series of video games. They also made Destiny. |
|
The studio was founded |
|
example_title: Bungie |
|
- text: The Mona Lisa is a world-renowned painting created by |
|
example_title: Mona Lisa |
|
- text: The Harry Potter series, written by J.K. Rowling, begins with the book titled |
|
example_title: Harry Potter Series |
|
- text: 'Question: I have cities, but no houses. I have mountains, but no trees. I |
|
have water, but no fish. What am I? |
|
|
|
Answer:' |
|
example_title: Riddle |
|
- text: The process of photosynthesis involves the conversion of |
|
example_title: Photosynthesis |
|
- text: Jane went to the store to buy some groceries. She picked up apples, oranges, |
|
and a loaf of bread. When she got home, she realized she forgot |
|
example_title: Story Continuation |
|
- text: 'Problem 2: If a train leaves Station A at 9:00 AM and travels at 60 mph, |
|
and another train leaves Station B at 10:00 AM and travels at 80 mph, when will |
|
they meet if the distance between the stations is 300 miles? |
|
|
|
To determine' |
|
example_title: Math Problem |
|
- text: In the context of computer programming, an algorithm is |
|
example_title: Algorithm Definition |
|
model-index: |
|
- name: Mixsmol-4x400M-v0.1-epoch1 |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: AI2 Reasoning Challenge (25-Shot) |
|
type: ai2_arc |
|
config: ARC-Challenge |
|
split: test |
|
args: |
|
num_few_shot: 25 |
|
metrics: |
|
- type: acc_norm |
|
value: 22.87 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: HellaSwag (10-Shot) |
|
type: hellaswag |
|
split: validation |
|
args: |
|
num_few_shot: 10 |
|
metrics: |
|
- type: acc_norm |
|
value: 30.57 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU (5-Shot) |
|
type: cais/mmlu |
|
config: all |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 25.28 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: TruthfulQA (0-shot) |
|
type: truthful_qa |
|
config: multiple_choice |
|
split: validation |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: mc2 |
|
value: 39.03 |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: Winogrande (5-shot) |
|
type: winogrande |
|
config: winogrande_xl |
|
split: validation |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 52.8 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GSM8k (5-shot) |
|
type: gsm8k |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 0.15 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=vilm/Mixsmol-4x400M-v0.1-epoch1 |
|
name: Open LLM Leaderboard |
|
--- |
|
# Mixsmol-4x400M-v0.1 by Ontocord |
|
This is the first checkpoint (Epoch 1) of Mixsmol-4x400M-v0.1.
|
Note that this is an experiment in data mixing. Therefore, we trained the model on only 50B tokens (95% English and 5% Vietnamese) to test the following:

- Reasoning capabilities through pretraining on high-quality synthetic textbook data
- Cross-lingual understanding through machine translation and multilingual, multi-task pretraining
|
|
|
Once this run verifies our hypotheses, we will schedule a second run with more data and compute so the model can reach its full capability.
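The checkpoint can be loaded like any causal language model in 🤗 Transformers. Below is a minimal generation sketch: the checkpoint id comes from this card, while the device placement and sampling settings are illustrative assumptions, not recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vilm/Mixsmol-4x400M-v0.1-epoch1"  # checkpoint id from this card
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to(device)

# One of the widget prompts from this card; sampling settings are illustrative.
prompt = "Kennesaw State University is a public"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```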
|
|
|
## Data |
|
- Synthetic Textbooks: 8M samples |
|
- RefinedWeb: 1M samples |
|
- RedPajama-v2: 500K samples |
|
- MathPile: Everything |
|
- The Pile: MiniPile subset
|
- GoodWiki |
|
- The Stack Smol XL |
|
- The Vault: train_small split |
|
- Instruction Pretraining: 250K samples
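The exact mixing pipeline is not published. As a hypothetical illustration of how a weighted mixture like the one above (including the 95% English / 5% Vietnamese split) can be built, here is a sketch using weighted interleaving from the Hugging Face `datasets` library, with Wikipedia dumps standing in for the actual corpora:

```python
# Hypothetical sketch of a 95% English / 5% Vietnamese data mix; the corpora
# and probabilities are illustrative, not the card's actual pipeline.
from datasets import load_dataset, interleave_datasets

english = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)
vietnamese = load_dataset("wikimedia/wikipedia", "20231101.vi", split="train", streaming=True)

# Draw ~95% of examples from the English stream and ~5% from the Vietnamese one.
mixture = interleave_datasets([english, vietnamese], probabilities=[0.95, 0.05], seed=42)

for example in mixture.take(3):
    print(example["title"], "->", example["text"][:80])
```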
|
|
|
## Evaluation

| Tasks             | Version | Filter     | n-shot | Metric      |  Value |   | Stderr |
|-------------------|---------|------------|-------:|-------------|-------:|---|-------:|
| arc_challenge     | Yaml    | none       |     25 | acc         | 0.1937 | ± | 0.0115 |
|                   |         | none       |     25 | acc_norm    | 0.2329 | ± | 0.0124 |
| hellaswag         | Yaml    | none       |     10 | acc         | 0.2856 | ± | 0.0045 |
|                   |         | none       |     10 | acc_norm    | 0.3090 | ± | 0.0046 |
| mmlu              | N/A     | none       |      0 | acc         | 0.2536 | ± | 0.0483 |
| - humanities      | N/A     | none       |      5 | acc         | 0.2408 | ± | 0.0341 |
| - other           | N/A     | none       |      5 | acc         | 0.2475 | ± | 0.0443 |
| - social_sciences | N/A     | none       |      5 | acc         | 0.2567 | ± | 0.0456 |
| - stem            | N/A     | none       |      5 | acc         | 0.2756 | ± | 0.0653 |
| truthfulqa_mc2    | Yaml    | none       |      0 | acc         | 0.3909 | ± | 0.0148 |
| winogrande        | Yaml    | none       |      5 | acc         | 0.5107 | ± | 0.0140 |
| gsm8k             | Yaml    | get-answer |      5 | exact_match | 0.0000 | ± | 0.0000 |
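The table follows the output format of EleutherAI's lm-evaluation-harness. As a sketch, a single row can be reproduced with the harness's Python API (v0.4-style; the exact harness version used for this card is not stated, so treat the call below as an assumption):

```python
# Hypothetical reproduction sketch using lm-evaluation-harness (v0.4+ API);
# the task name and few-shot count match the arc_challenge row above.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=vilm/Mixsmol-4x400M-v0.1-epoch1",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])  # expect acc/acc_norm near the table values
```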
|
|
|
## Contribution |
|
This work is a shared contribution between **Ontocord, BEE-spoke-data, and VILM**.
|
|
|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_vilm__Mixsmol-4x400M-v0.1-epoch1).
|
|
|
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 28.45 |
| AI2 Reasoning Challenge (25-Shot) | 22.87 |
| HellaSwag (10-Shot)               | 30.57 |
| MMLU (5-Shot)                     | 25.28 |
| TruthfulQA (0-shot)               | 39.03 |
| Winogrande (5-shot)               | 52.80 |
| GSM8k (5-shot)                    |  0.15 |
|
|
|
|