Update README.md

README.md CHANGED

@@ -19,8 +19,7 @@ language:
 As part of the Assessing Large Language Models for Document Classification project by the Municipality of Amsterdam, we fine-tune Mistral, Llama, and GEITje for document classification.
 The fine-tuning is performed using the [AmsterdamBalancedFirst200Tokens](https://huggingface.co/datasets/FemkeBakker/AmsterdamBalancedFirst200Tokens) dataset, which consists of documents truncated to the first 200 tokens.
 In our research, we evaluate the fine-tuning of these LLMs across one, two, and three epochs.
-This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) and has been fine-tuned for
-
+This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) and has been fine-tuned for two epochs.
 
 It achieves the following results on the evaluation set:
 - Loss: 0.6601
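The README describes a dataset of documents truncated to their first 200 tokens. As a rough illustration of that preprocessing step, here is a minimal sketch; the actual dataset presumably counts tokens with the model's own tokenizer, so the whitespace split below (and the `truncate_to_first_tokens` helper name) is a simplified stand-in, not the dataset's real pipeline.

```python
# Hypothetical sketch of "first 200 tokens" truncation.
# Assumption: whitespace splitting stands in for real subword tokenization.

def truncate_to_first_tokens(text: str, max_tokens: int = 200) -> str:
    """Keep at most the first `max_tokens` whitespace-separated tokens."""
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

# A 500-token document is cut down to its first 200 tokens.
long_doc = " ".join(f"w{i}" for i in range(500))
print(len(truncate_to_first_tokens(long_doc).split()))  # prints 200
```

Shorter documents pass through unchanged, since slicing past the end of the token list is a no-op.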