---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mt5-small-Context-Based-Chat-Summary-Plus
  results: []
---

# mt5-small-Context-Based-Chat-Summary-Plus

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7287
- ROUGE-1: 52.033
- ROUGE-2: 28.5069
- ROUGE-L: 47.9951
- ROUGE-Lsum: 47.994

## Model description

As the model name and the `summarization` tag indicate, this checkpoint fine-tunes mT5-small, a multilingual text-to-text transformer, for context-based chat summarization: given a chat-style conversation, it generates a short abstractive summary. No further details about the fine-tuning setup were recorded.

## Intended uses & limitations

The model is intended for abstractive summarization of short chat-style conversations. Because the fine-tuning data is undocumented, its behavior on other domains, other languages, and long inputs is unverified, and summaries should be reviewed before downstream use.
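
A minimal usage sketch with the 🤗 Transformers `pipeline` API. The repo id below is a placeholder for wherever this checkpoint is hosted, and the example dialogue and generation lengths are illustrative assumptions, not values from this card:

```python
from transformers import pipeline

# Placeholder repo id: substitute the actual Hub path hosting this checkpoint.
summarizer = pipeline(
    "summarization",
    model="mt5-small-Context-Based-Chat-Summary-Plus",
)

# Illustrative chat transcript; the expected input format is not documented.
chat = (
    "Alice: Are we still on for lunch tomorrow?\n"
    "Bob: Yes, 12:30 at the usual place.\n"
    "Alice: Great, see you then!"
)

print(summarizer(chat, max_length=64, min_length=5)[0]["summary_text"])
```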

## Training and evaluation data

The datasets have not been documented. From the training log below, 1384 optimizer steps per epoch at a batch size of 64 implies a training split of roughly 88,500 examples (assuming a single device and no gradient accumulation).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 5.6e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 6
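
A hedged sketch of how these settings map onto `Seq2SeqTrainingArguments`; the `output_dir`, evaluation strategy, and `predict_with_generate` flag are assumptions inferred from the per-epoch results table, not recorded values:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-Context-Based-Chat-Summary-Plus",  # assumed
    learning_rate=5.6e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",         # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="linear",
    num_train_epochs=6,
    eval_strategy="epoch",       # assumed: the results table reports one eval per epoch
    predict_with_generate=True,  # assumed: needed to compute ROUGE during evaluation
)
```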

### Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|
| 3.9223        | 1.0   | 1384 | 2.0230          | 48.3053 | 25.5    | 44.5689 | 44.5717   |
| 2.4615        | 2.0   | 2768 | 1.8415          | 50.6518 | 27.4135 | 46.7611 | 46.7466   |
| 2.2896        | 3.0   | 4152 | 1.7868          | 51.4143 | 27.9301 | 47.4151 | 47.4095   |
| 2.1912        | 5.0   | 6920 | 1.7372          | 51.912  | 28.3549 | 47.8763 | 47.8849   |
| 2.1537        | 6.0   | 8304 | 1.7287          | 52.033  | 28.5069 | 47.9951 | 47.994    |
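
The card does not record how the ROUGE scores were computed; a plausible minimal sketch using the 🤗 `evaluate` library (an assumption, with illustrative strings rather than real model outputs):

```python
import evaluate  # requires the `rouge_score` package

rouge = evaluate.load("rouge")

# Illustrative prediction/reference pair, not drawn from the actual eval set.
predictions = ["Bob confirms lunch with Alice at 12:30 tomorrow."]
references = ["Alice and Bob will meet for lunch at 12:30 tomorrow."]

scores = rouge.compute(predictions=predictions, references=references)
# Scale to 0-100 to match the table above.
print({k: round(v * 100, 4) for k, v in scores.items()})
```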


### Framework versions

- Transformers 4.47.1
- PyTorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0