---
tags:
- generated_from_trainer
widget:
- text: e2e4
example_title: King's pawn
- text: d2d4
example_title: Queen's pawn
model-index:
- name: austindavis/gpt2-pretrained-lichess-uci
results: []
---
# gpt2-pretrained-lichess-uci
This model is a GPT-2 model pretrained on the Lichess UCI dataset from February 2013, with chess moves encoded in UCI notation.
It achieves the following results on the evaluation set:
- Loss: 1.3084 (a token-level perplexity of about exp(1.3084) ≈ 3.70, assuming the standard cross-entropy objective)
## Model description
A GPT-2 language model over chess games encoded as sequences of UCI moves (e.g. `e2e4`, `d2d4`): given the moves played so far, it predicts a continuation of the game.
## Intended uses & limitations
The model can be prompted with a partial game in UCI notation to generate candidate continuations, e.g. for move-prediction experiments or as a research baseline. It is a pure language model with no built-in rules engine, so generated moves are not guaranteed to be legal and should be validated against the actual board position, as in the sketch below.
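A minimal usage sketch with the `transformers` text-generation pipeline; the prompt format follows the widget examples above, and the sampling settings are illustrative rather than taken from the original training setup:
```python
# Minimal sketch: load the checkpoint and continue a game from an opening
# move in UCI notation. max_new_tokens is an arbitrary illustrative choice.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="austindavis/gpt2-pretrained-lichess-uci",
)

# "e2e4" is the King's pawn opening; the model continues with further UCI
# moves. Check sampled moves for legality before playing them.
print(generator("e2e4", max_new_tokens=16)[0]["generated_text"])
```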
## Training and evaluation data
Trained on Lichess games from February 2013, with each game serialized as a sequence of moves in UCI notation.
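A hedged sketch of the kind of preprocessing this implies; the exact pipeline is not documented in the card, so the example game and the space delimiter below are assumptions:
```python
# Illustrative only: flatten one game's moves into a single training string.
# Whether the original preprocessing used spaces, special tokens, or game
# separators is an assumption, not documented in the card.
moves = ["e2e4", "e7e5", "g1f3", "b8c6", "f1b5"]  # Ruy Lopez opening, UCI notation
training_text = " ".join(moves)
print(training_text)  # "e2e4 e7e5 g1f3 b8c6 f1b5"
```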
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
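For reference, these hyperparameters correspond roughly to the following `TrainingArguments`; this is a reconstruction from the list above, not the original training script, and `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above. The Adam betas and
# epsilon match the transformers defaults, so they need no explicit args;
# train_batch_size is interpreted as the per-device batch size.
training_args = TrainingArguments(
    output_dir="gpt2-pretrained-lichess-uci",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
)
```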
### Framework versions
- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1