---
tags:
- generated_from_trainer
base_model: austindavis/gpt2-lichess-uci-201601
model-index:
- name: gpt2-lichess-uci-2016-01_11
  results: []
widget:
- text: e2e4
  example_title: King's pawn
- text: d2d4
  example_title: Queen's pawn
---

# gpt2-lichess-uci-2016-01_11

This model is a fine-tuned version of [austindavis/gpt2-lichess-uci-201601](https://huggingface.co/austindavis/gpt2-lichess-uci-201601) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0379

## Model description

As its name and the widget examples suggest, this is a GPT-2 causal language model over chess games encoded as UCI move sequences: prompted with a game prefix such as `e2e4`, it generates a plausible continuation one token at a time. It is a further fine-tuned checkpoint of [austindavis/gpt2-lichess-uci-201601](https://huggingface.co/austindavis/gpt2-lichess-uci-201601).
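
A minimal usage sketch, assuming the checkpoint is published under the same namespace as its base model (the repository id below is an assumption, not stated on this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "austindavis/gpt2-lichess-uci-2016-01_11"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt with a game prefix in UCI notation, as in the widget examples above.
inputs = tokenizer("e2e4", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=8,                     # room for a move or two
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```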

## Intended uses & limitations

No specific intended uses are documented for this checkpoint. In practice it can be prompted with a UCI move prefix to sample continuations, as in the widget examples above. Note that a language model over move strings has no built-in notion of chess legality, so it may emit illegal or malformed moves; candidate moves should be validated before use (one possible filter is sketched below).
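
Because nothing constrains generation to legal chess, one practical guard is to check each sampled move against the current position. A hedged sketch using the third-party `python-chess` library (`pip install chess`), which is not part of this repository:

```python
import chess

def first_legal_move(board: chess.Board, candidates: list[str]) -> chess.Move | None:
    """Return the first candidate UCI string that is legal in `board`, if any."""
    for uci in candidates:
        try:
            move = chess.Move.from_uci(uci)
        except ValueError:
            continue  # not syntactically valid UCI
        if move in board.legal_moves:
            return move
    return None

board = chess.Board()
board.push_uci("e2e4")
# Candidates would normally come from model samples; these are hand-picked.
print(first_legal_move(board, ["e9e5", "e7e5"]))  # -> e7e5
```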

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001715755714441261
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
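
For reference, a sketch of how these values map onto Hugging Face `TrainingArguments`; the actual training script is not included in this card, so anything beyond the listed values (e.g. `output_dir`) is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt2-lichess-uci-2016-01_11",  # assumed; not stated on the card
    learning_rate=0.0001715755714441261,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
    # default optimizer settings, so nothing extra is needed for it here.
)
```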

### Training results

| Training Loss | Epoch | Step   | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.0634        | 1.0   | 266171 | 1.0379          |


### Framework versions

- Transformers 4.40.1
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1