Commit a467ebd (parent aed167b) by laurabernardy: Update README.md
- type: "perplexity" # Required. Example: wer. Use metric id from https://hf.co/metrics
  value: "46.69" # Required. Example: 20.90
---

## LuxGPT-2

GPT-2 model for text generation in Luxembourgish, trained on 636.8 MB of text data consisting of RTL.lu news articles, comments, parliament speeches, the Luxembourgish Wikipedia, Newscrawl, Webcrawl, and subtitles.

The training took place on a 32 GB Nvidia Tesla V100

- with an initial learning rate of 5e-5
- with batch size 4
- for 109 hours
- for 30 epochs
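The hyperparameters above could be written down as a Hugging Face `TrainingArguments` configuration. This is only a sketch: the README does not say which training script was used, and everything except the learning rate, batch size, and epoch count is an illustrative assumption.

```python
from transformers import TrainingArguments

# Sketch of a training configuration matching the stated hyperparameters.
# Only learning_rate, per_device_train_batch_size, and num_train_epochs
# come from the README; output_dir and all other settings are assumptions.
training_args = TrainingArguments(
    output_dir="./luxgpt2-checkpoints",  # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    num_train_epochs=30,
)
```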
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("laurabernardy/LuxGPT2", use_auth_token=True)
model = AutoModelForCausalLM.from_pretrained("laurabernardy/LuxGPT2", use_auth_token=True)
```
## Limitations and Biases

See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.