---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
license: cc-by-sa-4.0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
language:
- en
pretty_name: Lecture Gratuits

configs:
  - config_name: default
    data_files:
      - split: final
        path: data/*
---

# Dataset Card for LectureGratuits

![](Recursalberg.png "Clara makes her return. See recursal/Recursalberg for her description!")

*Waifu to catch your attention.*

## Dataset Details

### Dataset Description

*LectureGratuits* is a cleaned dataset of [*Ebooks Gratuits*](https://www.ebooksgratuits.com/) books. We downloaded all publicly available ebooks at the time and processed them.  
After filtering, the dataset totals **~265.46M** tokens (llama-2-7b-chat tokenizer) / **~253.51M** tokens (RWKV tokenizer), primarily in English.  

- **Curated by:** Darok
- **Funded by:** Recursal.ai
- **Shared by:** KaraKaraWitch
- **Language(s) (NLP):** English
- **License:** Public domain

### Dataset Sources

- **Source Data:** [ebooksgratuits.com](https://www.ebooksgratuits.com)

### Processing

KaraKaraWitch does not have the exact processing details. We have postulated the following workflow (a rough sketch follows this list):

0. Find the highest book ID.
1. Enumerate and download all the epub files: `https://www.ebooksgratuits.com/newsendbook.php?id=<ID>&format=epub`
2. Put them in a folder called `books`.
3. Extract the content of each epub to a JSON file in the `output` folder. (See the filtering steps in `extract-text.py`.)
4. Combine the JSON files into a single file.
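
As an illustration only, a minimal sketch of steps 0–2 might look like the snippet below. The ID upper bound, output directory, and rate limiting are assumptions; the actual script used for this dataset was not published.

```python
# Hypothetical sketch of the download step (steps 0-2 above).
# MAX_ID and the sleep interval are assumptions, not values from the original pipeline.
import time
from pathlib import Path

import requests

MAX_ID = 4000  # assumed upper bound; replace with the actual highest book ID
OUT_DIR = Path("books")
OUT_DIR.mkdir(exist_ok=True)

for book_id in range(1, MAX_ID + 1):
    url = f"https://www.ebooksgratuits.com/newsendbook.php?id={book_id}&format=epub"
    resp = requests.get(url, timeout=30)
    # Skip IDs that do not resolve to an epub file.
    if resp.status_code != 200 or not resp.content:
        continue
    (OUT_DIR / f"{book_id}.epub").write_bytes(resp.content)
    time.sleep(1)  # be polite to the server
```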

### Data Keys

```
text (str): The book's text. Converted to markdown.
meta (dict): A dictionary of metadata with the following keys:
  - title
  - author
  - publisher
```
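
For example, the keys above can be inspected after loading the dataset with the Hugging Face `datasets` library. The repository path below is an assumption based on this card's name; the `final` split name comes from the YAML config above.

```python
# Minimal usage sketch, assuming the `datasets` library is installed
# and the dataset lives at the assumed repository path.
from datasets import load_dataset

ds = load_dataset("recursal/LectureGratuits", split="final")

sample = ds[0]
print(sample["meta"]["title"], "-", sample["meta"]["author"])
print(sample["text"][:500])  # first 500 characters of the markdown-converted text
```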

### Dataset Curators

This dataset was mainly Darok's work. I (KaraKaraWitch) only assisted them with questions and the writing of the dataset card.

### Licensing Information

The books themselves are in the public domain. The post-processed data produced as part of Recursal's work is licensed under CC-BY-SA.

Recursal Waifus (the banner image) are licensed under CC-BY-SA. 
They do not represent the related websites in any official capacity unless otherwise stated or announced by the website. 
You may use them as a banner image. However, you must always link back to the dataset.

### Citation Information

```
@ONLINE{lecturegratuits,
  title         = {LectureGratuits},
  author        = {Darok and KaraKaraWitch and recursal.ai},
  year          = {2024},
  howpublished  = {\url{https://huggingface.co/datasets/recursal/Recursalberg}},
}
```