TristanThrush committed
Commit 273f456
1 Parent(s): 5f2fd21

update readme

Files changed (1)
  1. README.md +1 -16
README.md CHANGED
@@ -682,26 +682,11 @@ Then, you can load any subset of Wikipedia per language and per date this way:
  ```python
  from datasets import load_dataset
 
- load_dataset("wikipedia", language="sw", date="20220120")
+ load_dataset("Tristan/wikipedia", language="sw", date="20220120")
  ```
 
  You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).
 
- Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:
- ```python
- from datasets import load_dataset
-
- load_dataset("wikipedia", "20220301.en")
- ```
-
- The list of pre-processed subsets is:
- - "20220301.de"
- - "20220301.en"
- - "20220301.fr"
- - "20220301.frr"
- - "20220301.it"
- - "20220301.simple"
-
  ### Supported Tasks and Leaderboards
 
  The dataset is generally used for Language Modeling.
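A minimal sketch of the updated usage, for reference: the `load_dataset` call is taken verbatim from the new README line in the diff above, while the final `print` is only an illustrative addition to inspect the result.

```python
from datasets import load_dataset

# Call copied from the updated README example: Swahili ("sw") Wikipedia
# from the dump dated 2022-01-20, loaded via the Tristan/wikipedia script.
ds = load_dataset("Tristan/wikipedia", language="sw", date="20220120")

# Inspect the splits and columns of the loaded dataset (illustrative only).
print(ds)
```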