# Dataset Card for The Stack

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** [Insert URL]
- **Leaderboard:** N/A
- **Point of Contact:** contact@bigcode-project.org

### Dataset Summary

The Stack contains over 3TB of permissively licensed source code files covering 30 programming languages, crawled from GitHub. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for code (code LLMs). The Stack serves as a pre-training dataset for creating code LLMs, i.e., code-generating AI systems that enable the completion and synthesis of code, both from other code and from natural language descriptions. Code LLMs can assist professional and citizen developers with developing new applications.
### Supported Tasks and Leaderboards

The Stack is a pre-training dataset for creating code LLMs. After large-scale pre-training, code LLMs can be used for a wide variety of downstream tasks such as code completion from natural language descriptions ([HumanEval](https://github.com/openai/human-eval), [MBPP](https://huggingface.co/datasets/mbpp)), generating documentation for program fragments, and auto-completion of code snippets ([HumanEval-Infilling](https://github.com/openai/human-eval-infilling)). However, these downstream evaluation benchmarks are outside the scope of The Stack.

### Languages

The following natural languages appear in the comments and docstrings from files in the dataset: EN, ZH, FR, PT, ES, RU, DE, KO, JA, UZ, IT, ID, RO, AR, FA, CA, HU, ML, NL, TR, TE, EL, EO, BN, LV, GL, PL, GU, CEB, IA, KN, SH, MK, UR, SV, LA, JKA, MY, SU, CS, MN. This kind of data is essential for applications such as documentation generation and natural-language-to-code translation.

The dataset contains 30 programming languages:

```
"assembly", "batchfile", "c++", "c", "c-sharp", "cmake", "css", "dockerfile",
"fortran", "go", "haskell", "html", "java", "javascript", "julia", "lua",
"makefile", "markdown", "perl", "php", "powershell", "python", "ruby", "rust",
"scala", "shell", "sql", "tex", "typescript", "visual-basic"
```

## Dataset Structure

### Data Instances

Each data instance corresponds to one file. The content of the file is in the `content` feature, and other features (`repository_name`, `licenses`, etc.) provide metadata. Note that a given file can appear in several different repositories that satisfy our safe-license criterion; if that is the case, only the first of these repositories (in alphabetical order) is shown for simplicity.

### Data Fields

- `content` (string): the content of the file.
- `repository_name` (string): name of the repository.
  If a file appears in several repositories that satisfy our license criterion, only the first repository (in alphabetical order) is shown.
- `licenses` (list of strings): list of licenses detected in the repository; all of them are "safe licenses".
- `path` (string): relative path of the file within the repository.
- `size` (integer): size of the uncompressed file.
- `lang` (string): the programming language.
- `avg_line_length` (float): the average line length of the file.
- `max_line_length` (integer): the maximum line length of the file.
- `alphanum_fraction` (float): the fraction of characters in the file that are alphabetic or numeric.

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

One of the challenges faced by researchers working on code LLMs is the lack of openness and transparency around the development of these systems. Most prior works described the high-level data collection process but did not release the training data. It is therefore difficult for other researchers to fully reproduce these models and to understand what kind of pre-training data leads to high-performing code LLMs. By releasing an open large-scale code dataset, we hope to make the training of code LLMs more reproducible.

### Source Data

#### Initial Data Collection and Normalization

220.92M active GitHub repository names were collected from the event archives published between January 1st, 2015 and March 31st, 2022 on [GHArchive](https://gharchive.org/). Only 137.36M of these repositories were public and accessible on GitHub; the others had been deleted by their owners. 51.76B files were downloaded from the public repositories on GitHub between November 2021 and June 2022, of which 5.28B were unique. The uncompressed size of all stored files is 92.36TB. The list of programming language extensions is taken from this [list](https://gist.github.com/ppisarczyk/43962d06686722d26d176fad46879d41) (also provided in Appendix C of the paper).
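The per-file statistics listed under Data Fields (`size`, `avg_line_length`, `max_line_length`, `alphanum_fraction`) can be recomputed directly from `content`. The sketch below shows one plausible set of definitions; the pipeline's exact conventions (e.g., whether `size` counts bytes and how trailing newlines are treated) are assumptions here, not confirmed by the paper:

```python
def file_stats(content: str) -> dict:
    """Recompute per-file metadata fields from a file's text content.

    Assumed definitions: size in UTF-8 bytes, line lengths in characters
    (excluding the newline), alphanumeric fraction over all characters.
    """
    lines = content.splitlines()
    line_lengths = [len(line) for line in lines] or [0]  # avoid div-by-zero
    alnum = sum(ch.isalnum() for ch in content)
    return {
        "size": len(content.encode("utf-8")),
        "avg_line_length": sum(line_lengths) / len(line_lengths),
        "max_line_length": max(line_lengths),
        "alphanum_fraction": alnum / len(content) if content else 0.0,
    }
```

Statistics like these are commonly used to filter out machine-generated or minified files (e.g., extreme `max_line_length`) before pre-training.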
Near-deduplication was applied in the pre-processing pipeline on top of exact deduplication. To find near-duplicates, MinHash signatures with 256 permutations were computed for all documents in linear time, and Locality Sensitive Hashing was used to find clusters of duplicates. Jaccard similarities were then computed inside these clusters, with a similarity threshold of 0.85, to remove false positives. Roughly 40% of permissively licensed files were (near-)duplicates. See section 3 of the paper for further details.

The following are not stored:
- Files that cannot contribute to training code: binary files, empty files, and files that could not be decoded
- Files larger than 1MB
- Files with the extensions listed in Appendix B of the paper

##### License detection

Permissive licenses have minimal restrictions on how the software can be copied, modified, and redistributed. These include MIT-0, MIT, MIT-feh, Apache-2.0, BSD-3-Clause, BSD-3-Clause-Clear, BSD-3-Clause-No-Nuclear-License-2014, BSD-2-Clause, CC0-1.0, EPL-1.0, MPL-2.0, Unlicense, ISC, Artistic-2.0, deprecated\_LGPL-3.0+, deprecated\_LGPL-2.1+, ECL-2.0, SHL-0.51, and MPL-2.0-no-copyleft-exception.

GHArchive contained the license information for approximately 12% of the collected repositories. For the remaining repositories, [go-license-detector](https://github.com/src-d/go-license-detector) was run to detect the most likely SPDX license identifier. The detector did not detect a license for ~81% of the repositories, in which case the repository was excluded from the dataset. A file was included in the safe-license dataset if at least one of the repositories containing the file had a permissive license.

#### Who are the source language producers?

The source (code) language producers are users of GitHub that created unique repository names between January 1st, 2015, and March 31st, 2022.
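The near-deduplication step described above can be illustrated with a small self-contained sketch of MinHash signatures and Jaccard estimation. This is a simplification of the actual pipeline: it uses salted hashes in place of true permutations, omits the LSH banding stage, and all helper names are made up for illustration:

```python
import hashlib
import re


def shingles(text: str, k: int = 3) -> set:
    """Tokenize into words and build the set of k-word shingles."""
    tokens = re.findall(r"\w+", text.lower())
    return {" ".join(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}


def minhash_signature(shingle_set: set, num_perm: int = 256) -> list:
    """256 salted hash functions stand in for 256 random permutations;
    each signature slot keeps the minimum hash over all shingles."""
    return [
        min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(),
                "big",
            )
            for s in shingle_set
        )
        for seed in range(num_perm)
    ]


def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    """The fraction of matching slots is an unbiased Jaccard estimate."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)


def exact_jaccard(a: set, b: set) -> float:
    """Exact Jaccard similarity, used to verify candidate duplicate pairs."""
    return len(a & b) / len(a | b)
```

With 256 slots the estimator's standard error is at most roughly 0.03, which is why the pipeline still computes exact Jaccard similarities inside each candidate cluster (against the 0.85 threshold) to remove false positives.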
### Personal and Sensitive Information

The released dataset may contain sensitive information such as emails, IP addresses, and API/SSH keys that have previously been published to public repositories on GitHub. Deduplication has helped to reduce the amount of sensitive data that may exist. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling personal information. Complaints, removal requests, and "do not contact" requests can be sent to contact@bigcode-project.org.

The PII pipeline for this dataset is still a work in progress (see this [issue](https://github.com/bigcode-project/admin/issues/9) for updates). Researchers who wish to contribute to the anonymization pipeline of the project can apply to join [here](https://www.bigcode-project.org/docs/about/join/). Developers with source code in the dataset can request to have it removed [here](https://www.bigcode-project.org/docs/about/ip/) (proof of code contribution is required).

## Considerations for Using the Data

### Social Impact of Dataset

The Stack is an output of the BigCode Project collaboration. BigCode aims to be responsible by design and by default. The project is conducted in the spirit of Open Science, focused on responsible data governance and development for code LLMs. The Stack increases awareness of the collection of permissively licensed open-source code data for pretraining code LLMs, and is released with a permissive open-source license to enable access, reproducibility, and transparency in the research community. Work to de-risk and improve on the implementation of ethical best practices is conducted in various working groups.
The Stack aims to improve the accessibility of code LLMs, enabling people from diverse backgrounds to write higher-quality code and develop low-code applications. Mission-critical legacy software could become easier to maintain as professional developers are guided by code-generating applications on how to write robust code in unfamiliar programming languages. While the intended social impact is positive, the increased accessibility of code LLMs comes with certain risks, such as over-reliance on the generated code and long-term effects on the software development job market.

BigCode working groups have explored topics such as licensing (including copyleft and the intended use of permissively licensed code), attribution of generated code to original code, rights to restrict processing, the inclusion of Personally Identifiable Information (PII), and risks of malicious code, among other topics. The working groups discuss these challenges and potential solutions that can be implemented at scale. This work is ongoing as of October 20th, 2022.

The code collected from GitHub does not contain demographic information or proxy information about demographics. However, it is not without risks: the comments within the code may contain harmful language, which could be learned by the models, and the trained models may also memorize personal information and credentials. The broader impact and hazard analysis relating to Evaluating Large Language Models Trained on Code can be found in section 7 of this [paper](https://arxiv.org/pdf/2107.03374v2.pdf). Further discussion of risk assessments for code-synthesis large language models can be found in section 4 of this [paper](https://arxiv.org/abs/2207.14157).

### Discussion of Biases

Widely adopted programming languages are preferred over niche languages for which there is little data. This topic is discussed [here](https://github.com/bigcode-project/admin/issues/11).
Some programming languages, such as SQL, Batchfile, and TypeScript, are less likely to be permissively licensed (4% vs. the average of 10%). This may result in a biased representation of those languages. Permissively licensed files also tend to be longer.

Roughly 40 natural languages are present in docstrings and comments, with English being the most prevalent. In Python files, English makes up ~96% of the natural language in the dataset. For further data analysis of The Stack, see this [repo](https://github.com/bigcode-project/bigcode-analysis).

### Other Known Limitations

One of the current limitations of the BigCode dataset is that scraped HTML for websites may not be compliant with the Web Content Accessibility Guidelines ([WCAG](https://www.w3.org/WAI/standards-guidelines/wcag/)). Generated HTML code may therefore introduce web accessibility issues.

The training dataset could contain malicious code, and/or a model trained on it could be used to generate malware or ransomware.

To the best of our knowledge, all files contained in the dataset are licensed with one of the permissive licenses (see the list in [Licensing Information](#licensing-information)). The accuracy of the license attribution is limited by the accuracy of GHArchive and go-license-detector. Any mistakes should be reported to the BigCode Project for review and follow-up as needed.

## Additional Information

### Dataset Curators

1. Harm de Vries, ServiceNow Research, harm.devries@servicenow.com
2. Leandro von Werra, Hugging Face, leandro@huggingface.co

### Licensing Information

As all files are already licensed, the dataset itself carries no additional license.
Permissive Licenses:

- MIT-0
- MIT
- MIT-feh
- Apache-2.0
- BSD-3-Clause
- BSD-3-Clause-Clear
- BSD-3-Clause-No-Nuclear-License-2014
- BSD-2-Clause
- CC0-1.0
- EPL-1.0
- MPL-2.0
- Unlicense
- ISC
- Artistic-2.0
- deprecated_LGPL-3.0+
- deprecated_LGPL-2.1+
- ECL-2.0
- SHL-0.51
- MPL-2.0-no-copyleft-exception

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]