---
datasets:
- pile-of-law/pile-of-law
language:
- en
library_name: transformers
license: other
---

# Model Card for budgetlongformer-diverse

Legal pretrained model trained with the Replaced Token Detection (RTD) task on the Pile-of-Law dataset, with a context window of 4096 tokens.

# Model Details

## Model Description

Legal pretrained model trained with the ELECTRA objective on the Pile-of-Law dataset, with a context window of 4096 tokens.

- **Developed by:** Joel Niklaus, Daniele Giofré
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** other
- **Resources for more information:**
  - [Associated Paper](https://arxiv.org/abs/2211.17135)

# Uses

## Direct Use

The model can be used directly to generate embeddings, for example for similarity search. It likely works best on US-focused legal data.

## Downstream Use

The model can be fine-tuned for any NLU task or, when coupled with a decoder, also for generative tasks. In our experiments on summarization with the BillSum dataset, we found that random initialization of the decoder improved performance.

## Out-of-Scope Use

This model will likely perform worse on non-legal text, on non-English text, and on data originating from outside the US.

# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Considerations about the training dataset

### Social Impact of Dataset

As described in the dataset card, the internal variation of the dataset allows contextual privacy rules to be learned. If robust mechanisms for this are developed, they can be applied more broadly. As discussed in "On the Opportunities and Risks of Foundation Models", legal language models can help improve access to justice in various ways, but they can also be used in potentially harmful ways. While such models are not ready for most production environments and are the subject of significant research, we ask that model users and model creators using this model, particularly when creating generative models (e.g., by attaching a decoder), consider the impacts of their model and make a good-faith effort to weigh the benefits against the harms of their method. Like our license, the training dataset license also restricts commercial usage.

## Discussion of Biases

The data reflects the biases of governments and courts. As discussed in the [Pile of Law](https://arxiv.org/abs/2207.00220) paper, these biases can be significant, though more recent text will likely be less overtly toxic. Please consult the above statement and keep it in mind when using and/or modifying this model, implementing responsible use.

## Recommendations

As with any large LM, there is a risk of it producing biased or unfair output. Researchers using the model should put in place appropriate safeguards to identify biased and/or toxic language.

# Training Details

## Training Data

The diverse model was trained on caselaw (“Court Listener Opinions” & “Court Listener Docket Entry Documents”), legislation (“US Code”, “State Codes” & “EURLEX”), and contracts (“Atticus Contracts” & “EDGAR Contracts”) from the public Pile-of-Law dataset. To balance the training data, we limited the number of documents to 500K (this affects Court Listener Opinions, Court Listener Docket Entry Documents, and EDGAR Contracts).
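The individual Pile-of-Law subsets can be inspected directly from the Hugging Face Hub. The sketch below is illustrative, not from the paper: the config name `courtlistener_opinions` and the `text` field are assumptions based on the dataset card, and depending on your `datasets` version the script-based dataset may additionally require `trust_remote_code=True`.

```python
# Minimal sketch: stream one of the Pile-of-Law subsets used for pretraining.
# "courtlistener_opinions" is an assumed config name; see the dataset card at
# https://huggingface.co/datasets/pile-of-law/pile-of-law for the exact names.
from datasets import load_dataset

subset = load_dataset(
    "pile-of-law/pile-of-law",
    "courtlistener_opinions",  # assumed config name
    split="train",
    streaming=True,  # the subsets are large; streaming avoids a full download
)

# Each record is assumed to expose the raw document text used for pretraining.
for i, doc in enumerate(subset):
    print(doc["text"][:200].replace("\n", " "))
    if i == 2:
        break
```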
## Training Procedure

### Preprocessing

More information needed

### Speeds, Sizes, Times

More information needed

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

We tested the model on the BillSum and PubMed summarization datasets, achieving state-of-the-art ROUGE scores for the respective parameter sizes as of August 2022.

### Factors

More information needed

### Metrics

We followed the standard in research on summarization datasets and used ROUGE-1, ROUGE-2, and ROUGE-L.

## Results

More information needed

# Environmental Impact

Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 4 x 16GB NVIDIA V100
- **Hours used:** 144
- **Cloud Provider:** AWS
- **Compute Region:** US East
- **Carbon Emitted:** 15.98 kg CO₂eq

# Technical Specifications

## Model Architecture and Objective

We used a Longformer attention window of size 256 for both the generator and the discriminator. The generator model was three times smaller than the discriminator model. In particular, we reduced the generator’s depth (number of hidden layers) rather than its width (embedding size, hidden size, and intermediate size). We used an MLM probability of 25% for the generator.

## Compute Infrastructure

Amazon SageMaker Notebooks.

### Hardware

4 x 16GB NVIDIA V100

### Software

transformers

# Citation

**BibTeX:**

```bibtex
@misc{niklaus2022budgetlongformer,
  title={BudgetLongformer: Can we Cheaply Pretrain a SotA Legal Language Model From Scratch?},
  author={Joel Niklaus and Daniele Giofré},
  year={2022},
  eprint={2211.17135},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

# Model Card Authors

Joel Niklaus, Daniele Giofré
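# How to Get Started with the Model

A minimal sketch of generating document embeddings for similarity search, as described under "Direct Use" above. The repo id below is a placeholder for this model's actual Hub id, and mean pooling is one reasonable (but not prescribed) way to aggregate token representations into a document embedding.

```python
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "joelniklaus/budgetlongformer-diverse"  # placeholder: use this model's actual Hub id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
model.eval()

texts = [
    "The court held that the non-compete clause was unenforceable.",
    "The tribunal found the restrictive covenant void.",
]
inputs = tokenizer(
    texts, padding=True, truncation=True, max_length=4096, return_tensors="pt"
)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)

# Mean-pool over non-padding tokens to get one embedding per document.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two documents, e.g. for similarity search.
sim = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"cosine similarity: {sim.item():.3f}")
```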