Mixture-of-Agents Enhances Large Language Model Capabilities Paper • 2406.04692 • Published Jun 7, 2024
Transformers Can Do Arithmetic with the Right Embeddings Paper • 2405.17399 • Published May 27, 2024
FLawN-T5: An Empirical Examination of Effective Instruction-Tuning Data Mixtures for Legal Reasoning Paper • 2404.02127 • Published Apr 2, 2024
LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models Paper • 2308.11462 • Published Aug 20, 2023
LegalLens: Leveraging LLMs for Legal Violation Identification in Unstructured Text Paper • 2402.04335 • Published Feb 6, 2024
Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding Paper • 2309.08168 • Published Sep 15, 2023
Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models Paper • 2307.14430 • Published Jul 26, 2023
Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time Paper • 2310.17157 • Published Oct 26, 2023
Resolving Legalese: A Multilingual Exploration of Negation Scope Resolution in Legal Documents Paper • 2309.08695 • Published Sep 15, 2023
Data Selection for Language Models via Importance Resampling Paper • 2302.03169 • Published Feb 6, 2023
An Explanation of In-context Learning as Implicit Bayesian Inference Paper • 2111.02080 • Published Nov 3, 2021
Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation Paper • 2204.00570 • Published Apr 1, 2022
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning Paper • 2106.09226 • Published Jun 17, 2021