Collections
Collections including paper arxiv:2404.14619

- OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
  Paper • 2404.14619 • Published • 127
- Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
  Paper • 2303.15647 • Published • 4
- Hyper-X: A Unified Hypernetwork for Multi-Task Multilingual Transfer
  Paper • 2205.12148 • Published • 2
- No More Adam: Learning Rate Scaling at Initialization is All You Need
  Paper • 2412.11768 • Published • 41

- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
  Paper • 2404.14219 • Published • 256
- OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
  Paper • 2404.14619 • Published • 127
- Gemini Goes to Med School: Exploring the Capabilities of Multimodal Large Language Models on Medical Challenge Problems & Hallucinations
  Paper • 2402.07023 • Published • 4
- NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment
  Paper • 2405.01481 • Published • 29

- Rho-1: Not All Tokens Are What You Need
  Paper • 2404.07965 • Published • 90
- VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time
  Paper • 2404.10667 • Published • 18
- Instruction-tuned Language Models are Better Knowledge Learners
  Paper • 2402.12847 • Published • 26
- DoRA: Weight-Decomposed Low-Rank Adaptation
  Paper • 2402.09353 • Published • 26

- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 83
- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
  Paper • 2403.05530 • Published • 62
- StarCoder: may the source be with you!
  Paper • 2305.06161 • Published • 30
- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
  Paper • 2312.15166 • Published • 57

- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 16
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12