- One Wide Feedforward is All You Need
  Paper • 2309.01826 • Published • 31
- Gated recurrent neural networks discover attention
  Paper • 2309.01775 • Published • 7
- FLM-101B: An Open LLM and How to Train It with $100K Budget
  Paper • 2309.03852 • Published • 44
- Large Language Models as Optimizers
  Paper • 2309.03409 • Published • 75
Collections including paper arxiv:2309.03409
- MADLAD-400: A Multilingual And Document-Level Large Audited Dataset
  Paper • 2309.04662 • Published • 22
- Neurons in Large Language Models: Dead, N-gram, Positional
  Paper • 2309.04827 • Published • 16
- Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs
  Paper • 2309.05516 • Published • 9
- DrugChat: Towards Enabling ChatGPT-Like Capabilities on Drug Molecule Graphs
  Paper • 2309.03907 • Published • 10
- Large Language Models as Optimizers
  Paper • 2309.03409 • Published • 75
- FLAME: Factuality-Aware Alignment for Large Language Models
  Paper • 2405.01525 • Published • 25
- LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report
  Paper • 2405.00732 • Published • 119
- How Do Large Language Models Acquire Factual Knowledge During Pretraining?
  Paper • 2406.11813 • Published • 30
- Large Language Models as Optimizers
  Paper • 2309.03409 • Published • 75
- Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
  Paper • 2404.02258 • Published • 104
- OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework
  Paper • 2404.14619 • Published • 126
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
  Paper • 2404.14219 • Published • 254