arXiv:2404.05728

A Large-Scale Exploration of μ-Transfer

Published on Apr 8, 2024

Abstract

Large artificial neural networks have become a mainstay of language, vision, and audio processing and synthesis, yet their initializations and learning rates are often set in an unsophisticated fashion, due to the high cost of hyperparameter sweeps at scale. The μ-Parameterization (μP) offers a potential solution to this challenge, yielding scaling rules for model initialization and learning rates while reportedly enabling zero-shot hyperparameter transfer from small to large models. Despite its evident promise, the μP method is not yet widely adopted, perhaps due to higher implementation complexity, many variations, or complex theoretical background. This work investigates μP empirically, focusing on the ubiquitous transformer architecture, and aims to answer a simple question: does μ-Transfer yield optimal learning rates in practice? Studying models of up to 10B parameters and training budgets of up to 190B tokens, we find μ-Transfer works as intended for the majority of important cases, yet also identify a few cases where it may not.
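
To make the abstract's "scaling rules for model initialization and learning rates" concrete, below is a minimal, illustrative sketch of μP-style scaling for a transformer's hidden (matrix-like) weights under Adam-style optimization. The function names and base-width constants are hypothetical, not taken from the paper; the paper and the reference μP implementation define the full rule set (including embedding, output, and attention-logit scaling), which may differ in detail from this sketch.

```python
# Illustrative sketch (not the paper's code): muP-style scaling of init std
# and Adam learning rate for hidden weights as model width grows.
import math
import torch.nn as nn

def mup_hidden_linear(fan_in: int, fan_out: int, base_fan_in: int,
                      base_std: float = 0.02) -> nn.Linear:
    """Hidden (matrix-like) layer: init std shrinks like 1/sqrt(fan_in)
    relative to the std tuned at the base width."""
    layer = nn.Linear(fan_in, fan_out, bias=False)
    std = base_std * math.sqrt(base_fan_in / fan_in)
    nn.init.normal_(layer.weight, mean=0.0, std=std)
    return layer

def mup_adam_lr(base_lr: float, fan_in: int, base_fan_in: int) -> float:
    """Hidden-weight Adam learning rate scales like 1/fan_in as width grows,
    so a rate tuned on a narrow proxy model can be transferred zero-shot."""
    return base_lr * base_fan_in / fan_in

# Example: transfer a learning rate swept at width 256 to a width-4096 model.
base_width, big_width = 256, 4096
tuned_lr_small = 3e-3                              # found cheaply on the small model
lr_big = mup_adam_lr(tuned_lr_small, big_width, base_width)
layer_big = mup_hidden_linear(big_width, 4 * big_width, base_width)
```

The point of the rules is the last three lines: hyperparameters are swept once on a small proxy model and then mapped deterministically to the large model, which is the "zero-shot hyperparameter transfer" the paper evaluates empirically.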
