---
license: mit
tags:
- audio
---
# SNAC 🍿
Multi-**S**cale **N**eural **A**udio **C**odec (SNAC) compresses audio into discrete codes at a low bitrate.

This model was primarily trained on music data, and its recommended use case is music (and SFX) generation. See below for other pretrained models.

GitHub repository: https://github.com/hubertsiuzdak/snac/
## Overview
SNAC encodes audio into hierarchical tokens, similar to SoundStream, EnCodec, and DAC. However, SNAC introduces a simple change: coarse tokens are sampled less frequently, covering a broader time span.
This model compresses 44 kHz audio into discrete codes at a 2.6 kbps bitrate. It uses 4 RVQ levels with token rates of 14, 29, 57, and
115 Hz.
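As a rough sanity check on the quoted bitrate, the token rates can be summed and multiplied by the bits per token. This is a sketch: it assumes each RVQ level uses a 4096-entry codebook (12 bits per token), which is not stated in this card.

```python
# Rough bitrate estimate from the token rates quoted above.
# Assumption: each RVQ level uses a 4096-entry codebook (12 bits/token);
# the codebook size is not stated in this card.
token_rates_hz = [14, 29, 57, 115]  # one rate per RVQ level
bits_per_token = 12                 # log2(4096), assumed

bitrate_bps = sum(token_rates_hz) * bits_per_token
print(bitrate_bps)                       # 2580
print(f"{bitrate_bps / 1000:.1f} kbps")  # 2.6 kbps
```

Under that assumption the four levels together yield 215 tokens per second, which lands at roughly the 2.6 kbps figure above.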
## Pretrained models
Currently, all models support only a single audio channel (mono).
| Model | Bitrate | Sample Rate | Params | Recommended use case |
|-----------------------------------------------------------------------------|-----------|-------------|--------|--------------------------|
| [hubertsiuzdak/snac_24khz](https://huggingface.co/hubertsiuzdak/snac_24khz) | 0.98 kbps | 24 kHz | 19.8 M | 🗣️ Speech |
| [hubertsiuzdak/snac_32khz](https://huggingface.co/hubertsiuzdak/snac_32khz) | 1.9 kbps | 32 kHz | 54.5 M | 🎸 Music / Sound Effects |
| hubertsiuzdak/snac_44khz (this model) | 2.6 kbps | 44 kHz | 54.5 M | 🎸 Music / Sound Effects |
## Usage
Install it using:
```bash
pip install snac
```
To encode (and decode) audio with SNAC in Python, use the following code:
```python
import torch
from snac import SNAC
model = SNAC.from_pretrained("hubertsiuzdak/snac_44khz").eval().cuda()
audio = torch.randn(1, 1, 44100).cuda() # B, 1, T
with torch.inference_mode():
codes = model.encode(audio)
audio_hat = model.decode(codes)
```
You can also encode and reconstruct in a single call:
```python
with torch.inference_mode():
audio_hat, codes = model(audio)
```
โ ๏ธ Note that `codes` is a list of token sequences of variable lengths, each corresponding to a different temporal
resolution.
```python
>>> [code.shape[1] for code in codes]
[16, 32, 64, 128]
```
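From those per-level lengths you can estimate how compact the representation is. The arithmetic below is a sketch; as above, the 12 bits per token figure assumes a 4096-entry codebook, which is not stated in this card.

```python
# Total tokens across the four levels for the example input above,
# and the compressed size this implies under an assumed 12-bit codebook.
code_lengths = [16, 32, 64, 128]  # tokens per level, from the output above

total_tokens = sum(code_lengths)      # 240 tokens
total_bytes = total_tokens * 12 // 8  # 360 bytes, assuming 12 bits/token
print(total_tokens, total_bytes)
```

Note how each level holds twice as many tokens as the one above it, reflecting the doubling of temporal resolution between RVQ levels.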
## Acknowledgements
Module definitions are adapted from the [Descript Audio Codec](https://github.com/descriptinc/descript-audio-codec).