dslee2601 committed
Commit 9642163 · 1 Parent(s): 4ad0c19

copy paste
.fig/artifact-dac-decoding without overlap.png ADDED
.gitignore ADDED
@@ -0,0 +1,4 @@
+ token.txt
+ __pycache__/
+ tokens.pt
+ out.wav
.sample_sound/jazz_swing.wav ADDED
Binary file (882 kB)
README.md CHANGED
@@ -1,3 +1,138 @@
  ---
  license: mit
+ tags:
+ - DAC
+ - Descript Audio Codec
+ - PyTorch
  ---
+
+ # Descript Audio Codec (DAC)
+ DAC is a state-of-the-art audio tokenizer that improves upon previous tokenizers such as SoundStream and EnCodec.
+
+ This model card provides an easy-to-use API for a *pretrained DAC* [1] for 24 kHz audio, whose backbone and pretrained weights come from [its original repository](https://github.com/descriptinc/descript-audio-codec). With this API, you can encode and decode with a single line of code, on either CPU or GPU. Furthermore, it supports chunk-based processing for memory-efficient encoding and decoding, which is especially important on GPU.
+
+ ### Model variations
+ There are three model variants, depending on the input audio sampling rate.
+
+ | Model | Input audio sampling rate |
+ | ------------------ | ----------------- |
+ | [`hance-ai/descript-audio-codec-44khz`](https://huggingface.co/hance-ai/descript-audio-codec-44khz) | 44.1 kHz |
+ | [`hance-ai/descript-audio-codec-24khz`](https://huggingface.co/hance-ai/descript-audio-codec-24khz) | 24 kHz |
+ | [`hance-ai/descript-audio-codec-16khz`](https://huggingface.co/hance-ai/descript-audio-codec-16khz) | 16 kHz |
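+
+ Each variant is loaded the same way (see Usage below); only the repository id changes. For example, a sketch for the 44.1 kHz variant:
+ ```python
+ from transformers import AutoModel
+
+ model = AutoModel.from_pretrained('hance-ai/descript-audio-codec-44khz', trust_remote_code=True)
+ ```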
+
+ # Usage
+
+ ### Load
+ ```python
+ from transformers import AutoModel
+
+ # device setting
+ device = 'cpu'  # or 'cuda:0'
+
+ # load
+ model = AutoModel.from_pretrained('hance-ai/descript-audio-codec-24khz', trust_remote_code=True)
+ model.to(device)
+ ```
+
+ ### Encode
+ ```python
+ audio_filename = 'path/example_audio.wav'
+ zq, s = model.encode(audio_filename)
+ ```
+ `zq` holds the quantized continuous embeddings with shape (1, embedding_dim, token_length), and `s` is the discrete token sequence with shape (1, num_RVQ_codebooks, token_length).
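+
+ As a concrete example, for a roughly 10 s clip at 24 kHz (shapes taken from the test notebook), a quick sanity check looks like:
+ ```python
+ # both outputs share the same token-length axis
+ assert zq.shape[-1] == s.shape[-1]
+ print(zq.shape)  # torch.Size([1, 1024, 750]) -- (1, embedding_dim, token_length)
+ print(s.shape)   # torch.Size([1, 32, 750])   -- (1, num_RVQ_codebooks, token_length)
+ ```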
+
+ ### Decode
+ ```python
+ # decoding from `zq`
+ waveform = model.decode(zq=zq)  # (1, 1, audio_length); the output has a mono channel.
+
+ # decoding from `s`
+ waveform = model.decode(s=s)  # (1, 1, audio_length); the output has a mono channel.
+ ```
+
+ ### Save a waveform as an audio file
+ ```python
+ model.waveform_to_audiofile(waveform, 'out.wav')
+ ```
+
+ ### Save and load tokens
+ ```python
+ model.save_tensor(s, 'tokens.pt')
+ loaded_s = model.load_tensor('tokens.pt')
+ ```
+
+ # Runtime
+
+ To give you a brief idea, the following table reports the average runtime, in seconds, for encoding and decoding 10 s of audio, measured on an Intel Core i9-11900K CPU and an RTX 3060 GPU.
+
+ | Task | CPU | GPU |
+ |-----------------|---------|---------|
+ | Encoding | 6.71 | 0.19 |
+ | Decoding | 15.4 | 0.31 |
+
+ The decoding process takes longer simply because the decoder is larger than the encoder.
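+
+ A minimal sketch of how such a measurement can be reproduced (the timing loop below is illustrative, not part of the repository):
+ ```python
+ import time
+
+ start = time.perf_counter()
+ zq, s = model.encode(audio_filename)  # encode() returns CPU tensors, so any GPU work has finished here
+ print(f'encoding took {time.perf_counter() - start:.2f} s')
+ ```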
99
+
100
+
101
+
102
+
103
+
104
+
105
+
106
+
107
+
108
+ # Technical Discussion
109
+
110
+ ### Chunk-based Processing
111
+ It's introduced for memory-efficient processing for both encoding and decoding.
112
+ For encoding, we simply chunk an audio into N chunks and process them iteratively.
113
+ Similarly, for decoding, we chunk a token set into M chunks of token subsets and process them iteratively.
114
+ However, the decoding process with naive chunking causes an artifact in the decoded audio.
115
+ That is because the decoder reconstructs audio given multiple neighboring tokens (i.e., multiple neighboring tokens for a segment of audio) rather than a single token for a segment of audio.
116
+
117
+ To tackle the problem, we introduce overlap between the chunks in the decoding, parameterized by `decoding_overlap_rate` in the model. By default, we introduce 10% of overlap between the chunks. Then, two subsequent chunks reuslt in two segments of audio with 10% overlap, and the overlap is averaged out for smoothing.
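+
+ The merging step mirrors `_chunk_decoding` in `model.py`; here is a minimal sketch of the overlap-averaging idea (the tensors and sizes are illustrative):
+ ```python
+ import torch
+
+ # two decoded waveform chunks of shape (b, 1, chunk_len) that overlap by `ov` samples
+ prev, new, ov = torch.randn(1, 1, 1000), torch.randn(1, 1, 1000), 100
+
+ # average the overlapping region; keep the non-overlapping parts as-is
+ overlap = (prev[:, :, -ov:] + new[:, :, :ov]) / 2
+ merged = torch.cat((prev[:, :, :-ov], overlap, new[:, :, ov:]), dim=-1)  # (1, 1, 1900)
+ ```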
+
+ The following figure compares reconstructed audio with and without the overlapping.
+ <p align="center">
+ <img src=".fig/artifact-dac-decoding without overlap.png" alt="" width=50%>
+ </p>
+
+ # References
+ [1] Kumar, Rithesh, et al. "High-Fidelity Audio Compression with Improved RVQGAN." Advances in Neural Information Processing Systems 36 (2024).
+
+ <!-- contributions
+ - chunk processing
+ - add device parameter in the test notebook
+ -->
model.py ADDED
@@ -0,0 +1,212 @@
+ from typing import Union
+
+ import numpy as np
+ import torch
+ import torchaudio
+ import torch.nn as nn
+ import torchaudio.transforms as transforms
+ from transformers import PretrainedConfig, PreTrainedModel
+
+ import dac
+ from audiotools import AudioSignal
+
+ from utils import freeze
+
+
+ class DACConfig(PretrainedConfig):
+     model_type = 'dac'
+
+     def __init__(self,
+                  model_type_by_sampling_freq: str = '44khz',
+                  encoding_chunk_size_in_sec: int = 1,
+                  decoding_chunk_rate: float = 0.1,
+                  decoding_overlap_rate: float = 0.1,
+                  **kwargs):
+         """
+         Initializes the config object.
+         Args:
+             model_type_by_sampling_freq (str, optional): The model type based on the sampling frequency. Defaults to '44khz'. Choose among ['44khz', '24khz', '16khz'].
+             encoding_chunk_size_in_sec (int, optional): The size of the encoding chunk in seconds. Defaults to 1.
+             decoding_chunk_rate (float, optional): The decoding chunk rate. Must be between 0 and 1. Defaults to 0.1.
+             decoding_overlap_rate (float, optional): The decoding overlap rate. Must be between 0 and 1. Defaults to 0.1.
+             **kwargs: Additional keyword arguments.
+         Raises:
+             AssertionError: If the model_type_by_sampling_freq is not one of ['44khz', '24khz', '16khz'].
+             AssertionError: If the decoding_chunk_rate is not between 0 and 1.
+             AssertionError: If the decoding_overlap_rate is not between 0 and 1.
+         """
+         super().__init__(**kwargs)
+         self.model_type_by_sampling_freq = model_type_by_sampling_freq
+         self.encoding_chunk_size_in_sec = encoding_chunk_size_in_sec
+         self.decoding_chunk_rate = decoding_chunk_rate
+         self.decoding_overlap_rate = decoding_overlap_rate
+
+         assert model_type_by_sampling_freq.lower() in ['44khz', '24khz', '16khz']
+         assert 0 < decoding_chunk_rate <= 1.0, '`decoding_chunk_rate` must be between 0 and 1.'
+         assert 0 <= decoding_overlap_rate < 1.0, '`decoding_overlap_rate` must be between 0 and 1.'
+
+
+ class DAC(PreTrainedModel):
+     config_class = DACConfig
+
+     def __init__(self, config):
+         super().__init__(config)
+
+         self.model_type_by_sampling_freq = config.model_type_by_sampling_freq.lower()
+         self.model_type_by_sampling_freq_int = {'44khz': 44100, '24khz': 24000, '16khz': 16000}[self.model_type_by_sampling_freq]
+         self.encoding_chunk_size_in_sec = config.encoding_chunk_size_in_sec
+         self.decoding_chunk_rate = config.decoding_chunk_rate
+         self.decoding_overlap_rate = config.decoding_overlap_rate
+
+         dac_path = dac.utils.download(model_type=self.model_type_by_sampling_freq)
+         self.dac = dac.DAC.load(dac_path)
+         self.dac.eval()
+         freeze(self.dac)
+
+         self.downsampling_rate = int(np.prod(self.dac.encoder_rates))  # e.g., 512 for the 44khz model
+
+     def load_audio(self, filename: str):
+         waveform, sample_rate = torchaudio.load(filename)  # waveform: (n_channels, length); sample_rate: const.
+         return waveform, sample_rate
+
+     def resample_audio(self, waveform: torch.FloatTensor, orig_sr: int, target_sr: int):
+         """
+         - sr: sampling rate
+         - waveform: (n_channels, length)
+         """
+         if orig_sr == target_sr:
+             return waveform
+
+         converter = transforms.Resample(orig_freq=orig_sr, new_freq=target_sr)
+         waveform = converter(waveform)  # (n_channels, new_length)
+         return waveform  # (n_channels, new_length)
+
+     def to_mono_channel(self, waveform: torch.FloatTensor):
+         """
+         - waveform: (n_channels, length)
+         """
+         n_channels = waveform.shape[0]
+         if n_channels > 1:
+             waveform = torch.mean(waveform, dim=0, keepdim=True)  # (1, length)
+         return waveform  # (1, length)
+
+     @torch.no_grad()
+     def encode(self, audio_fname: str):
+         self.eval()
+
+         waveform, sr = self.load_audio(audio_fname)
+         waveform = self.resample_audio(waveform, orig_sr=sr, target_sr=self.model_type_by_sampling_freq_int)
+         sr = self.model_type_by_sampling_freq_int
+         waveform = self.to_mono_channel(waveform)  # DAC accepts a mono channel only.
+
+         zq, s = self._chunk_encoding(waveform, sr)
+         return zq, s
+
+     def _chunk_encoding(self, waveform: torch.FloatTensor, sr: int):
+         # TODO: can I make it parallel?
+         """
+         waveform: (c l)
+         """
+         x = waveform  # brief varname
+         x = x.unsqueeze(1)  # (b 1 l); add a null batch dim
+         chunk_size = int(self.encoding_chunk_size_in_sec * sr)
+
+         # adjust `chunk_size` to prevent any padding in `dac.preprocess`, which causes a gap between the mini-batches in the resulting audio.
+         remainder = chunk_size % self.dac.hop_length
+         chunk_size = chunk_size - remainder
+
+         # process
+         zq_list, s_list = [], []
+         audio_length = x.shape[-1]
+         for start in range(0, audio_length, chunk_size):
+             end = start + chunk_size
+             chunk = x[:, :, start:end]
+             chunk = self.dac.preprocess(chunk, sr)
+             zq, s, _, _, _ = self.dac.encode(chunk.to(self.device))
+             zq = zq.cpu()
+             s = s.cpu()
+             """
+             "zq" : Tensor[B x D x T]
+                 Quantized continuous representation of input
+                 = summation of all the residual quantized vectors across every rvq level
+                 = E(x) = z = \sum_n^N{zq_n} where N is the number of codebooks
+             "s" : Tensor[B x N x T]
+                 Codebook indices for each codebook
+                 (quantized discrete representation of input)
+                 *first element in the N dimension = first RVQ level
+             """
+             zq_list.append(zq)
+             s_list.append(s)
+             torch.cuda.empty_cache()
+
+         zq = torch.cat(zq_list, dim=2).float()  # (1, d, length)
+         s = torch.cat(s_list, dim=2).long()  # (1, n_rvq, length)
+
+         return zq, s
+
+     @torch.no_grad()
+     def decode(self, *, zq: Union[torch.FloatTensor, None] = None, s: Union[torch.IntTensor, None] = None):
+         """
+         zq: (b, d, length)
+         s: (b, n_rvq, length)
+         """
+         if zq is None and s is None:
+             raise ValueError('Either `zq` or `s` must be provided.')
+         self.eval()
+
+         if zq is None:
+             zq = self.code_to_zq(s)
+         waveform = self._chunk_decoding(zq)  # (b, 1, length); the output always has a mono channel.
+
+         return waveform
+
+     def _chunk_decoding(self, zq: torch.FloatTensor):
+         """
+         zq: (b, d, length)
+         """
+         length = zq.shape[-1]
+         chunk_size = int(self.decoding_chunk_rate * length)
+         overlap_size = round(self.decoding_overlap_rate * chunk_size)  # overlap size in terms of token length
+         overlap_size_in_data_space = round(overlap_size * self.downsampling_rate)
+         waveform_concat = None
+         for start in range(0, length, chunk_size - overlap_size):
+             end = start + chunk_size
+             chunk = zq[:, :, start:end]  # (b, d, chunk_size)
+             waveform = self.dac.decode(chunk.to(self.device))  # (b, 1, chunk_size*self.downsampling_rate)
+             waveform = waveform.cpu()
+
+             if waveform_concat is None:
+                 waveform_concat = waveform.clone()
+             else:
+                 if self.decoding_overlap_rate != 0.:
+                     prev_x = waveform_concat[:, :, :-overlap_size_in_data_space]
+                     rest_of_new_x = waveform[:, :, overlap_size_in_data_space:]
+                     overlap_x_from_prev_x = waveform_concat[:, :, -overlap_size_in_data_space:]  # (b, 1, overlap_size_in_data_space)
+                     overlap_x_from_new_x = waveform[:, :, :overlap_size_in_data_space]  # (b, 1, overlap_size_in_data_space)
+                     overlap = (overlap_x_from_prev_x + overlap_x_from_new_x) / 2  # take the mean; maybe there's a better strategy but it seems to work fine.
+                     waveform_concat = torch.cat((prev_x, overlap, rest_of_new_x), dim=-1)  # (b, 1, ..)
+                 else:
+                     prev_x = waveform_concat
+                     rest_of_new_x = waveform
+                     waveform_concat = torch.cat((prev_x, rest_of_new_x), dim=-1)  # (b, 1, ..)
+         return waveform_concat  # (b, 1, length)
+
+     def code_to_zq(self, s: torch.IntTensor):
+         """
+         s: (b, n_rvq, length)
+         """
+         zq, _, _ = self.dac.quantizer.from_codes(s.to(self.device))  # zq: (b, d, length)
+         zq = zq.cpu()
+         return zq
+
+     def save_tensor(self, tensor: torch.Tensor, fname: str) -> None:
+         torch.save(tensor.cpu(), fname)
+
+     def load_tensor(self, fname: str):
+         return torch.load(fname)
+
+     def waveform_to_audiofile(self, waveform: torch.FloatTensor, fname: str) -> None:
+         AudioSignal(waveform, sample_rate=self.model_type_by_sampling_freq_int).write(fname)
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ torch==2.4.0
+ torchaudio==2.4.0
+ transformers==4.44.0
+ descript-audio-codec==1.0.0
save_model.ipynb ADDED
@@ -0,0 +1,206 @@
+ {
+  "cells": [
+   {
+    "cell_type": "code",
+    "execution_count": 1,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "C:\\Users\\dslee\\AppData\\Roaming\\Python\\Python38\\site-packages\\tqdm\\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
+       " from .autonotebook import tqdm as notebook_tqdm\n"
+      ]
+     }
+    ],
+    "source": [
+     "from model import DACConfig, DAC"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 2,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "# Registering a model with custom code to the auto classes\n",
+     "DACConfig.register_for_auto_class()\n",
+     "DAC.register_for_auto_class()"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 3,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "C:\\Users\\dslee\\AppData\\Roaming\\Python\\Python38\\site-packages\\audiotools\\ml\\layers\\base.py:172: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
+       " model_dict = torch.load(location, \"cpu\")\n",
+       "c:\\Users\\dslee\\anaconda3\\envs\\sound_effect_variation_generation\\lib\\site-packages\\torch\\nn\\utils\\weight_norm.py:134: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.\n",
+       " WeightNorm.apply(module, name, dim)\n"
+      ]
+     }
+    ],
+    "source": [
+     "# create instances\n",
+     "config = DACConfig(model_type_by_sampling_freq='24khz')\n",
+     "model = DAC(config)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 5,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "c:\\Users\\dslee\\anaconda3\\envs\\sound_effect_variation_generation\\lib\\site-packages\\huggingface_hub\\file_download.py:159: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\\Users\\dslee\\.cache\\huggingface\\hub\\models--hance-ai--descript-audio-codec-24khz. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.\n",
+       "To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development\n",
+       " warnings.warn(message)\n",
+       "model.safetensors: 100%|██████████| 299M/299M [00:11<00:00, 26.5MB/s]\n"
+      ]
+     },
+     {
+      "data": {
+       "text/plain": [
+        "CommitInfo(commit_url='https://huggingface.co/hance-ai/descript-audio-codec-24khz/commit/cf72b50044750326ebc01f01c3a032adbaf59439', commit_message='Upload DAC', commit_description='', oid='cf72b50044750326ebc01f01c3a032adbaf59439', pr_url=None, pr_revision=None, pr_num=None)"
+       ]
+      },
+      "execution_count": 5,
+      "metadata": {},
+      "output_type": "execute_result"
+     }
+    ],
+    "source": [
+     "# push the model to the Hugging Face Hub\n",
+     "with open('token.txt', 'r') as file:\n",
+     "    token = file.read().strip()\n",
+     "\n",
+     "model.push_to_hub('hance-ai/descript-audio-codec-24khz', token=token) # put your token"
+    ]
+   },
+   {
+    "cell_type": "markdown",
+    "metadata": {},
+    "source": [
+     "We recommend running `git fetch` and `git pull` so that the uploaded model is also synced locally."
+    ]
+   },
+   {
+    "cell_type": "markdown",
+    "metadata": {},
+    "source": [
+     "***"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 1,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "C:\\Users\\dslee\\AppData\\Roaming\\Python\\Python38\\site-packages\\tqdm\\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
+       " from .autonotebook import tqdm as notebook_tqdm\n",
+       "c:\\Users\\dslee\\anaconda3\\envs\\sound_effect_variation_generation\\lib\\site-packages\\huggingface_hub\\file_download.py:159: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\\Users\\dslee\\.cache\\huggingface\\hub\\models--hance-ai--descript-audio-codec-24khz. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.\n",
+       "To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development\n",
+       " warnings.warn(message)\n",
+       "A new version of the following files was downloaded from https://huggingface.co/hance-ai/descript-audio-codec-24khz:\n",
+       "- model.py\n",
+       ". Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.\n",
+       "C:\\Users\\dslee\\AppData\\Roaming\\Python\\Python38\\site-packages\\audiotools\\ml\\layers\\base.py:172: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
+       " model_dict = torch.load(location, \"cpu\")\n",
+       "c:\\Users\\dslee\\anaconda3\\envs\\sound_effect_variation_generation\\lib\\site-packages\\torch\\nn\\utils\\weight_norm.py:134: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.\n",
+       " WeightNorm.apply(module, name, dim)\n"
+      ]
+     }
+    ],
+    "source": [
+     "# load the uploaded model\n",
+     "from transformers import AutoModel\n",
+     "model = AutoModel.from_pretrained('hance-ai/descript-audio-codec-24khz', trust_remote_code=True)\n",
+     "model.to('cpu');"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 2,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "zq.shape: torch.Size([1, 1024, 750])\n",
+       "s.shape: torch.Size([1, 32, 750])\n"
+      ]
+     }
+    ],
+    "source": [
+     "# encoding\n",
+     "import os\n",
+     "from pathlib import Path\n",
+     "\n",
+     "fname = str(Path(os.getcwd()).joinpath('.sample_sound', 'jazz_swing.wav'))\n",
+     "zq, s = model.encode(fname)\n",
+     "print('zq.shape:', zq.shape)\n",
+     "print('s.shape:', s.shape)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 3,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "waveform.shape: torch.Size([1, 1, 239904])\n"
+      ]
+     }
+    ],
+    "source": [
+     "# decoding (from zq -- quantized latent vectors)\n",
+     "waveform = model.decode(zq=zq)\n",
+     "print('waveform.shape:', waveform.shape)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": []
+   }
+  ],
+  "metadata": {
+   "kernelspec": {
+    "display_name": "sound_effect_variation_generation",
+    "language": "python",
+    "name": "python3"
+   },
+   "language_info": {
+    "codemirror_mode": {
+     "name": "ipython",
+     "version": 3
+    },
+    "file_extension": ".py",
+    "mimetype": "text/x-python",
+    "name": "python",
+    "nbconvert_exporter": "python",
+    "pygments_lexer": "ipython3",
+    "version": "3.8.19"
+   }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 2
+ }
test_DAC.ipynb ADDED
@@ -0,0 +1,183 @@
+ {
+  "cells": [
+   {
+    "cell_type": "markdown",
+    "metadata": {},
+    "source": [
+     "# DAC Audio Tokenizer"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 1,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "C:\\Users\\dslee\\AppData\\Roaming\\Python\\Python38\\site-packages\\tqdm\\auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
+       " from .autonotebook import tqdm as notebook_tqdm\n"
+      ]
+     }
+    ],
+    "source": [
+     "import os\n",
+     "from pathlib import Path\n",
+     "\n",
+     "from model import DAC, DACConfig"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 2,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "# settings\n",
+     "fname = str(Path(os.getcwd()).joinpath('.sample_sound', 'jazz_swing.wav'))\n",
+     "device = 'cpu'\n",
+     "model_type_by_sampling_freq = '24khz'"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 3,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "C:\\Users\\dslee\\AppData\\Roaming\\Python\\Python38\\site-packages\\audiotools\\ml\\layers\\base.py:172: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
+       " model_dict = torch.load(location, \"cpu\")\n",
+       "c:\\Users\\dslee\\anaconda3\\envs\\sound_effect_variation_generation\\lib\\site-packages\\torch\\nn\\utils\\weight_norm.py:134: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.\n",
+       " WeightNorm.apply(module, name, dim)\n"
+      ]
+     }
+    ],
+    "source": [
+     "# load the model\n",
+     "config = DACConfig(model_type_by_sampling_freq=model_type_by_sampling_freq)\n",
+     "model = DAC(config).to(device)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 4,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "zq.shape: torch.Size([1, 1024, 750])\n",
+       "s.shape: torch.Size([1, 32, 750])\n"
+      ]
+     }
+    ],
+    "source": [
+     "# encoding\n",
+     "zq, s = model.encode(fname)\n",
+     "print('zq.shape:', zq.shape)\n",
+     "print('s.shape:', s.shape)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 5,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "waveform.shape: torch.Size([1, 1, 239904])\n"
+      ]
+     }
+    ],
+    "source": [
+     "# decoding (from zq -- quantized latent vectors)\n",
+     "waveform = model.decode(zq=zq)\n",
+     "print('waveform.shape:', waveform.shape)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 6,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stdout",
+      "output_type": "stream",
+      "text": [
+       "waveform.shape: torch.Size([1, 1, 239904])\n"
+      ]
+     }
+    ],
+    "source": [
+     "# decoding (from s -- tokens)\n",
+     "waveform = model.decode(s=s)\n",
+     "print('waveform.shape:', waveform.shape)"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 7,
+    "metadata": {},
+    "outputs": [],
+    "source": [
+     "# save waveform into an audio file\n",
+     "model.waveform_to_audiofile(waveform, 'out.wav')"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": 8,
+    "metadata": {},
+    "outputs": [
+     {
+      "name": "stderr",
+      "output_type": "stream",
+      "text": [
+       "d:\\projects\\descript-audio-codec-24khz\\model.py:209: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
+       " return torch.load(fname)\n"
+      ]
+     }
+    ],
+    "source": [
+     "# save and load tokens\n",
+     "model.save_tensor(s, 'tokens.pt')\n",
+     "loaded_s = model.load_tensor('tokens.pt') # s == loaded_s"
+    ]
+   },
+   {
+    "cell_type": "code",
+    "execution_count": null,
+    "metadata": {},
+    "outputs": [],
+    "source": []
+   }
+  ],
+  "metadata": {
+   "kernelspec": {
+    "display_name": "sound_effect_variation_generation",
+    "language": "python",
+    "name": "python3"
+   },
+   "language_info": {
+    "codemirror_mode": {
+     "name": "ipython",
+     "version": 3
+    },
+    "file_extension": ".py",
+    "mimetype": "text/x-python",
+    "name": "python",
+    "nbconvert_exporter": "python",
+    "pygments_lexer": "ipython3",
+    "version": "3.8.19"
+   }
+  },
+  "nbformat": 4,
+  "nbformat_minor": 2
+ }
utils.py ADDED
@@ -0,0 +1,6 @@
+
+
+ def freeze(model):
+     for param in model.parameters():
+         param.requires_grad = False
+