---
language: "en"
thumbnail:
tags:
- ASR
- CTC
- Attention
- pytorch
license: "apache-2.0"
datasets:
- librispeech
metrics:
- wer
- cer
---

# CRDNN with CTC/Attention and RNNLM trained on LibriSpeech

This repository provides all the necessary tools to perform automatic speech
recognition with an end-to-end system pretrained on LibriSpeech (EN) within
SpeechBrain. For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The performance of the model is reported below:

| Release | Hyperparams file | Test WER (%) | Model link | GPUs |
|:-------------:|:---------------------------:| -----:| -----:| --------:|
| 20-05-22 | BPE_1000.yaml | 3.08 | Not Available | 1xV100 32GB |
| 20-05-22 | BPE_5000.yaml | 2.89 | Not Available | 1xV100 32GB |

## Pipeline description

This ASR system is composed of three blocks that work together:
1. A tokenizer (unigram) that transforms words into subword units, trained on
the training transcriptions of LibriSpeech.
2. A neural language model (RNNLM) trained on the full 10M-word dataset.
3. An acoustic model (CRDNN + CTC/attention). The CRDNN architecture is made of
N blocks of convolutional neural networks with normalization and pooling over the
frequency dimension. A bidirectional LSTM then feeds a final DNN that produces
the acoustic representation given to the CTC and attention decoders (see the
sketch after this list).
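
For illustration only, here is a minimal PyTorch sketch of that layout. The number of blocks, channel counts, and layer sizes below are hypothetical and do not correspond to the released hyperparameters:

```python
import torch
import torch.nn as nn

class CRDNNSketch(nn.Module):
    """Illustrative CRDNN layout: conv blocks -> BiLSTM -> DNN."""
    def __init__(self, n_mels=40, hidden=256):
        super().__init__()
        # Conv blocks with normalization and pooling over the frequency axis only.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),  # pool frequency, keep time resolution
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),
        )
        # Bidirectional LSTM over time, then a final DNN.
        self.rnn = nn.LSTM(32 * (n_mels // 4), hidden,
                           batch_first=True, bidirectional=True)
        self.dnn = nn.Linear(2 * hidden, hidden)

    def forward(self, feats):  # feats: [batch, time, n_mels]
        x = self.conv(feats.unsqueeze(1))            # [B, C, T, F']
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.rnn(x)
        return self.dnn(x)                           # acoustic representation

print(CRDNNSketch()(torch.randn(2, 100, 40)).shape)  # torch.Size([2, 100, 256])
```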

## Intended uses & limitations

This model has been primarily developed to be run within SpeechBrain as a pretrained ASR model
for the English language. Thanks to the flexibility of SpeechBrain, any of the three blocks
detailed above can be extracted and connected to your custom pipeline as long as SpeechBrain is
installed.

## Install SpeechBrain

First of all, please install SpeechBrain with the following command:

```
pip install \\we hide ! SpeechBrain is still private :p
```

We also encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).

### Transcribing your own audio files

```python
import torch
import torchaudio
from speechbrain.lobes.pretrained.librispeech.asr_crdnn_ctc_att_rnnlm.acoustic import ASR

asr_model = ASR()

# Make sure your audio file is sampled at 16 kHz.
audio_file = 'path_to_your_audio_file'
wav, fs = torchaudio.load(audio_file)

# Relative length of each waveform in the batch (1.0 = full length).
wav_lens = torch.tensor([1]).float()

# Transcribe!
words, tokens = asr_model.transcribe(wav, wav_lens)
print(words)
```
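
If your file is stored at a different sampling rate, you can resample it before calling `transcribe`; a minimal sketch using `torchaudio.transforms.Resample` (the 16 kHz target simply matches the requirement above):

```python
import torchaudio

wav, fs = torchaudio.load('path_to_your_audio_file')
if fs != 16000:
    # Resample to the 16 kHz rate expected by the pretrained model.
    wav = torchaudio.transforms.Resample(orig_freq=fs, new_freq=16000)(wav)
```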

### Obtaining encoded features

The SpeechBrain `ASR()` class provides an easy way to encode the speech signal
without running the decoding phase, so the output of the CRDNN encoder can be
obtained directly.

```python
import torch
import torchaudio
from speechbrain.lobes.pretrained.librispeech.asr_crdnn_ctc_att_rnnlm.acoustic import ASR

asr_model = ASR()

# Make sure your audio file is sampled at 16 kHz.
audio_file = 'path_to_your_audio_file'
wav, fs = torchaudio.load(audio_file)

# Relative length of each waveform in the batch (1.0 = full length).
wav_lens = torch.tensor([1]).float()

# Encode!
features = asr_model.encode(wav, wav_lens)
print(features)
```
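
The returned features can then feed a custom downstream module. A purely illustrative sketch, assuming `features` comes back as a `[batch, time, channels]` tensor:

```python
# Hypothetical downstream use: average-pool the frame-level features
# into a single utterance-level embedding.
utt_embedding = features.mean(dim=1)
print(utt_embedding.shape)
```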

### Playing with the language model only

Thanks to SpeechBrain lobes, the language model can be instantiated on its own
for further processing in your custom pipeline:

```python
import torch
from speechbrain.lobes.pretrained.librispeech.asr_crdnn_ctc_att_rnnlm.lm import LM

lm = LM()

text = "THE CAT IS ON"

# Next-word prediction
encoded_text = lm.tokenizer.encode_as_ids(text)
encoded_text = torch.tensor(encoded_text).unsqueeze(0)  # [1, T] batch of token ids
prob_out, _ = lm(encoded_text.to(lm.device))
index = int(torch.argmax(prob_out[0, -1, :]))
print(lm.tokenizer.decode([index]))

# Text generation (greedy decoding)
encoded_text = torch.tensor([0, 2])  # bos token + the
encoded_text = encoded_text.unsqueeze(0).to(lm.device)
for i in range(19):
    prob_out, _ = lm(encoded_text)
    index = torch.argmax(prob_out[0, -1, :]).unsqueeze(0)
    encoded_text = torch.cat([encoded_text, index.unsqueeze(0)], dim=1)
encoded_text = encoded_text[0, 1:].tolist()  # drop the bos token
print(lm.tokenizer.decode(encoded_text))
```
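
The output distribution can also be queried directly instead of taking the argmax. A small sketch, assuming the last dimension of `prob_out` indexes the vocabulary by token id, as the argmax call above implies:

```python
import torch

# Score a candidate next word given "THE CAT IS ON" (the candidate is illustrative).
encoded = torch.tensor(lm.tokenizer.encode_as_ids("THE CAT IS ON")).unsqueeze(0)
prob_out, _ = lm(encoded.to(lm.device))
cand_id = lm.tokenizer.encode_as_ids("THE")[0]
print(float(prob_out[0, -1, cand_id]))  # model score for "THE" as the next token
```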

### Playing with the tokenizer only

In the same manner as for the language model, one can instantiate the tokenizer
alone with the corresponding lobes in SpeechBrain.

```python
from speechbrain.lobes.pretrained.librispeech.asr_crdnn_ctc_att_rnnlm.tokenizer import tokenizer

# HuggingFace paths to download the pretrained model
token_file = 'tokenizer/1000_unigram.model'
model_name = 'sb/asr-crdnn-librispeech'
save_dir = 'model_checkpoints'

text = "THE CAT IS ON THE TABLE"

# Use a distinct variable name to avoid shadowing the imported `tokenizer`.
tok = tokenizer(token_file, model_name, save_dir)

# Tokenize!
print(tok.spm.encode(text))
print(tok.spm.encode(text, out_type='str'))
```
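
As a quick sanity check, the ids can be decoded back to text; a minimal sketch, assuming `tok.spm` is a standard SentencePiece processor:

```python
# Round-trip: encode to ids, then decode back to the original text.
ids = tok.spm.encode(text)
print(tok.spm.decode(ids))  # expected: "THE CAT IS ON THE TABLE"
```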

## Referencing SpeechBrain

```bibtex
@misc{SB2021,
    author = {Ravanelli, Mirco and Parcollet, Titouan and Rouhe, Aku and Plantinga, Peter and Rastorgueva, Elena and Lugosch, Loren and Dawalatabad, Nauman and Ju-Chieh, Chou and Heba, Abdel and Grondin, Francois and Aris, William and Liao, Chien-Feng and Cornell, Samuele and Yeh, Sung-Lin and Na, Hwidong and Gao, Yan and Fu, Szu-Wei and Subakan, Cem and De Mori, Renato and Bengio, Yoshua},
    title = {SpeechBrain},
    year = {2021},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/speechbrain/speechbrain}},
}
```