---
language: ja
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
datasets:
- common_voice
metrics:
- wer
- cer
base_model: facebook/wav2vec2-large-xlsr-53
model-index:
- name: Japanese XLSR Wav2Vec2 Large 53
  results:
  - task:
      type: automatic-speech-recognition
      name: Speech Recognition
    dataset:
      name: Common Voice ja
      type: common_voice
      args: ja
    metrics:
    - type: wer
      value: {}
      name: Test WER
---

# Wav2Vec2-Large-XLSR-53-Japanese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Japanese test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model = Wav2Vec2ForCTC.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'  # adapt this list to the special characters removed from the data
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
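# Note (added, not part of the original script): Japanese is written without
# spaces, so the word-level WER computed below treats each unsegmented
# sentence as a single token; this is why the card metadata also lists CER.
# Assuming your `datasets` version ships the "cer" metric, it can be loaded
# and used exactly like `wer`:
cer = load_metric("cer")
# e.g., once `result` has been computed below:
# print("CER: {:.2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))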
# Run inference over the test set in batches and collect the predicted strings
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: XX.XX %

## Training

The Common Voice `train` and `validation` splits were used for training.

The script used for training can be found [here](...)
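Since the training script is not linked above, here is, for orientation only, a minimal sketch of the standard Common Voice XLSR fine-tuning setup with 🤗 Transformers. Everything in it (hyperparameters, output path, the reuse of this checkpoint's processor) is an illustrative assumption, not a record of how this model was actually trained.

```python
# Illustrative sketch only: these hyperparameters are common XLSR fine-tuning
# defaults, NOT the values used to train this checkpoint.
from transformers import Trainer, TrainingArguments, Wav2Vec2ForCTC, Wav2Vec2Processor

# In a real run the tokenizer vocabulary is built from the training text;
# here we simply reuse the processor shipped with this checkpoint.
processor = Wav2Vec2Processor.from_pretrained("qqhann/wav2vec2-large-xlsr-japanese-0325-1200")

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    attention_dropout=0.1,
    hidden_dropout=0.1,
    mask_time_prob=0.05,
    layerdrop=0.1,
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_extractor()  # keep the convolutional feature encoder frozen

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-japanese",  # assumed path
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    evaluation_strategy="steps",
    num_train_epochs=30,
    fp16=True,
    learning_rate=3e-4,
    warmup_steps=500,
    save_total_limit=2,
)

# The data pipeline (loading Common Voice, resampling to 16 kHz, and a CTC
# padding collator) is omitted here, so the Trainer call is left commented:
# trainer = Trainer(
#     model=model,
#     args=training_args,
#     data_collator=...,   # a CTC padding collator over the processed audio
#     train_dataset=...,   # processed Common Voice `train` + `validation`
#     eval_dataset=...,    # processed Common Voice `test`
#     tokenizer=processor.feature_extractor,
# )
# trainer.train()
```

Freezing the convolutional feature encoder and using a learning rate around 3e-4 with warmup follow the recommendations of the XLSR fine-tuning week guide.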