Tags: Automatic Speech Recognition · Transformers · audio · asr · hf-asr-leaderboard · Inference Endpoints
TheStigh committed commit 158da39 (verified) · 1 parent: ba44ba4

Update README.md

Files changed (1): README.md (+1 −2)
  # NB-Whisper Large - converted for Faster-Whisper using cTranslate2
### This model has been converted for use with faster-whisper via cTranslate2. The original description is below, with the original URLs.
  Introducing the **_Norwegian NB-Whisper Small model_**, proudly developed by the National Library of Norway. NB-Whisper is a cutting-edge series of models designed for automatic speech recognition (ASR) and speech translation. These models are based on the work of [OpenAI's Whisper](https://arxiv.org/abs/2212.04356). Each model in the series has been trained for 250,000 steps, utilizing a diverse dataset of 8 million samples. These samples consist of aligned audio clips, each 30 seconds long, culminating in a staggering 66,000 hours of speech. For an in-depth understanding of our training methodology and dataset composition, keep an eye out for our upcoming article.
 
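Because the weights are in cTranslate2 format, they can be loaded directly with the faster-whisper library. Below is a minimal, hypothetical sketch of such usage; the model path, audio filename, and language code are assumptions for illustration, not details taken from this card:

```python
def format_timestamp(seconds: float) -> str:
    """Pure helper: render a segment offset in seconds as HH:MM:SS.mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d}.{ms:03d}"


if __name__ == "__main__":
    # faster-whisper (pip install faster-whisper) loads cTranslate2 checkpoints
    # directly; replace the placeholder below with this model's repo id or a
    # local path to the converted weights.
    from faster_whisper import WhisperModel

    model = WhisperModel(
        "path/to/this-converted-model",  # placeholder, not the actual repo id
        device="cpu",
        compute_type="int8",
    )
    # "no" is the ISO 639-1 code for Norwegian; audio.mp3 is a placeholder file.
    segments, info = model.transcribe("audio.mp3", language="no")
    for seg in segments:
        print(f"[{format_timestamp(seg.start)} -> {format_timestamp(seg.end)}] {seg.text}")
```

Running with `compute_type="int8"` keeps CPU memory use low; on a GPU, `device="cuda"` with `compute_type="float16"` is the usual choice.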