Alec Radford; Jong Wook Kim; Tao Xu; Greg Brockman; Christine McLeavey; Ilya Sutskever

Abstract
We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results, but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.
Code Repositories
- https://github.com/openai/whisper (official release of the models and inference code)
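As a quick orientation, the following is a minimal sketch of zero-shot use of the released inference code via the `openai-whisper` package (it also requires `ffmpeg` on the system PATH). The checkpoint name `large-v2` matches the benchmark entries below; `audio.mp3` is a placeholder for any local audio file, not a file shipped with the release.

```python
# Minimal zero-shot sketch using the released openai-whisper package.
# Assumes: pip install openai-whisper, ffmpeg available, and a local
# placeholder file "audio.mp3".
import whisper

# Load the large-v2 checkpoint referenced in the benchmark table below.
model = whisper.load_model("large-v2")

# Zero-shot transcription: the model is used as released, with no fine-tuning.
result = model.transcribe("audio.mp3")
print(result["text"])

# The same model performs X->English speech translation via the task option.
translation = model.transcribe("audio.mp3", task="translate")
print(translation["text"])
```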
Benchmarks
| Benchmark | Model | Metric | Value |
|---|---|---|---|
| Speech Recognition on Common Voice (English) | Whisper (Large v2) | Word Error Rate (WER) | 9.4% |
| Speech Recognition on Common Voice (French) | Whisper (Large v2) | Test WER | 13.9% |
| Speech Recognition on Common Voice (German) | Whisper (Large v2) | Test WER | 6.4% |
| Speech Recognition on Common Voice (Italian) | Whisper (Large v2) | Test WER | 7.1% |
| Speech Recognition on Common Voice (Japanese) | Whisper (Large v2) | Test WER | 9.1% |
| Speech Recognition on Common Voice (Russian) | Whisper (Large v2) | Test WER | 7.1% |
| Speech Recognition on Common Voice (Spanish) | Whisper (Large v2) | Test WER | 5.6% |
| Speech-to-Speech Translation on FLEURS (X→En) | WhisperV2 | ASR-BLEU | 23.5 |
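The WER values above are word-level edit-error rates against a reference transcript. As an illustration of the metric only (the strings below are invented examples, not benchmark data), a minimal sketch in Python:

```python
# WER = (substitutions + deletions + insertions) / number of reference words,
# computed with a standard Levenshtein alignment over word tokens.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on mat"))  # ~0.167
```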