Parallel wavenet: Fast high-fidelity speech synthesis

A. Oord, Y. Li, I. Babuschkin… - International Conference on Machine Learning, 2018 - proceedings.mlr.press
Abstract
The recently developed WaveNet architecture is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today's massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples more than 20 times faster than real time, a 1000× speedup relative to the original WaveNet, and capable of serving multiple English and Japanese voices in a production setting.
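The core idea of Probability Density Distillation is that the parallel student network draws samples cheaply, and the trained autoregressive teacher scores those samples; the student is trained to minimize the KL divergence between its own distribution and the teacher's. A minimal sketch of this objective, with both networks replaced by toy 1-D Gaussians (the `teacher_mu`/`student_mu` parameterization and the Monte Carlo estimator here are illustrative assumptions, not the paper's actual mixture-of-logistics setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_logpdf(x, mu, sigma):
    # Log-density of N(mu, sigma^2) at x.
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def distillation_loss(student_mu, student_sigma, teacher_mu, teacher_sigma, n=10000):
    """Monte Carlo estimate of KL(student || teacher).

    Samples come from the student (cheap and parallel in the real system);
    the fixed teacher only has to *score* them, which is also parallel.
    """
    x = student_mu + student_sigma * rng.standard_normal(n)
    return np.mean(gaussian_logpdf(x, student_mu, student_sigma)
                   - gaussian_logpdf(x, teacher_mu, teacher_sigma))

# A student that matches the teacher incurs zero loss; a mismatched one does not.
print(distillation_loss(0.0, 1.0, 0.0, 1.0))       # 0 (distributions identical)
print(distillation_loss(1.0, 2.0, 0.0, 1.0) > 0)   # True
```

In the actual system the gradient of this loss flows back through the student's samples, which is what makes sampling-based (rather than likelihood-based) training of the feed-forward network possible.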