To date, many speech synthesis systems have adopted the vocoder approach, a method for synthesizing speech waveforms that is widely used in cellular-phone networks and other applications. However, the quality of the speech waveforms synthesized by these methods has remained inferior to that of the human voice. In 2016, an influential overseas technology company proposed WaveNet--a speech-synthesis method based on deep-learning algorithms--and demonstrated the ability to synthesize high-quality speech waveforms resembling the human voice. However, one drawback of WaveNet is the extremely complex structure of its neural networks, which demand large quantities of voice data for machine learning and require many rounds of laborious trial-and-error parameter tuning before accurate predictions can be obtained.
Overview and achievements of the research
One of the most well-known vocoders is the source-filter vocoder, which was developed in the 1960s and remains in widespread use today. The NII research team infused the conventional source-filter vocoder method with modern neural-network algorithms to develop a new technique for synthesizing high-quality speech waveforms resembling the human voice. Among the advantages of this neural source-filter (NSF) method is the simple structure of its neural networks, which require only about 1 hour of voice data for machine learning and can obtain correct predictive results without extensive parameter tuning. Moreover, large-scale listening tests have demonstrated that speech waveforms produced by NSF techniques are comparable in quality to those generated by WaveNet.
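To make the source-filter idea concrete, the following is a minimal NumPy sketch of the classic 1960s-style approach described above: a periodic impulse train models the glottal "source," and a resonant filter models the vocal tract. This is an illustrative toy, not the NSF implementation; in NSF, the hand-designed filter is replaced by a neural network. All function names and parameter values here are assumptions chosen for demonstration.

```python
import numpy as np

def impulse_train(f0_hz, duration_s, sr):
    """Source: a periodic impulse excitation at fundamental frequency f0
    (a crude stand-in for glottal pulses)."""
    n = int(duration_s * sr)
    x = np.zeros(n)
    period = int(sr / f0_hz)  # samples between pulses
    x[::period] = 1.0
    return x

def resonator(x, center_hz, bandwidth_hz, sr):
    """Filter: a two-pole resonator that shapes the flat source spectrum,
    imitating a single vocal-tract formant."""
    r = np.exp(-np.pi * bandwidth_hz / sr)      # pole radius (< 1, stable)
    theta = 2.0 * np.pi * center_hz / sr        # pole angle = formant frequency
    a1, a2 = 2.0 * r * np.cos(theta), -r * r    # recursive coefficients
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n]
        if n >= 1:
            y[n] += a1 * y[n - 1]
        if n >= 2:
            y[n] += a2 * y[n - 2]
    return y

sr = 16000
source = impulse_train(120.0, 0.05, sr)       # 50 ms of a 120 Hz pulse train
speech = resonator(source, 700.0, 130.0, sr)  # one formant near 700 Hz
```

The separation shown here is what NSF preserves: the explicit excitation source remains, while the neural network learns the filtering stage from data instead of relying on hand-tuned resonator coefficients.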
Because the theoretical basis of NSF differs from the patented technologies used by influential overseas ICT companies, the adoption of NSF techniques is likely to spur new technological advances in speech synthesis. For this reason, the source code implementing the NSF method has been made available to the public at no cost, allowing it to be widely used.
Source code, trained NSF models, and NSF-synthesized speech samples (in both Japanese and English) are available at the following sites:
Trained models (may be executed to generate English-language voices):
Voice samples (Japanese or English):
Associate Professor Junichi Yamagishi makes the following comment:
"We hope that our NSF method will create new business opportunities for Japanese AI firms that use voice-based interfaces. For future work, we will work to make the method available for use as a real-time voice-synthesis engine in a wide variety of systems. We are also planning to add speaker adaption and other related features to the NSF methods."
Please visit the following page for comparisons of actual human voices to voice waveforms produced by source-filter vocoder methods, by WaveNet, and by NSF.
*The video is narrated in Japanese only.
About this research project
The research described here was supported by the Japan Science and Technology Agency under CREST JPMJCR18A6 and by the Japan Society for the Promotion of Science under Grants-in-Aid for Scientific Research "KAKENHI" 16H06302, 16K16096, 17H04687, 18H04120, 18H04112, and 18KT0051.
Paper title and authors
Title: Neural source-filter-based waveform model for statistical parametric speech synthesis
Authors: Xin Wang, Shinji Takaki, Junichi Yamagishi
Publication: International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2019 (Accepted: February 1, 2019)
Date announced: October 30, 2018 (arXiv: https://arxiv.org/abs/1810.11946)