VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop

Yaniv Taigman, Lior Wolf, Adam Polyak, Eliya Nachmani
Facebook AI Research (FAIR)

We present a new neural text-to-speech (TTS) method that can transform text into speech in voices that are sampled in the wild. Unlike other systems, our approach handles unconstrained voice samples without requiring aligned phonemes or linguistic features. The network architecture is simpler than those in the existing literature and is based on a novel shifting-buffer working memory. The same buffer is used for estimating the attention, computing the output audio, and updating the buffer itself. The input sentence is encoded using a context-free lookup table that contains one entry per character or phoneme. Speakers are similarly represented by a short vector that can also be fitted to new identities, even from only a few samples. Variability in the generated speech is achieved by priming the buffer prior to generating the audio. Experimental results on several datasets demonstrate convincing capabilities, making TTS accessible to a wider range of applications. To promote reproducibility, we release our source code and models.

Code: Link

Paper: Link
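The abstract above describes the core mechanism: a fixed-size shifting buffer acts as a working memory and is used jointly for attention, for producing the output audio frame, and for its own update, with speakers represented by short embedding vectors. The PyTorch sketch below illustrates one decoding step of such a buffer under simplifying assumptions: the layer sizes, the plain content-based softmax attention (a stand-in for the paper's monotonic attention), and the single linear readouts are illustrative choices, not the authors' exact networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VoiceLoopStep(nn.Module):
    """One decoding step of a VoiceLoop-style shifting buffer (illustrative sketch)."""

    def __init__(self, enc_dim=128, buf_size=10, buf_dim=128, spk_dim=32, out_dim=63):
        super().__init__()
        self.buf_size, self.buf_dim = buf_size, buf_dim
        flat = buf_size * buf_dim
        # attention scores each encoded symbol against the flattened buffer
        self.attn = nn.Linear(flat + enc_dim, 1)
        # new buffer vector computed from the buffer, the attended context and the speaker
        self.update = nn.Linear(flat + enc_dim + spk_dim, buf_dim)
        # output frame (e.g. vocoder features) read from the same buffer and the speaker
        self.readout = nn.Linear(flat + spk_dim, out_dim)

    def forward(self, buffer, encodings, speaker):
        # buffer: (B, buf_size, buf_dim); encodings: (B, T, enc_dim); speaker: (B, spk_dim)
        B, T, _ = encodings.shape
        flat = buffer.flatten(1)                                        # (B, buf_size*buf_dim)
        # 1. attention over the input symbols, conditioned on the current buffer
        scores = self.attn(torch.cat(
            [flat.unsqueeze(1).expand(B, T, -1), encodings], dim=-1))  # (B, T, 1)
        alpha = F.softmax(scores, dim=1)
        context = (alpha * encodings).sum(dim=1)                        # (B, enc_dim)
        # 2. shift the buffer: drop the oldest vector, push the new one in front
        new_vec = torch.tanh(self.update(torch.cat([flat, context, speaker], dim=-1)))
        buffer = torch.cat([new_vec.unsqueeze(1), buffer[:, :-1]], dim=1)
        # 3. read the next output frame from the updated buffer
        out = self.readout(torch.cat([buffer.flatten(1), speaker], dim=-1))
        return out, buffer


if __name__ == "__main__":
    step = VoiceLoopStep()
    buf = torch.zeros(2, 10, 128)    # initial buffer state; priming = starting from a non-zero state
    enc = torch.randn(2, 20, 128)    # 20 encoded characters/phonemes from the lookup table
    spk = torch.randn(2, 32)         # speaker embedding vector
    frame, buf = step(buf, enc, spk)
    print(frame.shape, buf.shape)    # torch.Size([2, 63]) torch.Size([2, 10, 128])
```

In this sketch, the "priming" mentioned in the abstract corresponds to initializing `buf` with a non-zero state before decoding, so the same text and speaker can yield different renditions; `VoiceLoopStep` and its parameters are hypothetical names used only for illustration.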

Network #1: Single Speaker


Network #3: Multiple Speakers from VCTK

Input: “Some have accepted this as a miracle without any physical explanation”