Demos from "Sequence-to-Sequence Voice Reconstruction for Silent Speech in a Tonal Language"

Abstract: Silent Speech Decoding (SSD) based on surface electromyography (sEMG) has become a prevalent Brain-Computer Interface (BCI) task in recent years. Many works have been devoted to decoding sEMG. However, restoring silent speech in tonal languages such as Mandarin Chinese remains difficult. In this paper, we propose an optimized sequence-to-sequence (Seq2Seq) approach to synthesize voice from sEMG-based silent speech. We extract duration information to regulate the sEMG-based silent speech using the audio length. Then, we provide a deep-learning model with an encoder-decoder structure and a state-of-the-art vocoder to generate the audio waveform. Experiments with six Mandarin Chinese speakers demonstrate that the proposed model can successfully decode silent speech in Mandarin Chinese, achieving an average character error rate (CER) of 6.41% under human evaluation.
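The duration-regulation step described above (stretching the sEMG feature sequence to match the target audio length before the encoder-decoder) can be illustrated with a minimal sketch. This is not the paper's actual implementation; the function name, the use of linear interpolation, and the frame counts are all illustrative assumptions.

```python
import numpy as np

def regulate_length(semg_frames: np.ndarray, n_audio_frames: int) -> np.ndarray:
    """Resample sEMG feature frames along the time axis so their count
    matches the audio frame count (hypothetical stand-in for the paper's
    duration-based regulation; uses simple linear interpolation)."""
    n_semg, n_feat = semg_frames.shape
    src = np.linspace(0.0, 1.0, n_semg)        # original time grid
    dst = np.linspace(0.0, 1.0, n_audio_frames)  # target time grid
    # Interpolate each feature channel independently onto the target grid.
    return np.stack(
        [np.interp(dst, src, semg_frames[:, f]) for f in range(n_feat)],
        axis=1,
    )

# Example: 120 sEMG frames with 8 channels, stretched to 200 audio frames.
semg = np.random.randn(120, 8)
aligned = regulate_length(semg, 200)
print(aligned.shape)  # (200, 8)
```

The aligned features can then be fed to an encoder-decoder model whose output spectrogram is rendered to a waveform by a neural vocoder, as in the pipeline the abstract describes.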

Audio samples for each speaker (Spk-1 through Spk-6): Ground-Truth Voices, SSRNet results, and Baseline results. [Audio players not shown in this text version.]