Towards Hierarchical Spoken Language Disfluency Modeling
Jiachen Lian, Gopala Anumanchipalli
Main: Multimodality Oral Paper
Session 3: Multimodality (Oral)
Conference Room: Marie Louise 2
Conference Time: March 18, 14:00-15:30 (CET) (Europe/Malta)
Abstract:
Speech dysfluency modeling is a bottleneck for both speech therapy and language learning, yet no AI solution systematically tackles this problem. We first propose a definition of dysfluent speech and of dysfluent speech modeling. We then present the Hierarchical Unconstrained Dysfluency Modeling (H-UDM) approach, which addresses both dysfluency transcription and detection and eliminates the need for extensive manual annotation. Furthermore, we introduce a simulated dysfluent dataset called VCTK++ to enhance the phonetic transcription capabilities of H-UDM. Our experimental results demonstrate the effectiveness and robustness of the proposed methods on both transcription and detection tasks.