TESS: Text-to-Text Self-Conditioned Simplex Diffusion
Rabeeh Karimi Mahabadi, Hamish Ivison, Jaesung Tae, James Henderson, Iz Beltagy, Matthew E. Peters, Arman Cohan
Main: Machine Learning for NLP Oral Paper
Session 2: Machine Learning for NLP (Oral)
Conference Room: Carlson
Conference Time: March 18, 11:00-12:30 (CET) (Europe/Malta)
Abstract:
Diffusion models have emerged as a powerful paradigm for generation, obtaining strong performance in various continuous domains. However, applying continuous diffusion models to natural language remains challenging due to its discrete nature and the need for a large number of diffusion steps to generate text, making diffusion-based generation expensive. In this work, we propose Text-to-text Self-conditioned Simplex Diffusion (TESS), a text diffusion model that is fully non-autoregressive, employs a new form of self-conditioning, and applies the diffusion process on the logit simplex space rather than the learned embedding space. Through extensive experiments on natural language understanding and generation tasks including summarization, text simplification, paraphrase generation, and question generation, we demonstrate that TESS outperforms state-of-the-art non-autoregressive models, requires fewer diffusion steps with minimal drop in performance, and is competitive with pretrained autoregressive sequence-to-sequence models.
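To make the core idea more concrete, below is a minimal, hedged sketch of what "diffusion on the logit simplex space" could look like in practice: each token is lifted to an almost-one-hot point on the vocabulary simplex (large positive logit at the true token, large negative logits elsewhere), and Gaussian noise is interpolated in logit space. The scale K, the noise schedule, the function names, and all shapes are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only (not the TESS codebase). K, the schedule value
# alpha_bar_t, and the helper names are assumptions chosen for clarity.

K = 5.0  # assumed logit magnitude for the almost-one-hot simplex representation


def tokens_to_simplex_logits(token_ids: torch.Tensor, vocab_size: int) -> torch.Tensor:
    """Map discrete token ids to logits of shape (batch, seq_len, vocab_size):
    +K at the true token and -K elsewhere, i.e. a near-vertex point on the
    probability simplex after softmax."""
    logits = torch.full((*token_ids.shape, vocab_size), -K)
    logits.scatter_(-1, token_ids.unsqueeze(-1), K)
    return logits


def noise_logits(logits: torch.Tensor, alpha_bar_t: float) -> torch.Tensor:
    """One forward-diffusion step in logit space: interpolate the clean logits
    toward Gaussian noise using a cumulative schedule value alpha_bar_t in (0, 1)."""
    noise = torch.randn_like(logits)
    return (alpha_bar_t ** 0.5) * logits + ((1 - alpha_bar_t) ** 0.5) * K * noise


# Toy usage: noise a short sequence and read back per-position token distributions.
token_ids = torch.tensor([[2, 7, 7, 1]])
clean = tokens_to_simplex_logits(token_ids, vocab_size=10)
noisy = noise_logits(clean, alpha_bar_t=0.6)
probs = F.softmax(noisy, dim=-1)  # soft mixture over the vocabulary at each position
print(probs.argmax(-1))  # at moderate noise, this mostly recovers the original ids
```

Because the noisy representation stays a distribution over the vocabulary rather than a learned embedding, a denoising model can condition directly on these soft token mixtures; the paper's self-conditioning additionally feeds the model's own previous prediction back in at each step, but that loop is omitted from this sketch.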