Diffusion-NAT: Self-Prompting Discrete Diffusion for Non-Autoregressive Text Generation
Kun Zhou, Yifan Li, Xin Zhao, Ji-Rong Wen
Main: Generation Oral Paper
Session 7: Generation (Oral)
Conference Room: Marie Louise 2
Conference Time: March 19, 14:00-15:30 (CET, Europe/Malta)
Abstract:
Recently, continuous diffusion models (CDM) have been introduced into non-autoregressive (NAR) text-to-text generation. However, the discrete nature of text makes it difficult for CDM to generate coherent and fluent texts, and also causes incompatibility between CDM and advanced NLP techniques, especially popular pre-trained language models (PLMs). To address these issues, we propose Diffusion-NAT, which introduces discrete diffusion models (DDM) into NAR text-to-text generation and integrates BART to improve performance. By revising the decoding process of BART and the typical settings of DDM, we unify the inference process of BART and the denoising process of DDM into the same NAR masked-token recovery task. In this way, DDM can rely on BART to perform denoising, benefiting from both the rich pre-learned knowledge of BART and the iterative refinement paradigm of DDM. In addition, we propose an iterative self-prompting strategy to further improve generation quality. Experimental results on seven datasets show that our approach outperforms competitive NAR methods and even surpasses autoregressive methods. Our code and data are released at https://github.com/RUCAIBox/DiffusionNAT.
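To make the core idea concrete, below is a minimal sketch of the kind of unified decoding loop the abstract describes: each reverse-diffusion step is cast as NAR masked-token recovery performed by a BART-style seq2seq model. This is not the authors' implementation (see their repository for that); the function name, mask_id, step count, and the confidence-based re-masking schedule are illustrative assumptions, and the paper's iterative self-prompting strategy is omitted here.

import torch

@torch.no_grad()
def discrete_diffusion_decode(model, src_ids, tgt_len, mask_id, num_steps=10):
    # Hypothetical decoding loop: start from a fully masked target
    # sequence (the all-noise state x_T) and iteratively denoise it.
    batch_size = src_ids.size(0)
    tgt_ids = torch.full((batch_size, tgt_len), mask_id,
                         dtype=torch.long, device=src_ids.device)
    for step in range(num_steps, 0, -1):
        # One denoising step: a BART-like PLM recovers all masked
        # positions in parallel (non-autoregressive decoding).
        logits = model(input_ids=src_ids, decoder_input_ids=tgt_ids).logits
        probs = logits.softmax(dim=-1)
        conf, pred = probs.max(dim=-1)
        tgt_ids = pred
        # Re-mask the least-confident tokens so later steps can refine
        # them; the number re-masked shrinks as the noise level decays.
        num_masked = int(tgt_len * (step - 1) / num_steps)
        if num_masked > 0:
            lowest = conf.topk(num_masked, dim=-1, largest=False).indices
            tgt_ids.scatter_(1, lowest, mask_id)
    return tgt_ids

Under these assumptions, `model` would be a Hugging Face BartForConditionalGeneration-style module, so the denoiser inherits BART's pre-learned knowledge while the outer loop supplies DDM's iterative refinement.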