Pre-Training Methods for Question Reranking

Stefano Campese, Ivano Lauriola, Alessandro Moschitti

Main: Question Answering Oral Paper

Session 7: Question Answering (Oral)
Conference Room: Carlson
Conference Time: March 19, 14:00-15:30 (CET) (Europe/Malta)
Abstract: One interesting approach to Question Answering (QA) is to search for semantically similar questions that have been answered before. This task differs from answer retrieval as it focuses on questions rather than only on answers, and therefore requires different model training on different data. In this work, we introduce a novel unsupervised pre-training method specialized for retrieving and ranking questions. It leverages (i) knowledge distillation from a basic question retrieval model, and (ii) a new pre-training task and objective for learning to rank questions by their relevance to the query. Our experiments show that (i) the proposed technique achieves state-of-the-art performance on the QRC and Quora-match datasets, and (ii) combining re-ranking and retrieval models is beneficial.
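
To make the distillation idea in the abstract concrete, here is a minimal conceptual sketch (not the authors' implementation): a basic bi-encoder retrieval model acts as a teacher and produces soft relevance scores for (query, question) pairs, which a reranker could then be pre-trained to reproduce. The model name, example questions, and pairing strategy are illustrative assumptions.

```python
# Sketch of distillation-style soft labeling for question reranking.
# Assumptions: sentence-transformers is installed and "all-MiniLM-L6-v2"
# is used as an off-the-shelf teacher; neither is prescribed by the paper.
from sentence_transformers import SentenceTransformer, util

teacher = SentenceTransformer("all-MiniLM-L6-v2")

query = "How do I reset my account password?"
candidate_questions = [
    "What is the procedure to recover a forgotten password?",
    "How can I change my profile picture?",
    "Is there a way to reset login credentials?",
]

# Encode the query and candidate questions, then score each pair with
# cosine similarity; these soft scores serve as distillation targets.
q_emb = teacher.encode(query, convert_to_tensor=True)
c_emb = teacher.encode(candidate_questions, convert_to_tensor=True)
soft_scores = util.cos_sim(q_emb, c_emb).squeeze(0)

# Rank candidates by teacher score; a student reranker would be trained
# (e.g., with an MSE or ranking loss) to mimic these scores.
ranked = sorted(zip(soft_scores.tolist(), candidate_questions), reverse=True)
for score, question in ranked:
    print(f"{score:.3f}  {question}")
```

In this sketch the teacher's scores stand in for supervision, so no labeled question pairs are required, which is the general appeal of distillation-based pre-training for ranking.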