Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis
Zongxia Li, Andrew Mao, Daniel Kofi Stephens, Pranav Goel, Emily Walpole, Alden Dima, Juan Francisco Fung, Jordan Lee Boyd-Graber
Main: Interpretability and Model Analysis in NLP Oral Paper
Session 10: Interpretability and Model Analysis in NLP (Oral)
Conference Room: Carlson
Conference Time: March 20, 11:00-12:30 CET (Europe/Malta)
Abstract:
Topic models are a popular tool for understanding text collections, but their evaluation has been a point of contention. Automated evaluation metrics such as coherence are often used; however, their validity has been questioned for neural topic models (NTMs), and they can overlook a model's benefits in real-world applications. To this end, we conduct the first evaluation of neural, supervised, and classical topic models in an interactive task-based setting. We combine topic models with a classifier and test their ability to help humans conduct content analysis and document annotation. Across simulated experiments, real user studies, and expert pilot studies, the Contextual Neural Topic Model performs best on cluster evaluation metrics and human evaluations; however, LDA is competitive with two other NTMs in our simulated experiments and user studies, contrary to what coherence scores suggest. We show that current automated metrics do not provide a complete picture of topic modeling capabilities, but the right choice of NTM can outperform classical models on practical tasks.