Scaling up Discovery of Latent Concepts in Deep NLP Models
Majd Hawasly, Fahim Dalvi, Nadir Durrani
Main: Semantics and Applications Oral Paper
Session 9: Semantics and Applications (Oral)
Conference Room: Marie Louise 2
Conference Time: March 20, 09:00-10:30 (CET) (Europe/Malta)
Abstract:
Despite the revolution caused by deep NLP models, they remain black boxes, necessitating research into their decision-making processes. A recent work by Dalvi et al. (2022) carried out representation analysis through the lens of clustering the latent spaces of pre-trained language models (PLMs), but that approach is limited to small-scale settings due to the high cost of agglomerative hierarchical clustering. This paper studies clustering algorithms in order to scale the discovery of encoded concepts in PLM representations to larger datasets and models. We propose metrics for assessing the quality of discovered latent concepts and use them to compare the studied clustering algorithms. We find that K-Means-based concept discovery significantly improves efficiency while maintaining the quality of the obtained concepts. Furthermore, we demonstrate the practicality of this newfound efficiency by scaling latent concept discovery to LLMs and phrasal concepts.
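To make the setup concrete, the sketch below illustrates the general idea of latent concept discovery by clustering contextual token representations, contrasting agglomerative hierarchical clustering with K-Means. This is not the authors' implementation: the model name, the chosen layer, the example sentences, and the cluster count are illustrative assumptions, and standard scikit-learn and Hugging Face APIs stand in for whatever tooling the paper actually uses.

```python
# Minimal sketch: cluster token representations from one layer of a PLM
# to obtain "latent concepts" (groups of tokens with similar representations).
import numpy as np
import torch
from sklearn.cluster import AgglomerativeClustering, KMeans
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)

sentences = [
    "The bank raised interest rates.",
    "She sat on the river bank.",
]

reps = []
for sent in sentences:
    inputs = tokenizer(sent, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[9]   # one intermediate layer (assumption)
    reps.append(hidden[0, 1:-1].numpy())            # drop [CLS] / [SEP] tokens
X = np.concatenate(reps)                            # shape: (num_tokens, hidden_dim)

# Agglomerative hierarchical clustering: quadratic in the number of tokens,
# which is what limits concept discovery to small datasets.
agg_labels = AgglomerativeClustering(n_clusters=5).fit_predict(X)

# K-Means: far cheaper per iteration, so it scales to many more tokens,
# larger models, and (per the paper) LLMs and phrasal concepts.
km_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
```

In practice the quality of the resulting clusters would then be assessed with the metrics the paper proposes; the toy corpus and cluster count here are only meant to show where the efficiency difference between the two algorithms comes from.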