Anchor Points: Benchmarking Models with Much Fewer Examples
Rajan Pathe Vivek, Kawin Ethayarajh, Diyi Yang, Douwe Kiela
Main: Efficient Low-resource methods in NLP (Oral Paper)
Session 3: Efficient Low-resource methods in NLP (Oral)
Conference Room: Carlson
Conference Time: March 18, 14:00-15:30 CET (Europe/Malta)
Abstract:
Modern language models often exhibit powerful but brittle behavior, leading to the development of larger and more diverse benchmarks to reliably assess their behavior. Here, we suggest that model performance can be benchmarked and elucidated with much smaller evaluation sets. We first show that in six popular language classification benchmarks, model confidence in the correct class on many pairs of points is strongly correlated across models. We build upon this phenomenon to propose Anchor Point Selection, a technique to select small subsets of datasets that capture model behavior across the entire dataset. Anchor points reliably rank models: across 87 diverse language model-prompt pairs, evaluating models using 1-30 anchor points outperforms uniform sampling and other baselines at accurately ranking models. Moreover, just a dozen anchor points can be used to estimate model per-class predictions on all other points in a dataset with low error, sufficient for gauging where the model is likely to fail. Lastly, we present Anchor Point Maps for visualizing these insights and facilitating comparisons of the performance of different models on various regions within the dataset distribution.
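To make the abstract's recipe concrete, below is a minimal, self-contained NumPy sketch of one way the idea could be implemented. It is not the authors' released code: the greedy farthest-point selection, the 1-nearest-anchor estimator, and names such as point_distances and select_anchor_points are illustrative assumptions. The sketch encodes only the observation stated in the abstract: points whose correct-class confidences are strongly correlated across a pool of source models behave interchangeably, so a handful of representative "anchor" points can stand in for the full evaluation set, and a new model's confidence on every other point can be estimated from its confidence on the behaviorally nearest anchor.

```python
import numpy as np


def point_distances(confidences: np.ndarray) -> np.ndarray:
    """1 - Pearson correlation between evaluation points, measured across models.

    confidences: (n_models, n_points) array of each source model's
    confidence in the correct class on each evaluation point.
    """
    return 1.0 - np.corrcoef(confidences.T)


def select_anchor_points(dist: np.ndarray, n_anchors: int) -> list[int]:
    """Greedy farthest-point selection so every point is close to some anchor."""
    anchors = [int(dist.sum(axis=1).argmin())]        # start from the most central point
    while len(anchors) < n_anchors:
        d_to_nearest = dist[:, anchors].min(axis=1)   # current coverage of each point
        anchors.append(int(d_to_nearest.argmax()))    # add the worst-covered point
    return anchors


def estimate_confidences(dist: np.ndarray, anchors: list[int],
                         anchor_conf: np.ndarray) -> np.ndarray:
    """Estimate a new model's confidence on every point from its confidence
    on the behaviorally nearest anchor (a simple 1-nearest-anchor rule)."""
    nearest = dist[:, anchors].argmin(axis=1)         # index into `anchors`, per point
    return anchor_conf[nearest]


if __name__ == "__main__":
    # Random data stands in for real model confidences in this demo.
    rng = np.random.default_rng(0)
    source_conf = rng.uniform(size=(8, 500))          # 8 source models x 500 eval points
    dist = point_distances(source_conf)
    anchors = select_anchor_points(dist, n_anchors=12)
    # Evaluate the new model on the 12 anchors only, then extrapolate to all 500 points.
    new_model_on_anchors = rng.uniform(size=len(anchors))
    estimated = estimate_confidences(dist, anchors, new_model_on_anchors)
    print(anchors, estimated.shape)
```

The greedy coverage heuristic above is a stand-in for whatever selection procedure the paper actually uses; the point of the sketch is only to show how cross-model confidence correlations can drive both the choice of anchor points and the extrapolation of a model's predictions to the rest of the dataset.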