The Parrot Dilemma: Human-Labeled vs. LLM-augmented Data in Classification Tasks

Anders Giovanni Møller, Arianna Pera, Jacob Aarup Dalsgaard, Luca Maria Aiello

Main: Semantics and Applications Oral Paper

Session 9: Semantics and Applications (Oral)
Conference Room: Marie Louise 2
Conference Time: March 20, 09:00-10:30 (CET) (Europe/Malta)
Abstract: In the realm of Computational Social Science (CSS), practitioners often navigate complex, low-resource domains and face the costly and time-intensive challenges of acquiring and annotating data. We aim to establish a set of guidelines to address such challenges, comparing the use of human-labeled data with synthetically generated data from GPT-4 and Llama-2 across ten distinct CSS classification tasks of varying complexity. We also examine the impact of training-data size on performance. Our findings reveal that models trained on human-labeled data consistently perform on par with or better than their synthetically augmented counterparts. Nevertheless, synthetic augmentation proves beneficial, particularly for improving performance on rare classes in multi-class tasks. Finally, we leverage GPT-4 and Llama-2 for zero-shot classification and find that, while they generally perform strongly, they often fall short of specialized classifiers trained on moderately sized training sets.