AnaDE1.0: A Novel Data Set for Benchmarking Analogy Detection and Extraction

Bhavya Bhavya, Shradha Sehgal, Jinjun Xiong, ChengXiang Zhai

Main: Resources and Evaluation Oral Paper

Session 10: Resources and Evaluation (Oral)
Conference Room: Marie Louise 1
Conference Time: March 20, 11:00-12:30 (CET) (Europe/Malta)
Abstract: Textual analogies that make comparisons between two concepts are often used for explaining complex ideas, creative writing, and scientific discovery. In this paper, we propose and study a new task, called Analogy Detection and Extraction (AnaDE), which includes three synergistic sub-tasks: 1) detecting documents containing analogies, 2) extracting text segments that make up the analogy, and 3) identifying the (source and target) concepts being compared. To facilitate the study of this new task, we create a benchmark dataset by scraping Metamia.com and investigate the performance of state-of-the-art models on all sub-tasks to establish first-generation benchmark results for this new task. We find that the Longformer model achieves the best performance on all three sub-tasks, demonstrating its effectiveness in handling long texts. Moreover, smaller models fine-tuned on our dataset outperform non-finetuned ChatGPT, suggesting high task difficulty. Overall, the models achieve high performance on document detection, suggesting that this sub-task could support applications such as analogy search engines. In contrast, there remains large room for improvement on the segment and concept extraction tasks.
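The three sub-tasks can be illustrated with a minimal data model. This is our own sketch of what an annotation for one document might look like; the field names and the example document are illustrative assumptions, not the dataset's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnalogyAnnotation:
    # Sub-task 1: does the document contain an analogy?
    contains_analogy: bool
    # Sub-task 2: the text segment that makes up the analogy (None if absent)
    segment: Optional[str] = None
    # Sub-task 3: the concepts being compared
    source_concept: Optional[str] = None  # familiar concept used to explain
    target_concept: Optional[str] = None  # concept being explained

# Hypothetical document and its annotation
doc = "A cell is like a factory: its organelles are the machines."
ann = AnalogyAnnotation(
    contains_analogy=True,
    segment=doc,
    source_concept="factory",
    target_concept="cell",
)
print(ann.contains_analogy, ann.source_concept, ann.target_concept)
```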