LegalLens: Leveraging LLMs for Legal Violation Identification in Unstructured Text
Dor Bernsohn, Gil Semo, Yaron Vazana, Gila Hayat, Ben Hagag, Joel Niklaus, Rohit Saha, Kyryl Truskovskyi
Main: NLP Applications Oral Paper
Session 6: NLP Applications (Oral)
Conference Room: Marie Louise 2
Conference Time: March 19, 10:30-12:00 (CET) (Europe/Malta)
Abstract:
In this study, we focus on two main tasks: detecting legal violations within unstructured textual data, and associating these violations with potentially affected individuals. We constructed two datasets using Large Language Models (LLMs), which were subsequently validated by domain expert annotators. Both tasks were designed specifically for the context of class-action cases. The experimental design incorporated fine-tuning models from the BERT family and open-source LLMs, as well as few-shot experiments with closed-source LLMs. Our results, with F1-scores of 62.69% (violation identification) and 81.02% (victim association), show that our datasets and setups can be used for both tasks. Finally, we publicly release the datasets and the code used for the experiments in order to advance further research in the area of legal natural language processing (NLP).