Quality Does Matter: A Detailed Look at the Quality and Utility of Web-Mined Parallel Corpora

Surangika Ranathunga, Nisansa de Silva, Velayuthan Menan, Aloka Fernando, Charitha S.M. Rathnayake

Main Track: Efficient Low-resource Methods in NLP (Oral Paper)

Session 3: Efficient Low-resource methods in NLP (Oral)
Conference Room: Carlson
Conference Time: March 18, 14:00-15:30 CET (Europe/Malta)
Abstract: We conducted a detailed analysis of the quality of web-mined corpora for two low-resource languages, forming three language pairs: English-Sinhala, English-Tamil, and Sinhala-Tamil. We ranked each corpus according to a similarity measure and carried out intrinsic and extrinsic evaluations on different portions of the ranked corpora. We show that there are significant quality differences between different portions of web-mined corpora and that the quality varies across languages and datasets. We also show that, for some web-mined datasets, Neural Machine Translation (NMT) models trained on their highest-ranked 25k sentence pairs can be on par with models trained on human-curated datasets.
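
The ranking-and-selection step described in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration that assumes cosine similarity over LaBSE sentence embeddings as the similarity measure; the paper's actual measure may differ, and the function name rank_parallel_corpus and its parameters are illustrative, not taken from the paper.

```python
# Hypothetical sketch: rank aligned sentence pairs from a web-mined
# parallel corpus by a cross-lingual similarity score and keep the
# highest-ranked top_k pairs (e.g., 25k, as in the abstract).
# LaBSE + cosine similarity is an assumption, not the paper's method.
from sentence_transformers import SentenceTransformer
import numpy as np

def rank_parallel_corpus(src_sents, tgt_sents, top_k=25_000):
    """Return the top_k sentence pairs, ranked by embedding similarity."""
    model = SentenceTransformer("sentence-transformers/LaBSE")
    # Unit-normalized embeddings make the dot product a cosine similarity.
    src_emb = model.encode(src_sents, normalize_embeddings=True)
    tgt_emb = model.encode(tgt_sents, normalize_embeddings=True)
    scores = np.sum(src_emb * tgt_emb, axis=1)
    order = np.argsort(-scores)  # highest similarity first
    return [(src_sents[i], tgt_sents[i], float(scores[i]))
            for i in order[:top_k]]
```

Under this sketch, the "different portions" evaluated in the paper would correspond to slices of the sorted list (e.g., the top 25k versus lower-ranked bands), so the same ranking supports both the intrinsic and extrinsic comparisons.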