Translation Errors Significantly Impact Low-Resource Languages in Cross-Lingual Learning

Ashish Sunil Agrawal, Barah Fazili, Preethi Jyothi

Main: Multilinguality and Language Diversity 1 Oral Paper

Session 7: Multilinguality and Language Diversity 1 (Oral)
Conference Room: Marie Louise 1
Conference Time: March 19, 14:00-15:30 (CET) (Europe/Malta)
Abstract: Popular benchmarks used to evaluate cross-lingual language understanding (e.g., XNLI) consist of parallel versions of English evaluation sets in multiple target languages, created with the help of professional translators. When creating such parallel data, it is critical to ensure high-quality translations for all target languages for an accurate characterization of cross-lingual transfer. In this work, we find that translation inconsistencies do exist and, interestingly, that they disproportionately impact low-resource languages in XNLI. To identify such inconsistencies, we propose measuring the gap in performance between zero-shot evaluations on the human-translated and machine-translated target text across multiple target languages; relatively large gaps are indicative of translation errors. We also corroborate that translation errors exist for two target languages, namely Hindi and Urdu, by manually reannotating human-translated test instances in these two languages and finding poor agreement with the original English labels these instances were supposed to inherit.
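The gap-based heuristic described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sign convention (machine-translated accuracy minus human-translated accuracy), the threshold, and all accuracy numbers are assumptions made for the example.

```python
# Hypothetical sketch of the gap heuristic from the abstract: for each
# target language, compare zero-shot accuracy on the human-translated (HT)
# test set against the machine-translated (MT) one; a relatively large gap
# flags possible translation errors in the HT set.
# Sign convention (MT - HT) and all numbers are illustrative assumptions.

def translation_gap(acc_ht: float, acc_mt: float) -> float:
    """Gap between MT and HT zero-shot accuracy (assumed convention: MT - HT)."""
    return acc_mt - acc_ht

def flag_suspect_languages(ht_scores: dict, mt_scores: dict,
                           threshold: float = 2.0) -> list:
    """Return languages whose MT-vs-HT accuracy gap exceeds the threshold."""
    return sorted(
        lang for lang in ht_scores
        if translation_gap(ht_scores[lang], mt_scores[lang]) > threshold
    )

# Illustrative accuracies (percent); NOT actual XNLI results.
ht = {"fr": 79.1, "hi": 70.2, "ur": 66.5}
mt = {"fr": 79.4, "hi": 73.8, "ur": 70.1}

print(flag_suspect_languages(ht, mt))
```

With these made-up numbers, Hindi and Urdu show gaps above the threshold while French does not, mirroring the paper's finding that low-resource languages are disproportionately affected.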