Multilingual NLP

An Exploration of Data Augmentation Techniques for Improving English to Tigrinya Translation

It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, often requiring large amounts of auxiliary data to achieve competitive results. An effective method of generating auxiliary data is back-translation of target-language sentences. In this work, we present a case study on Tigrinya, in which we investigate several back-translation methods for generating synthetic source sentences. We find that in low-resource conditions, back-translation by pivoting through a higher-resource language related to the target language proves most effective, resulting in substantial improvements over baselines.
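To make the pivot back-translation idea concrete, below is a minimal sketch of how synthetic English-Tigrinya pairs could be produced from monolingual Tigrinya text. It is not the paper's implementation: the translator callables, function names, and batch size are illustrative placeholders, and in practice they would wrap trained NMT systems (e.g., Tigrinya-to-pivot and pivot-to-English models, with a related higher-resource language such as the pivot).

```python
# A minimal sketch of pivot back-translation for synthetic data generation.
# The translator callables below are placeholders, not components from the paper;
# in practice they would wrap trained NMT models.

from typing import Callable, Iterable, List, Tuple

Translator = Callable[[List[str]], List[str]]


def back_translate_via_pivot(
    target_sentences: Iterable[str],
    translate_tgt_to_pivot: Translator,
    translate_pivot_to_src: Translator,
    batch_size: int = 32,
) -> List[Tuple[str, str]]:
    """Generate synthetic (source, target) pairs from monolingual target text.

    Each target (Tigrinya) sentence is first translated into a related,
    higher-resource pivot language, and that pivot output is then translated
    into the source language (English). The synthetic English sentence is
    paired with the original Tigrinya sentence for NMT training.
    """
    sentences = list(target_sentences)
    pairs: List[Tuple[str, str]] = []
    for start in range(0, len(sentences), batch_size):
        batch = sentences[start:start + batch_size]
        pivot_batch = translate_tgt_to_pivot(batch)           # target -> pivot
        synthetic_src = translate_pivot_to_src(pivot_batch)   # pivot -> source
        pairs.extend(zip(synthetic_src, batch))
    return pairs


if __name__ == "__main__":
    # Dummy identity "translators" for illustration only.
    identity = lambda batch: batch
    print(back_translate_via_pivot(["ሰላም ዓለም"], identity, identity))
```

The resulting synthetic pairs would then be mixed with the available genuine parallel data when training the English-to-Tigrinya model.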


People

Sachin Kumar
PhD '23 @ CMU -> Postdoc @ AI2 -> Assistant Professor @ OSU
Zirui Wang
PhD '22 @ CMU -> Google Research
Chan Young Park
PhD Student, CMU