Multilingual NLP
An Exploration of Data Augmentation Techniques for Improving English to Tigrinya Translation
It has been shown that the performance of neural machine translation (NMT) drops sharply in low-resource conditions, often requiring large amounts of auxiliary data to achieve competitive results. An effective method of generating auxiliary data is back-translation of target-language sentences. In this work, we present a case study of Tigrinya in which we investigate several back-translation methods to generate synthetic source sentences. We find that in low-resource conditions, back-translation by pivoting through a higher-resource language related to the target language proves most effective, resulting in substantial improvements over baselines.
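As a rough illustration of the pivot approach, here is a minimal Python sketch of pivot back-translation. It assumes hypothetical translation callables `ti2am` and `am2en` (any NMT toolkit could supply them) and Amharic as the related higher-resource pivot; the models and pivot choice in the paper may differ.

```python
# Minimal sketch of pivot back-translation for en->ti NMT.
# Hypothetical translation callables (not from the paper):
#   ti2am(sents): Tigrinya -> Amharic (related pivot language)
#   am2en(sents): Amharic  -> English (higher-resource direction)
from typing import Callable, List, Tuple

def pivot_back_translate(
    tigrinya_mono: List[str],
    ti2am: Callable[[List[str]], List[str]],
    am2en: Callable[[List[str]], List[str]],
) -> List[Tuple[str, str]]:
    """Generate synthetic (English, Tigrinya) training pairs from
    monolingual Tigrinya text by pivoting through Amharic."""
    amharic = ti2am(tigrinya_mono)   # first hop: ti -> am
    english = am2en(amharic)         # second hop: am -> en
    # Pair each synthetic English source with its original Tigrinya target.
    return list(zip(english, tigrinya_mono))
```

The synthetic pairs produced this way would then be mixed with the genuine parallel data when training the English-to-Tigrinya model, as is standard for back-translation.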
People
PhD '23 @ CMU -> Postdoc @ AI2 -> Assistant Professor @ OSU
PhD '22 @ CMU -> Google Research -> Apple Research
PhD '24 @ CMU -> Postdoc @ UW, MSR -> Assistant Professor @ Purdue University
Related Papers
- An Exploration of Data Augmentation Techniques for Improving English to Tigrinya Translation (Kidane et al., 2021), AfricaNLP
- On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment (Wang et al., 2020), EMNLP
- Automatic Extraction of Rules Governing Morphological Agreement (Chaudhary et al., 2020), EMNLP
- A Deep Reinforced Model for Cross-Lingual Summarization with Bilingual Semantic Similarity Reward (Dou et al., 2020), WNGT
- Balancing Training for Multilingual Neural Machine Translation (Wang et al., 2020), ACL
- Learning to Generate Word- and Phrase-Embeddings for Efficient Phrase-Based Neural Machine Translation (Park and Tsvetkov, 2019), WNGT
- A Margin-based Loss with Synthetic Negative Samples for Continuous-output Machine Translation (Bhat et al., 2019), WNGT
- CMU-01 at the SIGMORPHON 2019 Shared Task on Crosslinguality and Context in Morphology (Chaudhary et al., 2019), SIGMORPHON
- Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs (Kumar and Tsvetkov, 2019), ICLR