Matīss Rikters

Abstract
Large parallel corpora that are automatically obtained from the web, documents or elsewhere often contain many corrupted segments that are bound to negatively affect the quality of the systems and models trained on them. This paper describes frequent problems found in such data and how they affect neural machine translation systems, as well as how to identify and deal with them. The solutions are summarised in a set of scripts that remove problematic sentences from input corpora.
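The paper's own filtering scripts are not reproduced here; as a rough illustration of the kind of rule-based corpus filtering the abstract describes, the sketch below applies a few common heuristics to a parallel corpus: sentence-length limits, a source/target length-ratio check, rejection of identical (untranslated) pairs, rejection of sentences dominated by non-alphabetic characters, and removal of exact duplicate pairs. All function names, thresholds, and file-handling details are illustrative assumptions, not the author's actual implementation.

```python
import sys


def keep_pair(src, tgt, min_len=1, max_len=100, max_ratio=3.0):
    """Return True if a sentence pair passes basic quality filters (illustrative thresholds)."""
    src_tokens, tgt_tokens = src.split(), tgt.split()
    # Drop empty or overly long sentences.
    if not (min_len <= len(src_tokens) <= max_len):
        return False
    if not (min_len <= len(tgt_tokens) <= max_len):
        return False
    # Drop pairs whose lengths differ too much (likely misalignments).
    ratio = max(len(src_tokens), len(tgt_tokens)) / max(1, min(len(src_tokens), len(tgt_tokens)))
    if ratio > max_ratio:
        return False
    # Drop pairs where source and target are identical (untranslated copies).
    if src.strip() == tgt.strip():
        return False
    # Drop sentences dominated by non-alphabetic characters (markup, tables, code, etc.).
    for sent in (src, tgt):
        alpha = sum(ch.isalpha() for ch in sent)
        if alpha < 0.5 * max(1, len(sent.replace(" ", ""))):
            return False
    return True


def filter_corpus(src_path, tgt_path, out_src_path, out_tgt_path):
    """Read a line-aligned parallel corpus and keep only pairs that pass the filters."""
    seen = set()
    with open(src_path, encoding="utf-8") as fs, open(tgt_path, encoding="utf-8") as ft, \
         open(out_src_path, "w", encoding="utf-8") as out_s, \
         open(out_tgt_path, "w", encoding="utf-8") as out_t:
        for src, tgt in zip(fs, ft):
            src, tgt = src.rstrip("\n"), tgt.rstrip("\n")
            # Skip exact duplicate pairs and pairs that fail any filter.
            key = (src, tgt)
            if key in seen or not keep_pair(src, tgt):
                continue
            seen.add(key)
            out_s.write(src + "\n")
            out_t.write(tgt + "\n")


if __name__ == "__main__":
    # Usage: python filter_corpus.py corpus.src corpus.tgt filtered.src filtered.tgt
    filter_corpus(*sys.argv[1:5])
```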
Benchmarks
| Benchmark | Methodology | BLEU |
|---|---|---|
| machine-translation-on-wmt-2017-english | Transformer trained on highly filtered data | 22.89 |
| machine-translation-on-wmt-2017-latvian | Transformer trained on highly filtered data | 24.37 |
| machine-translation-on-wmt-2018-english-1 | Transformer trained on highly filtered data | 17.40 |
| machine-translation-on-wmt-2018-finnish | Transformer trained on highly filtered data | 24.00 |