Methodology for the Evaluation of Machine Translation Quality

Authors

  • Ani Ananyan
  • Roza Avagyan, Aorist Translation Agency

DOI:

https://doi.org/10.46991/TSTP/2021.1.1.133

Keywords:

systems, human metrics, machine translation, methodology

Abstract

Along with the development and widespread dissemination of translation by artificial intelligence, it is becoming increasingly important to continuously evaluate and improve its quality and to use it as a tool for the modern translator. In our research, we compared five sentences translated from Armenian into Russian and English by Google Translate, Yandex Translate and two models of the translation system of the Armenian company Avromic, in order to find out how effective these translation systems are when working with Armenian. We also sought to determine how effective they would be as a translation tool, and in the learning process, when their output is further post-edited.

As no comprehensive and widely accepted method of human evaluation for machine translation currently exists, we developed our own evaluation method and criteria by studying the world's best-known methods of machine translation evaluation. We also used post-editing distance as an evaluation criterion. Taking one sentence as an example, the article presents the evaluation process in detail according to the selected and developed criteria. Finally, we present the results of the research and draw the appropriate conclusions.
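The article does not spell out how post-editing distance is computed. A common way to operationalize it (an illustrative sketch, not the authors' exact formula; the function names are assumptions) is the token-level Levenshtein edit distance between the raw machine output and its post-edited version, normalized by the length of the post-edited text:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions
    needed to turn sequence a into sequence b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]


def post_edit_distance(mt_output, post_edited):
    """Token-level edit distance, normalized by the length of the
    post-edited text: 0.0 means no editing was needed."""
    mt, pe = mt_output.split(), post_edited.split()
    return levenshtein(mt, pe) / max(len(pe), 1)
```

For example, `post_edit_distance("cat sat mat", "the cat sat on the mat")` yields 0.5: three insertions are needed against six reference tokens. Lower values indicate less post-editing effort.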

References

Callison-Burch, Chris, Cameron Fordyce, and Philipp Koehn. Evaluation of Machine Translation: http://www.mt-archive.info/ACL-SMT-2007-Callison-Burch.pdf.

Denkowski, Michael, and Alon Lavie. Choosing the Right Evaluation for Machine Translation: an Examination of Annotator and Automatic Metric Performance on Human Judgment Tasks: https://www.cs.cmu.edu/~alavie/papers/AMTA-10-Denkowski.pdf.

Nagao, Makoto, and Shinsuke Mori. A New Method of N-gram Statistics for Large Number of n and Automatic Extraction of Words and Phrases from Large Text Data of Japanese: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.57.8416&rep=rep1&type=pdf.

Published

2021-06-30

How to Cite

Ananyan, A., & Avagyan, R. (2021). Methodology for the Evaluation of Machine Translation Quality. Translation Studies: Theory and Practice, 1(1(1)), 124–133. https://doi.org/10.46991/TSTP/2021.1.1.133

Section

Articles