Linguistic measures for automatic machine translation evaluation

Jesús Giménez, Lluís Màrquez

Research output: Contribution to journal › Article

14 Citations (Scopus)

Abstract

Assessing the quality of candidate translations involves diverse linguistic facets. However, most automatic evaluation methods in use today rely on limited quality assumptions, such as lexical similarity. This introduces a bias into the development cycle that has, in some cases, been reported to have very negative consequences. To tackle this methodological problem, we explore a novel path towards heterogeneous automatic Machine Translation evaluation. We have compiled a rich set of specialized similarity measures operating at different linguistic dimensions and analyzed their individual and collective behaviour over a wide range of evaluation scenarios. Results show that measures based on syntactic and semantic information provide more reliable system rankings than lexical measures, especially when the systems under evaluation are based on different paradigms. At the sentence level, while some linguistic measures perform better than most lexical measures, others perform substantially worse, mainly due to parsing problems. Their scores are, however, suitable for combination, yielding substantially improved evaluation quality.
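The abstract's closing claim, that individually weak measures are still "suitable for combination", can be illustrated with a minimal sketch in Python. It assumes a simple uniform average of min-max-normalized scores, one common combination scheme rather than necessarily the paper's exact method; the function name combine_scores and the toy data are hypothetical.

```python
# Minimal sketch (not the authors' implementation): combine heterogeneous
# similarity measures by uniformly averaging their min-max-normalized
# scores, so measures on different scales become comparable before mixing.

def combine_scores(measure_scores: dict[str, list[float]]) -> list[float]:
    """Return one combined score per candidate translation.

    measure_scores maps a measure name (e.g. a lexical or syntactic
    metric) to one raw score per candidate.
    """
    n = len(next(iter(measure_scores.values())))
    combined = [0.0] * n
    for scores in measure_scores.values():
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0  # avoid division by zero on constant scores
        for i, s in enumerate(scores):
            combined[i] += (s - lo) / span
    return [c / len(measure_scores) for c in combined]

# Toy example: three candidate translations scored by two measures.
scores = {
    "lexical":   [0.42, 0.55, 0.50],
    "syntactic": [0.30, 0.25, 0.45],
}
print(combine_scores(scores))  # one combined score per candidate
```

Normalizing before averaging is the key design choice here: it keeps a measure with a naturally wide score range from dominating the combination.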

Original language: English
Pages (from-to): 209-240
Number of pages: 32
Journal: Machine Translation
Volume: 24
Issue number: 3-4
DOI
Publication status: Published - 1 Dec 2010
Externally published: Yes

Keywords

  • Automatic evaluation methods
  • Combined measures
  • Linguistic analysis
  • Machine translation
  • Semantic similarity
  • Syntactic similarity

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
