Identifying useful human correction feedback from an on-line machine translation service

Alberto Barrón, Lluís Màrquez, Carlos A. Henríquez Q., Lluís Formiga, Enrique Romero, Jonathan May

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Post-editing feedback provided by users of on-line translation services offers an excellent opportunity for automatic improvement of statistical machine translation (SMT) systems. However, feedback provided by casual users is very noisy, and must be automatically filtered in order to identify the potentially useful cases. We present a study on automatic feedback filtering in a real weblog collected from Reverso.net. We extend and re-annotate a training corpus, define an extended set of simple features and approach the problem as a binary classification task, experimenting with linear and kernel-based classifiers and feature selection. Results on the feedback filtering task show a significant improvement over the majority class, but also a precision ceiling around 70-80%. This reflects the inherent difficulty of the problem and indicates that shallow features cannot fully capture the semantic nature of the problem. Despite the modest results on the filtering task, the classifiers are proven effective in an application-based evaluation. The incorporation of a filtered set of feedback instances selected from a larger corpus significantly improves the performance of a phrase-based SMT system, according to a set of standard evaluation metrics.
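As a rough illustration of the classification setup the abstract describes (not the authors' code, features, or data), the following Python sketch contrasts a linear classifier with a kernel-based one, combined with simple feature selection, on synthetic stand-ins for shallow features of a feedback instance. Feature names and the toy labeling rule are purely hypothetical.

# Minimal sketch of the filtering task: shallow features, a binary
# "useful feedback" label, and a linear vs. kernel-based comparison
# with feature selection. All data below is synthetic and illustrative.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-ins for shallow features of a (source, MT output, user edit)
# triple, e.g. length ratios and edit-distance-style scores.
n = 1000
X = np.column_stack([
    rng.normal(1.0, 0.3, n),   # length ratio: user edit / MT output
    rng.normal(0.4, 0.2, n),   # normalized edit distance MT output vs. edit
    rng.normal(0.0, 1.0, n),   # pseudo fluency score of the user edit
    rng.normal(0.0, 1.0, n),   # pseudo fluency score of the MT output
])
# Synthetic label: "useful" when the edit differs moderately and looks fluent.
y = ((X[:, 1] > 0.3) & (X[:, 1] < 0.7) & (X[:, 2] > X[:, 3])).astype(int)

models = {
    "linear (logistic regression)": make_pipeline(
        StandardScaler(), SelectKBest(f_classif, k=3),
        LogisticRegression(max_iter=1000)),
    "kernel-based (RBF SVM)": make_pipeline(
        StandardScaler(), SelectKBest(f_classif, k=3),
        SVC(kernel="rbf", C=1.0)),
}

for name, model in models.items():
    # Compare against the majority-class baseline, mirroring the
    # evaluation framing in the abstract.
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
print(f"majority-class baseline = {max(y.mean(), 1 - y.mean()):.3f}")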

Original language: English
Title of host publication: IJCAI International Joint Conference on Artificial Intelligence
Pages: 2057-2063
Number of pages: 7
Publication status: Published - 1 Dec 2013
Externally published: Yes
Event: 23rd International Joint Conference on Artificial Intelligence, IJCAI 2013 - Beijing, China
Duration: 3 Aug 2013 - 9 Aug 2013

Other

Other: 23rd International Joint Conference on Artificial Intelligence, IJCAI 2013
Country: China
City: Beijing
Period: 3/8/13 - 9/8/13

ASJC Scopus subject areas

  • Artificial Intelligence

Cite this

Barrón, A., Màrquez, L., Henríquez Q., C. A., Formiga, L., Romero, E., & May, J. (2013). Identifying useful human correction feedback from an on-line machine translation service. In IJCAI International Joint Conference on Artificial Intelligence (pp. 2057-2063).

@inproceedings{a9b8edb9f3b648f29f7b5c7db61b9d2f,
title = "Identifying useful human correction feedback from an on-line machine translation service",
abstract = "Post-editing feedback provided by users of on-line translation services offers an excellent opportunity for automatic improvement of statistical machine translation (SMT) systems. However, feedback provided by casual users is very noisy, and must be automatically filtered in order to identify the potentially useful cases. We present a study on automatic feedback filtering in a real weblog collected from Reverso.net. We extend and re-annotate a training corpus, define an extended set of simple features and approach the problem as a binary classification task, experimenting with linear and kernelbased classifiers and feature selection. Results on the feedback filtering task show a significant improvement over the majority class, but also a precision ceiling around 70-80{\%}. This reflects the inherent difficulty of the problem and indicates that shallow features cannot fully capture the semantic nature of the problem. Despite the modest results on the filtering task, the classifiers are proven effective in an application-based evaluation. The incorporation of a filtered set of feedback instances selected from a larger corpus significantly improves the performance of a phrase-based SMT system, according to a set of standard evaluation metrics.",
author = "Alberto Barron and Lluis Marques and {Henr{\'i}quez Q.}, {Carlos A.} and Llu{\'i}s Formiga and Enrique Romero and Jonathan May",
year = "2013",
month = "12",
day = "1",
language = "English",
isbn = "9781577356332",
pages = "2057--2063",
booktitle = "IJCAI International Joint Conference on Artificial Intelligence",

}

TY - GEN

T1 - Identifying useful human correction feedback from an on-line machine translation service

AU - Barrón, Alberto

AU - Màrquez, Lluís

AU - Henríquez Q., Carlos A.

AU - Formiga, Lluís

AU - Romero, Enrique

AU - May, Jonathan

PY - 2013/12/1

Y1 - 2013/12/1

N2 - Post-editing feedback provided by users of on-line translation services offers an excellent opportunity for automatic improvement of statistical machine translation (SMT) systems. However, feedback provided by casual users is very noisy, and must be automatically filtered in order to identify the potentially useful cases. We present a study on automatic feedback filtering in a real weblog collected from Reverso.net. We extend and re-annotate a training corpus, define an extended set of simple features and approach the problem as a binary classification task, experimenting with linear and kernel-based classifiers and feature selection. Results on the feedback filtering task show a significant improvement over the majority class, but also a precision ceiling around 70-80%. This reflects the inherent difficulty of the problem and indicates that shallow features cannot fully capture the semantic nature of the problem. Despite the modest results on the filtering task, the classifiers are proven effective in an application-based evaluation. The incorporation of a filtered set of feedback instances selected from a larger corpus significantly improves the performance of a phrase-based SMT system, according to a set of standard evaluation metrics.

AB - Post-editing feedback provided by users of on-line translation services offers an excellent opportunity for automatic improvement of statistical machine translation (SMT) systems. However, feedback provided by casual users is very noisy, and must be automatically filtered in order to identify the potentially useful cases. We present a study on automatic feedback filtering in a real weblog collected from Reverso.net. We extend and re-annotate a training corpus, define an extended set of simple features and approach the problem as a binary classification task, experimenting with linear and kernel-based classifiers and feature selection. Results on the feedback filtering task show a significant improvement over the majority class, but also a precision ceiling around 70-80%. This reflects the inherent difficulty of the problem and indicates that shallow features cannot fully capture the semantic nature of the problem. Despite the modest results on the filtering task, the classifiers are proven effective in an application-based evaluation. The incorporation of a filtered set of feedback instances selected from a larger corpus significantly improves the performance of a phrase-based SMT system, according to a set of standard evaluation metrics.

UR - http://www.scopus.com/inward/record.url?scp=84896062926&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84896062926&partnerID=8YFLogxK

M3 - Conference contribution

SN - 9781577356332

SP - 2057

EP - 2063

BT - IJCAI International Joint Conference on Artificial Intelligence

ER -