Coreference resolution: An empirical study based on SemEval-2010 shared Task 1

Lluís Màrquez, Marta Recasens, Emili Sapena

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

This paper presents an empirical evaluation of coreference resolution that covers several interrelated dimensions. The main goal is to complete the comparative analysis from the SemEval-2010 task on Coreference Resolution in Multiple Languages. To do so, the study restricts the number of languages and systems involved, but extends and deepens the analysis of the system outputs, including a more qualitative discussion. The paper compares three automatic coreference resolution systems for three languages (English, Catalan, and Spanish) in four evaluation settings, using four evaluation measures. Since our main goal is not to compare resolution algorithms, they are used merely as tools to shed light on the different conditions under which coreference resolution is evaluated. Although the dimensions are strongly interdependent, making it very difficult to extract general principles, the study reveals a series of interesting issues in relation to coreference resolution: the portability of systems across languages, the influence of the type and quality of input annotations, and the behavior of the scoring measures.

Original language: English
Pages (from-to): 661-694
Number of pages: 34
Journal: Language Resources and Evaluation
Volume: 47
Issue number: 3
Publication status: Published - 1 Sep 2013

Keywords

  • Coreference resolution and evaluation
  • Discourse entities
  • Machine learning based NLP tools
  • NLP system analysis
  • SemEval-2010 (Task 1)

ASJC Scopus subject areas

  • Language and Linguistics
  • Education
  • Linguistics and Language
  • Library and Information Sciences