Don't understand a measure? Learn it

Structured prediction for coreference resolution optimizing its measures

Iryna Haponchyk, Alessandro Moschitti

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

An essential aspect of structured prediction is the evaluation of an output structure against the gold standard. Especially in the loss-augmented setting, the need to find the max-violating constraint has severely limited the expressivity of effective loss functions. In this paper, we trade off exact computation to enable the use of more complex loss functions for coreference resolution (CR). Most noteworthily, we show that such functions can be (i) automatically learned, also from controversial but commonly accepted CR measures, e.g., MELA, and (ii) successfully used in learning algorithms. An accurate model comparison on the standard CoNLL-2012 setting shows the benefit of more expressive losses for Arabic and English data.
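The abstract's central idea, loss-augmented inference with an arbitrary (possibly learned) loss in place of a decomposable one, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; the names (`loss_augmented_argmax`, `featurize`, the candidate-pool setup) are hypothetical, and scoring an explicit candidate set stands in for the approximate inference that the paper trades exactness for.

```python
# Illustrative sketch (assumption, not the paper's code): loss-augmented
# inference over an explicit pool of candidate coreference clusterings.
# Because a complex loss such as 1 - MELA does not decompose over pairwise
# decisions, exact max-violation search is given up: each candidate is
# scored with model score + loss, and the most violating one in the pool wins.
from typing import Callable, Dict, List, Set

Clustering = List[Set[str]]  # a partition of mention ids into entities


def dot(w: Dict[str, float], feats: Dict[str, float]) -> float:
    """Sparse dot product between weight and feature vectors."""
    return sum(w.get(f, 0.0) * v for f, v in feats.items())


def loss_augmented_argmax(
    w: Dict[str, float],
    candidates: List[Clustering],
    featurize: Callable[[Clustering], Dict[str, float]],
    loss: Callable[[Clustering], float],
) -> Clustering:
    """Return the candidate maximizing model score + loss.

    `loss` is treated as a black box: it could compute 1 minus a CR metric
    against the gold clustering, or be a regressor trained to approximate
    such a metric (the "learned loss" idea in the abstract).
    """
    return max(candidates, key=lambda y: dot(w, featurize(y)) + loss(y))
```

In a structured perceptron or structured SVM update, the clustering returned here would play the role of the max-violating structure; the quality of the approximation then depends on how the candidate pool is generated.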

Original language: English
Title of host publication: ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)
Publisher: Association for Computational Linguistics (ACL)
Pages: 1018-1028
Number of pages: 11
Volume: 1
ISBN (Electronic): 9781945626753
DOI: 10.18653/v1/P17-1094
Publication status: Published - 1 Jan 2017
Event: 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017 - Vancouver, Canada
Duration: 30 Jul 2017 - 4 Aug 2017

Other

55th Annual Meeting of the Association for Computational Linguistics, ACL 2017
Country: Canada
City: Vancouver
Period: 30/7/17 - 4/8/17


ASJC Scopus subject areas

  • Language and Linguistics
  • Artificial Intelligence
  • Software
  • Linguistics and Language

Cite this

Haponchyk, I., & Moschitti, A. (2017). Don't understand a measure? Learn it: Structured prediction for coreference resolution optimizing its measures. In ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 1, pp. 1018-1028). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/P17-1094
