Selectional preferences for semantic role classification

Beñat Zapirain, Eneko Agirre, Lluís Màrquez, Mihai Surdeanu

Research output: Contribution to journal › Article

13 Citations (Scopus)

Abstract

This paper focuses on a well-known open issue in Semantic Role Classification (SRC) research: the limited influence and sparseness of lexical features. We mitigate this problem using models that integrate automatically learned selectional preferences (SP). We explore a range of models based on WordNet and distributional-similarity SPs. Furthermore, we demonstrate that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than verbs alone. Our experiments with SP-based models in isolation indicate that they outperform a lexical baseline by 20 F1 points in domain and by almost 40 F1 points out of domain. Furthermore, we show that a state-of-the-art SRC system extended with features based on selectional preferences performs significantly better, both in domain (17% error reduction) and out of domain (13% error reduction). Finally, we show that in an end-to-end semantic role labeling system we obtain small but statistically significant improvements, even though our modified SRC model affects only approximately 4% of the argument candidates. Our post hoc error analysis indicates that the SP-based features help mostly in situations where syntactic information is either incorrect or insufficient to disambiguate the correct role.
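
The distributional-similarity SP models mentioned in the abstract generalize from the head words observed in a role slot to unseen words via word similarity. As a rough illustration only (not the paper's actual models), the sketch below scores how well a candidate head word fits a hypothetical (predicate, role) slot by frequency-weighted similarity to previously seen head words; the cosine similarity, the co-occurrence vectors, and the counts are all toy assumptions.

# Illustrative sketch only: a similarity-based selectional preference score,
# in the spirit of the distributional-similarity SP models described above.
# All data below (vectors, counts) is hypothetical toy data.
from collections import Counter
import math

def cosine(v1, v2):
    # Cosine similarity between two sparse co-occurrence vectors.
    shared = set(v1) & set(v2)
    dot = sum(v1[f] * v2[f] for f in shared)
    norm1 = math.sqrt(sum(x * x for x in v1.values()))
    norm2 = math.sqrt(sum(x * x for x in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

def sp_score(candidate, seen_heads, vectors):
    # Preference of a (predicate, role) slot for `candidate`: similarity to
    # the head words seen filling that slot, weighted by their frequency.
    total = sum(seen_heads.values())
    score = 0.0
    for head, freq in seen_heads.items():
        if candidate in vectors and head in vectors:
            score += (freq / total) * cosine(vectors[candidate], vectors[head])
    return score

# Toy usage: does the Arg0 slot of "eat" prefer "cat" or "rock"?
vectors = {
    "cat":  Counter({"purr": 3, "feed": 5, "pet": 2}),
    "dog":  Counter({"bark": 4, "feed": 6, "pet": 3}),
    "man":  Counter({"walk": 5, "feed": 2, "talk": 4}),
    "rock": Counter({"throw": 5, "hard": 3}),
}
seen_arg0_eat = Counter({"dog": 10, "man": 7})   # hypothetical training counts
print(sp_score("cat", seen_arg0_eat, vectors))   # noticeably above zero
print(sp_score("rock", seen_arg0_eat, vectors))  # 0.0 (no shared contexts)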

Original language: English
Pages (from-to): 631-663
Number of pages: 33
Journal: Computational Linguistics
Volume: 39
Issue number: 3
DOI: 10.1162/COLI_a_00145
Publication status: Published - 1 Sep 2013
Externally published: Yes

ASJC Scopus subject areas

  • Computer Science Applications
  • Artificial Intelligence
  • Linguistics and Language
  • Language and Linguistics

Cite this

Selectional preferences for semantic role classification. / Zapirain, Beñat; Agirre, Eneko; Màrquez, Lluís; Surdeanu, Mihai.

In: Computational Linguistics, Vol. 39, No. 3, 01.09.2013, p. 631-663.

Research output: Contribution to journal › Article

Zapirain, Beñat ; Agirre, Eneko ; Màrquez, Lluís ; Surdeanu, Mihai. / Selectional preferences for semantic role classification. In: Computational Linguistics. 2013 ; Vol. 39, No. 3. pp. 631-663.
@article{186777713c2e4060859ee6de233e70ef,
title = "Selectional preferences for semantic role classification",
abstract = "This paper focuses on a well-known open issue in Semantic Role Classification (SRC) research: the limited influence and sparseness of lexical features. We mitigate this problem using models that integrate automatically learned selectional preferences (SP). We explore a range of models based on WordNet and distributional-similarity SPs. Furthermore, we demonstrate that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than verbs alone. Our experiments with SP-based models in isolation indicate that they outperform a lexical baseline by 20 F1 points in domain and by almost 40 F1 points out of domain. Furthermore, we show that a state-of-the-art SRC system extended with features based on selectional preferences performs significantly better, both in domain (17{\%} error reduction) and out of domain (13{\%} error reduction). Finally, we show that in an end-to-end semantic role labeling system we obtain small but statistically significant improvements, even though our modified SRC model affects only approximately 4{\%} of the argument candidates. Our post hoc error analysis indicates that the SP-based features help mostly in situations where syntactic information is either incorrect or insufficient to disambiguate the correct role.",
author = "Beñat Zapirain and Eneko Agirre and Lluís Màrquez and Mihai Surdeanu",
year = "2013",
month = "9",
day = "1",
doi = "10.1162/COLI_a_00145",
language = "English",
volume = "39",
pages = "631--663",
journal = "Computational Linguistics",
issn = "0891-2017",
publisher = "MIT Press Journals",
number = "3",

}

TY - JOUR

T1 - Selectional preferences for semantic role classification

AU - Zapirain, Beñat

AU - Agirre, Eneko

AU - Màrquez, Lluís

AU - Surdeanu, Mihai

PY - 2013/9/1

Y1 - 2013/9/1

N2 - This paper focuses on a well-known open issue in Semantic Role Classification (SRC) research: the limited influence and sparseness of lexical features. We mitigate this problem using models that integrate automatically learned selectional preferences (SP). We explore a range of models based on WordNet and distributional-similarity SPs. Furthermore, we demonstrate that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than verbs alone. Our experiments with SP-based models in isolation indicate that they outperform a lexical baseline by 20 F1 points in domain and by almost 40 F1 points out of domain. Furthermore, we show that a state-of-the-art SRC system extended with features based on selectional preferences performs significantly better, both in domain (17% error reduction) and out of domain (13% error reduction). Finally, we show that in an end-to-end semantic role labeling system we obtain small but statistically significant improvements, even though our modified SRC model affects only approximately 4% of the argument candidates. Our post hoc error analysis indicates that the SP-based features help mostly in situations where syntactic information is either incorrect or insufficient to disambiguate the correct role.

AB - This paper focuses on a well-known open issue in Semantic Role Classification (SRC) research: the limited influence and sparseness of lexical features. We mitigate this problem using models that integrate automatically learned selectional preferences (SP). We explore a range of models based on WordNet and distributional-similarity SPs. Furthermore, we demonstrate that the SRC task is better modeled by SP models centered on both verbs and prepositions, rather than verbs alone. Our experiments with SP-based models in isolation indicate that they outperform a lexical baseline by 20 F1 points in domain and by almost 40 F1 points out of domain. Furthermore, we show that a state-of-the-art SRC system extended with features based on selectional preferences performs significantly better, both in domain (17% error reduction) and out of domain (13% error reduction). Finally, we show that in an end-to-end semantic role labeling system we obtain small but statistically significant improvements, even though our modified SRC model affects only approximately 4% of the argument candidates. Our post hoc error analysis indicates that the SP-based features help mostly in situations where syntactic information is either incorrect or insufficient to disambiguate the correct role.

UR - http://www.scopus.com/inward/record.url?scp=84881176433&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84881176433&partnerID=8YFLogxK

U2 - 10.1162/COLI_a_00145

DO - 10.1162/COLI_a_00145

M3 - Article

VL - 39

SP - 631

EP - 663

JO - Computational Linguistics

JF - Computational Linguistics

SN - 0891-2017

IS - 3

ER -