Shallow and deep syntactic/semantic structures for passage reranking in question-answering systems

Kateryna Tymoshenko, Alessandro Moschitti

Research output: Contribution to journal › Article

Abstract

In this article, we extensively study the use of syntactic and semantic structures obtained with shallow and full syntactic parsers for answer passage reranking. We propose several dependency- and constituent-based structures, also enriched with Linked Open Data (LD) knowledge, to represent pairs of questions and answer passages. We encode such tree structures in learning-to-rank (L2R) algorithms using tree kernels, which can project them into tree substructure spaces, where each dimension represents a powerful syntactic/semantic feature. Additionally, since we define links between question and passage structures, our tree kernel spaces also include relational structural features. We carried out an extensive comparative evaluation of our models on automatic answer selection benchmarks built from different TREC QA corpora as well as on the newer Wikipedia-based dataset, WikiQA, which has been widely used to test sentence rerankers. The results consistently demonstrate that our structural semantic models achieve the state of the art in passage reranking. In particular, we derived the following important findings: (i) relational syntactic structures are essential to achieve superior results; (ii) models trained with dependency trees can outperform those trained with shallow trees, e.g., in the case of sentence reranking; (iii) external knowledge automatically generated with focus and question classifiers is very effective; and (iv) the semantic information derived from LD and incorporated in syntactic structures can replace the knowledge provided by the above-mentioned classifiers. This is a remarkable advantage, as it enables our models to increase coverage and portability over new domains.
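
The abstract describes the overall approach rather than an implementation, so the following is a minimal, hypothetical sketch of the core idea: question/answer-passage pairs are turned into shallow trees, lexically matching nodes are marked with a relational REL tag, and a convolution-style tree kernel counts shared substructures, producing similarity scores that a kernel-based learning-to-rank model (e.g., SVM preference reranking) could use. The node labels, the REL-tagging heuristic, and the simplified recursive kernel below are illustrative assumptions, not the authors' actual structures or kernel functions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)


def shallow_tree(tokens, pos_tags):
    # Flat shallow tree: S -> (POS -> word) for every token (tags assumed given).
    return Node("S", [Node(pos, [Node(tok.lower())])
                      for tok, pos in zip(tokens, pos_tags)])


def mark_relations(q_tree, p_tree):
    # Prefix "REL-" to preterminals whose word occurs in both trees: a crude
    # stand-in for the question/passage links mentioned in the abstract.
    q_words = {pre.children[0].label for pre in q_tree.children}
    p_words = {pre.children[0].label for pre in p_tree.children}
    shared = q_words & p_words
    for tree in (q_tree, p_tree):
        for pre in tree.children:
            if pre.children[0].label in shared:
                pre.label = "REL-" + pre.label
    return q_tree, p_tree


def delta(n1, n2, decay=0.4):
    # Decayed count of matching substructures rooted at n1 and n2
    # (a simplified convolution-kernel recursion, not the exact SST kernel).
    if n1.label != n2.label or len(n1.children) != len(n2.children):
        return 0.0
    if not n1.children:  # matching leaves
        return decay
    score = decay
    for c1, c2 in zip(n1.children, n2.children):
        score *= 1.0 + delta(c1, c2, decay)
    return score


def tree_kernel(t1, t2):
    # Sum delta over all node pairs of the two trees.
    def nodes(t):
        stack, out = [t], []
        while stack:
            n = stack.pop()
            out.append(n)
            stack.extend(n.children)
        return out
    return sum(delta(a, b) for a in nodes(t1) for b in nodes(t2))


if __name__ == "__main__":
    q = shallow_tree(["who", "invented", "the", "telephone"],
                     ["WP", "VBD", "DT", "NN"])
    p = shallow_tree(["Bell", "invented", "the", "telephone", "in", "1876"],
                     ["NNP", "VBD", "DT", "NN", "IN", "CD"])
    q, p = mark_relations(q, p)
    # A kernel machine (e.g., SVM-based preference reranking) would consume
    # such kernel values between question/passage pairs to rank candidates.
    print("K(question, passage) =", tree_kernel(q, p))

In a full system, such kernels would be computed between the question and each candidate passage, over richer constituency or dependency trees (possibly augmented with LD-derived types, as in the article), with the L2R machinery learning how to weight the induced substructure features.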

Original language: English
Article number: 8
Journal: ACM Transactions on Information Systems
Volume: 37
Issue number: 1
DOIs: https://doi.org/10.1145/3233772
Publication status: Published - 1 Jan 2019

Fingerprint

Syntactics
Semantics
Classifiers
Question answering
Reranking

Keywords

  • Kernel methods
  • Learning to rank
  • Linked data
  • Question answering
  • Structural kernels

ASJC Scopus subject areas

  • Information Systems
  • Business, Management and Accounting (all)
  • Computer Science Applications

Cite this

Shallow and deep syntactic/semantic structures for passage reranking in question-answering systems. / Tymoshenko, Kateryna; Moschitti, Alessandro.

In: ACM Transactions on Information Systems, Vol. 37, No. 1, Article 8, 01.01.2019.

Research output: Contribution to journal › Article

@article{105d9108e2cb47aaba75a94091dd6ccc,
title = "Shallow and deep syntactic/semantic structures for passage reranking in question-answering systems",
keywords = "Kernel methods, Learning to rank, Linked data, Question answering, Structural kernels",
author = "Kateryna Tymoshenko and Alessandro Moschitti",
year = "2019",
month = "1",
day = "1",
doi = "10.1145/3233772",
language = "English",
volume = "37",
journal = "ACM Transactions on Information Systems",
issn = "1046-8188",
publisher = "Association for Computing Machinery (ACM)",
number = "1",

}

TY - JOUR

T1 - Shallow and deep syntactic/semantic structures for passage reranking in question-answering systems

AU - Tymoshenko, Kateryna

AU - Moschitti, Alessandro

PY - 2019/1/1

Y1 - 2019/1/1


KW - Kernel methods

KW - Learning to rank

KW - Linked data

KW - Question answering

KW - Structural kernels

UR - http://www.scopus.com/inward/record.url?scp=85061246074&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85061246074&partnerID=8YFLogxK

U2 - 10.1145/3233772

DO - 10.1145/3233772

M3 - Article

VL - 37

JO - ACM Transactions on Information Systems

JF - ACM Transactions on Information Systems

SN - 1046-8188

IS - 1

M1 - 8

ER -