Selecting sentences versus selecting tree constituents for automatic question ranking

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

Community question answering (cQA) websites are focused on users who post questions on an online forum, expecting other users to provide answers or suggestions. Unlike in other social media, the posted queries have no length limit and tend to be multi-sentence elaborations combining context, actual questions, and irrelevant information. We approach the problem of question ranking: given a user's new question, retrieve previously posted questions that could be equivalent or highly relevant. This could prevent the posting of nearly-duplicate questions and provide the user with instantaneous answers. For the first time in cQA, we address the selection of relevant text, both at the sentence and at the constituent level, for parse-tree-based representations. Our supervised models for text selection boost the performance of a tree kernel-based machine learning model, allowing it to overtake the current state of the art on a recently released cQA evaluation framework.
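
As an illustration of the ranking setup described in the abstract, the following is a minimal Python sketch, not the authors' implementation: it ranks previously posted forum questions against a new question using a toy common-subtree-count kernel over constituency parses, as a stand-in for the richer tree kernels and supervised text selection used in the paper. The function names and the tuple-based tree encoding are assumptions made for this example.

# Minimal illustrative sketch (assumed names and tree encoding), not the paper's system.
from collections import Counter


def subtrees(tree):
    # Yield every subtree of a (label, child, child, ...) tuple, leaves (strings) included.
    yield tree
    if isinstance(tree, tuple):
        for child in tree[1:]:
            yield from subtrees(child)


def tree_kernel(t1, t2):
    # Toy kernel: count of identical subtrees shared by the two parses.
    c1 = Counter(map(repr, subtrees(t1)))
    c2 = Counter(map(repr, subtrees(t2)))
    return sum(min(c1[s], c2[s]) for s in c1.keys() & c2.keys())


def rank_questions(new_tree, forum_trees):
    # Return (index, score) pairs of previously posted questions, most similar first.
    scores = [(i, tree_kernel(new_tree, t)) for i, t in enumerate(forum_trees)]
    return sorted(scores, key=lambda p: p[1], reverse=True)


if __name__ == "__main__":
    # Hypothetical toy parses of short questions; real parses would come from a constituency parser.
    q_new = ("S", ("NP", ("PRP", "I")), ("VP", ("VBP", "need"), ("NP", ("NN", "visa"))))
    q_a = ("S", ("NP", ("PRP", "I")), ("VP", ("VBP", "want"), ("NP", ("NN", "visa"))))
    q_b = ("S", ("NP", ("NN", "car")), ("VP", ("VBZ", "breaks")))
    print(rank_questions(q_new, [q_a, q_b]))  # q_a ranks above q_b
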

Original language: English
Title of host publication: COLING 2016 - 26th International Conference on Computational Linguistics, Proceedings of COLING 2016
Subtitle of host publication: Technical Papers
Publisher: Association for Computational Linguistics, ACL Anthology
Pages: 2515-2525
Number of pages: 11
ISBN (Print): 9784879747020
Publication status: Published - 1 Jan 2016
Event: 26th International Conference on Computational Linguistics, COLING 2016 - Osaka, Japan
Duration: 11 Dec 2016 - 16 Dec 2016

Other

Other: 26th International Conference on Computational Linguistics, COLING 2016
Country: Japan
City: Osaka
Period: 11/12/16 - 16/12/16

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Language and Linguistics
  • Linguistics and Language

Cite this

Barron, A., Martino, G., Romeo, S., & Moschitti, A. (2016). Selecting sentences versus selecting tree constituents for automatic question ranking. In COLING 2016 - 26th International Conference on Computational Linguistics, Proceedings of COLING 2016: Technical Papers (pp. 2515-2525). Association for Computational Linguistics, ACL Anthology.

@inproceedings{81d58b1dbe104754aaa7c361d52e4d3e,
title = "Selecting sentences versus selecting tree constituents for automatic question ranking",
abstract = "Community question answering (cQA) websites are focused on users who post questions on an online forum, expecting other users to provide answers or suggestions. Unlike in other social media, the posted queries have no length limit and tend to be multi-sentence elaborations combining context, actual questions, and irrelevant information. We approach the problem of question ranking: given a user's new question, retrieve previously posted questions that could be equivalent or highly relevant. This could prevent the posting of nearly-duplicate questions and provide the user with instantaneous answers. For the first time in cQA, we address the selection of relevant text, both at the sentence and at the constituent level, for parse-tree-based representations. Our supervised models for text selection boost the performance of a tree kernel-based machine learning model, allowing it to overtake the current state of the art on a recently released cQA evaluation framework.",
author = "Alberto Barron and Giovanni Martino and Salvatore Romeo and Alessandro Moschitti",
year = "2016",
month = "1",
day = "1",
language = "English",
isbn = "9784879747020",
pages = "2515--2525",
booktitle = "COLING 2016 - 26th International Conference on Computational Linguistics, Proceedings of COLING 2016",
publisher = "Association for Computational Linguistics, ACL Anthology",

}

TY - GEN

T1 - Selecting sentences versus selecting tree constituents for automatic question ranking

AU - Barron, Alberto

AU - Martino, Giovanni

AU - Romeo, Salvatore

AU - Moschitti, Alessandro

PY - 2016/1/1

Y1 - 2016/1/1

N2 - Community question answering (cQA) websites are focused on users who post questions on an online forum, expecting other users to provide answers or suggestions. Unlike in other social media, the posted queries have no length limit and tend to be multi-sentence elaborations combining context, actual questions, and irrelevant information. We approach the problem of question ranking: given a user's new question, retrieve previously posted questions that could be equivalent or highly relevant. This could prevent the posting of nearly-duplicate questions and provide the user with instantaneous answers. For the first time in cQA, we address the selection of relevant text, both at the sentence and at the constituent level, for parse-tree-based representations. Our supervised models for text selection boost the performance of a tree kernel-based machine learning model, allowing it to overtake the current state of the art on a recently released cQA evaluation framework.

AB - Community question answering (cQA) websites are focused on users who post questions on an online forum, expecting other users to provide answers or suggestions. Unlike in other social media, the posted queries have no length limit and tend to be multi-sentence elaborations combining context, actual questions, and irrelevant information. We approach the problem of question ranking: given a user's new question, retrieve previously posted questions that could be equivalent or highly relevant. This could prevent the posting of nearly-duplicate questions and provide the user with instantaneous answers. For the first time in cQA, we address the selection of relevant text, both at the sentence and at the constituent level, for parse-tree-based representations. Our supervised models for text selection boost the performance of a tree kernel-based machine learning model, allowing it to overtake the current state of the art on a recently released cQA evaluation framework.

UR - http://www.scopus.com/inward/record.url?scp=85049051494&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85049051494&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:85049051494

SN - 9784879747020

SP - 2515

EP - 2525

BT - COLING 2016 - 26th International Conference on Computational Linguistics, Proceedings of COLING 2016

PB - Association for Computational Linguistics, ACL Anthology

ER -