SemEval-2016 Task 3

Community question answering

Preslav Nakov, Lluís Màrquez, Alessandro Moschitti, Walid Magdy, Hamdy Mubarak, Abed Alhakim Freihat, James Glass, Bilal Randeree

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

83 Citations (Scopus)

Abstract

This paper describes SemEval-2016 Task 3 on Community Question Answering, which we offered in English and Arabic. For English, we had three subtasks: Question-Comment Similarity (subtask A), Question-Question Similarity (subtask B), and Question-External Comment Similarity (subtask C). For Arabic, we had one additional subtask: reranking the correct answers for a new question (subtask D). Eighteen teams participated in the task, submitting a total of 95 runs (38 primary and 57 contrastive) for the four subtasks. The participating systems used a variety of approaches and features to address the different subtasks, which we summarize in this paper. The best systems achieved an official score (MAP) of 79.19, 76.70, 55.41, and 45.83 in subtasks A, B, C, and D, respectively. These scores are significantly better than those of the baselines we provided. For subtask A, the best system improved over the 2015 winner by 3 points absolute in terms of Accuracy.
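The official ranking measure across the subtasks is Mean Average Precision (MAP). As a rough illustration only (this is not the official SemEval-2016 Task 3 scorer, and the labels below are hypothetical), a minimal MAP computation over per-question rankings of binary relevance labels might look like this:

```python
# Minimal sketch of Mean Average Precision (MAP) over ranked answers.
# Not the official SemEval-2016 Task 3 scorer; example data are hypothetical.

def average_precision(ranked_labels):
    """AP for one question: ranked_labels are 0/1 relevance labels of the
    comments, in the order the system ranked them (1 = relevant/'Good')."""
    hits, precision_sum = 0, 0.0
    for rank, label in enumerate(ranked_labels, start=1):
        if label == 1:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(all_rankings):
    """MAP: average of per-question AP values."""
    return sum(average_precision(r) for r in all_rankings) / len(all_rankings)

# Two hypothetical questions with the relevance labels of their ranked comments.
print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1]]))  # ~0.71
```

The scores quoted in the abstract are MAP values expressed on a 0-100 scale, so 79.19 for subtask A corresponds to a MAP of 0.7919.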

Original language: English
Title of host publication: SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings
Publisher: Association for Computational Linguistics (ACL)
Pages: 525-545
Number of pages: 21
ISBN (Electronic): 9781941643952
Publication status: Published - 1 Jan 2016
Event: 10th International Workshop on Semantic Evaluation, SemEval 2016 - San Diego, United States
Duration: 16 Jun 2016 - 17 Jun 2016

Other

Other: 10th International Workshop on Semantic Evaluation, SemEval 2016
Country: United States
City: San Diego
Period: 16/6/16 - 17/6/16

Fingerprint

  • Question Answering
  • Baseline
  • Similarity
  • Community

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computational Theory and Mathematics
  • Computer Science Applications

Cite this

Nakov, P., Màrquez, L., Moschitti, A., Magdy, W., Mubarak, H., Freihat, A. A., ... Randeree, B. (2016). SemEval-2016 Task 3: Community question answering. In SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings (pp. 525-545). Association for Computational Linguistics (ACL).
