KeLP at SemEval-2016 task 3: Learning semantic relations between questions and answers

Simone Filice, Danilo Croce, Alessandro Moschitti, Roberto Basili

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

49 Citations (Scopus)

Abstract

This paper describes the KeLP system participating in the SemEval-2016 Community Question Answering (cQA) task. The challenge tasks are modeled as binary classification problems: kernel-based classifiers are trained on the SemEval datasets, and their prediction scores are used to sort the candidate instances and produce the final ranking. All classifiers and kernels have been implemented within the Kernel-based Learning Platform (KeLP). Our primary submission ranked first in Subtask A, third in Subtask B, and second in Subtask C. These ranks are based on MAP, the official evaluation metric of the challenge. Our approach outperforms all the other systems with respect to all the other challenge metrics.
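To illustrate the classify-then-rank scheme summarized in the abstract, the sketch below trains a binary kernel classifier, sorts the candidate answers of a question by its decision scores, and evaluates the resulting order with average precision (the per-question quantity behind MAP). This is only a minimal Python/scikit-learn approximation, not the authors' Java-based KeLP implementation; the toy features, labels, and the RBF kernel are placeholder assumptions.

# Illustrative sketch only: a generic "classify-then-rank" pipeline in the spirit of
# the KeLP submission. Feature extraction, kernel choice, and data are placeholders.
import numpy as np
from sklearn.svm import SVC

def rank_answers(clf, candidate_features):
    # Sort candidate answers by the classifier's decision score, highest first.
    scores = clf.decision_function(candidate_features)
    order = np.argsort(-scores)
    return order, scores[order]

def average_precision(relevance_in_ranked_order):
    # Average precision for one question, given 0/1 relevance labels in ranked order.
    hits, ap = 0, 0.0
    for i, rel in enumerate(relevance_in_ranked_order, start=1):
        if rel:
            hits += 1
            ap += hits / i
    return ap / max(hits, 1)

# Toy data: rows stand in for (question, answer) pairs represented by precomputed
# features; labels mark whether the answer is relevant (1) or not (0).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))
y_train = (X_train[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)

clf = SVC(kernel="rbf", C=1.0)   # a kernel machine stands in for the KeLP kernel combination
clf.fit(X_train, y_train)

# Rank the candidates of one test question and score the ranking with AP;
# MAP is the mean of AP over all test questions.
X_question = rng.normal(size=(10, 10))
y_question = (X_question[:, 0] > 0).astype(int)
order, _ = rank_answers(clf, X_question)
print("AP for this question:", average_precision(y_question[order]))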

Original language: English
Title of host publication: SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings
Publisher: Association for Computational Linguistics (ACL)
Pages: 1116-1123
Number of pages: 8
ISBN (Electronic): 9781941643952
Publication status: Published - 1 Jan 2016
Event: 10th International Workshop on Semantic Evaluation, SemEval 2016 - San Diego, United States
Duration: 16 Jun 2016 - 17 Jun 2016

Other

Event: 10th International Workshop on Semantic Evaluation, SemEval 2016
Country: United States
City: San Diego
Period: 16/6/16 - 17/6/16

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computational Theory and Mathematics
  • Computer Science Applications

Cite this

Filice, S., Croce, D., Moschitti, A., & Basili, R. (2016). KeLP at SemEval-2016 task 3: Learning semantic relations between questions and answers. In SemEval 2016 - 10th International Workshop on Semantic Evaluation, Proceedings (pp. 1116-1123). Association for Computational Linguistics (ACL).
