Distributional models for lexical semantics: An investigation of different representations for natural language learning

Danilo Croce, Simone Filice, Roberto Basili

Research output: Contribution to journal › Article


Language learning systems usually generalize linguistic observations into rules and patterns that serve as statistical models for higher-level semantic inference. When training data is scarce, lexical information suffers from data sparseness effects, and generalization is therefore needed. Distributional models represent lexical semantic information in terms of basic co-occurrences between words in large-scale text collections. As recent work has shown, the definition of proper distributional models, as well as of methods able to express the meaning of phrases or sentences as operations on lexical representations, is a complex and still largely open problem. In this paper, a perspective centered on Convolution Kernels is discussed, and the formulation of a Partial Tree Kernel that integrates syntactic information and lexical generalization is studied. Moreover, a large-scale investigation of different representation spaces, each capturing a different linguistic relation, is provided.
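The core distributional idea described above can be illustrated with a minimal sketch: build a sparse co-occurrence vector for each word from a toy corpus, then compare words by cosine similarity. This is only an illustrative assumption of the general technique (window-based counts, unweighted), not the specific models or kernel formulation studied in the paper; the corpus and window size are invented for the example.

```python
from collections import defaultdict
import math

def cooccurrence_vectors(corpus, window=2):
    """Represent each word by counts of the words co-occurring
    with it within a fixed-size window (a basic distributional model)."""
    vectors = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[w][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    shared = set(u) & set(v)
    dot = sum(u[k] * v[k] for k in shared)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus (invented for illustration only)
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the dog barked at the mailman",
]
vecs = cooccurrence_vectors(corpus)
print(cosine(vecs["cat"], vecs["dog"]))
```

In practice, such raw counts would be weighted (e.g. with pointwise mutual information) and projected into a lower-dimensional space before being combined with syntactic structure in a kernel.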

Original language: English
Pages (from-to): 115-134
Number of pages: 20
Journal: Studies in Computational Intelligence
Publication status: Published - 2015
Externally published: Yes



Keywords

  • Distributional lexical semantics
  • Kernel methods
  • Question classification

ASJC Scopus subject areas

  • Artificial Intelligence
