Query weighting for ranking model adaptation

Peng Cai, Wei Gao, Aoying Zhou, Kam Fai Wong

Research output: Chapter in Book/Report/Conference proceedingConference contribution

7 Citations (Scopus)

Abstract

We propose to directly measure the importance of queries in the source domain to a target domain where no rank labels of documents are available, a task we refer to as query weighting. Query weighting is a key step in ranking model adaptation. Because the training data of ranking algorithms is partitioned by query, we argue that it is more reasonable to conduct importance weighting at the query level than at the document level. We present two query weighting schemes. The first compresses each query into a query feature vector that aggregates all document instances of that query, and then conducts query weighting based on this vector. This method estimates query importance efficiently, but the compression risks information loss. The second measures the similarity between a source query and each target query, and then combines these fine-grained similarity values to estimate the source query's importance. Adaptation experiments on the LETOR 3.0 dataset demonstrate that query weighting significantly outperforms document instance weighting methods.
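The first scheme in the abstract can be sketched in a few lines: pool each query's document feature matrix into one query feature vector, then score each source query by its similarity to the target queries. This is a minimal illustration, not the paper's exact estimator — mean pooling and max cosine similarity are assumptions chosen for simplicity here.

```python
import numpy as np

def query_feature_vector(doc_features):
    # Compress a query's document-feature matrix (n_docs x n_feats)
    # into a single vector; mean pooling is one simple aggregation choice.
    X = np.asarray(doc_features, dtype=float)
    return X.mean(axis=0)

def cosine(u, v):
    # Cosine similarity, guarding against zero-norm vectors.
    nu, nv = np.linalg.norm(u), np.linalg.norm(v)
    return float(u @ v / (nu * nv)) if nu and nv else 0.0

def query_weights(source_queries, target_queries):
    # Weight each source query by how close its compressed feature
    # vector lies to the target domain's queries; here we take the
    # maximum cosine similarity over target queries as an
    # illustrative proxy for query importance.
    tgt = [query_feature_vector(q) for q in target_queries]
    weights = []
    for q in source_queries:
        v = query_feature_vector(q)
        weights.append(max(cosine(v, t) for t in tgt))
    return weights
```

A source query whose documents look like the target domain's receives a weight near 1, while a dissimilar one is down-weighted; these weights would then rescale each query's contribution to the ranking loss during training.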

Original language: English
Title of host publication: ACL-HLT 2011 - Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies
Pages: 112-122
Number of pages: 11
Volume: 1
Publication status: Published - 1 Dec 2011
Externally published: Yes
Event: 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL-HLT 2011 - Portland, OR, United States
Duration: 19 Jun 2011 - 24 Jun 2011



ASJC Scopus subject areas

  • Language and Linguistics
  • Linguistics and Language

Cite this

Cai, P., Gao, W., Zhou, A., & Wong, K. F. (2011). Query weighting for ranking model adaptation. In ACL-HLT 2011 - Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (Vol. 1, pp. 112-122).

@inproceedings{c4b776f8a8514e1fb78adf542da523cd,
title = "Query weighting for ranking model adaptation",
abstract = "We propose to directly measure the importance of queries in the source domain to the target domain where no rank labels of documents are available, which is referred to as query weighting. Query weighting is a key step in ranking model adaptation. As the learning object of ranking algorithms is divided by query instances, we argue that it's more reasonable to conduct importance weighting at query level than document level. We present two query weighting schemes. The first compresses the query into a query feature vector, which aggregates all document instances in the same query, and then conducts query weighting based on the query feature vector. This method can efficiently estimate query importance by compressing query data, but the potential risk is information loss resulted from the compression. The second measures the similarity between the source query and each target query, and then combines these fine-grained similarity values for its importance estimation. Adaptation experiments on LETOR3.0 data set demonstrate that query weighting significantly outperforms document instance weighting methods.",
author = "Peng Cai and Wei Gao and Aoying Zhou and Wong, {Kam Fai}",
year = "2011",
month = "12",
day = "1",
language = "English",
isbn = "9781932432879",
volume = "1",
pages = "112--122",
booktitle = "ACL-HLT 2011 - Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",

}
