Improving classification accuracy of YouTube videos by exploiting focal points in social tags

Amogh Mahapatra, Komal Kapoor, Ravindra Kasturi, Jaideep Srivastava

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Past literature [1] has shown that problems involving tacit communication among humans and agents are better solved by identifying communication 'focal' points based on domain-specific human biases. Cast differently, classification of user-generated content into generalized categories is the equivalent of automated programs trying to match human-adjudged labels. It therefore seems logical to suspect that identifying and incorporating the features humans generally find salient, i.e., 'focal points', can allow an automated agent to better match human-adjudged labels in classification tasks. In this paper, we leverage this correspondence by using domain-specific focal points to further inform the classification algorithms of the inherent human biases. We empirically evaluate our method by classifying YouTube videos using user-annotated tags. The improvement in classification accuracy over state-of-the-art techniques, obtained with our focal-point-transformed and greatly reduced feature space, reveals the value of the approach in subjective classification tasks.
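The abstract's central idea, projecting a large and noisy tag vocabulary onto a small set of humanly salient 'focal' tags before training a classifier, can be illustrated with a short sketch. The code below is a hypothetical illustration, not the authors' algorithm: the focal vocabulary FOCAL_TAGS, the toy data, and the choice of scikit-learn's multinomial Naive Bayes are all assumptions made for the example.

# Hypothetical sketch of the general idea (not the paper's method):
# restrict a video's user-annotated tags to a small, hand-picked set of
# "focal" tags before classification, collapsing the huge raw tag space
# into a few humanly salient dimensions. FOCAL_TAGS and the toy data
# below are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

FOCAL_TAGS = ["funny", "music", "tutorial", "news", "gaming", "sports"]

# Each video is represented by its space-separated user-annotated tags.
train_tags = [
    "funny cat fail compilation",
    "music live concert rock band",
    "tutorial python programming howto",
]
train_labels = ["Comedy", "Music", "Education"]

# A fixed vocabulary makes the vectorizer ignore every non-focal tag.
vectorizer = CountVectorizer(vocabulary=FOCAL_TAGS)
X_train = vectorizer.transform(train_tags)

clf = MultinomialNB().fit(X_train, train_labels)

X_test = vectorizer.transform(["funny prank gaming stream"])
print(clf.predict(X_test))  # expected: ['Comedy']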

Original language: English
Title of host publication: Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013
DOIs: 10.1109/ICMEW.2013.6618382
Publication status: Published - 2013
Externally published: Yes
Event: 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013 - San Jose, CA
Duration: 15 Jul 2013 - 19 Jul 2013

Other

Other: 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013
City: San Jose, CA
Period: 15/7/13 - 19/7/13

Keywords

  • Classification
  • Focal Points
  • Human Salience

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Computer Vision and Pattern Recognition

Cite this

Mahapatra, A., Kapoor, K., Kasturi, R., & Srivastava, J. (2013). Improving classification accuracy of YouTube videos by exploiting focal points in social tags. In Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013 [6618382]. https://doi.org/10.1109/ICMEW.2013.6618382

Improving classification accuracy of YouTube videos by exploiting focal points in social tags. / Mahapatra, Amogh; Kapoor, Komal; Kasturi, Ravindra; Srivastava, Jaideep.

Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013. 2013. 6618382.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Mahapatra, A, Kapoor, K, Kasturi, R & Srivastava, J 2013, Improving classification accuracy of YouTube videos by exploiting focal points in social tags. in Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013., 6618382, 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013, San Jose, CA, 15/7/13. https://doi.org/10.1109/ICMEW.2013.6618382
Mahapatra A, Kapoor K, Kasturi R, Srivastava J. Improving classification accuracy of YouTube videos by exploiting focal points in social tags. In Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013. 2013. 6618382. https://doi.org/10.1109/ICMEW.2013.6618382
Mahapatra, Amogh ; Kapoor, Komal ; Kasturi, Ravindra ; Srivastava, Jaideep. / Improving classification accuracy of YouTube videos by exploiting focal points in social tags. Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013. 2013.
@inproceedings{0e82748d690c40f8a03eb0bcddfcae5f,
title = "Improving classification accuracy of {YouTube} videos by exploiting focal points in social tags",
abstract = "Past literature [1] has shown that problems involving tacit communication among humans and agents are better solved by identifying communication 'focal' points based on domain-specific human biases. Cast differently, classification of user-generated content into generalized categories is the equivalent of automated programs trying to match human-adjudged labels. It therefore seems logical to suspect that identifying and incorporating the features humans generally find salient, i.e., 'focal points', can allow an automated agent to better match human-adjudged labels in classification tasks. In this paper, we leverage this correspondence by using domain-specific focal points to further inform the classification algorithms of the inherent human biases. We empirically evaluate our method by classifying YouTube videos using user-annotated tags. The improvement in classification accuracy over state-of-the-art techniques, obtained with our focal-point-transformed and greatly reduced feature space, reveals the value of the approach in subjective classification tasks.",
keywords = "Classification, Focal Points, Human Salience",
author = "Amogh Mahapatra and Komal Kapoor and Ravindra Kasturi and Jaideep Srivastava",
year = "2013",
doi = "10.1109/ICMEW.2013.6618382",
language = "English",
isbn = "9781479916047",
booktitle = "Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013",

}

TY - GEN

T1 - Improving classification accuracy of YouTube videos by exploiting focal points in social tags

AU - Mahapatra, Amogh

AU - Kapoor, Komal

AU - Kasturi, Ravindra

AU - Srivastava, Jaideep

PY - 2013

Y1 - 2013

N2 - Past literature [1] has shown that problems involving tacit communication among humans and agents are better solved by identifying communication 'focal' points based on domain-specific human biases. Cast differently, classification of user-generated content into generalized categories is the equivalent of automated programs trying to match human-adjudged labels. It therefore seems logical to suspect that identifying and incorporating the features humans generally find salient, i.e., 'focal points', can allow an automated agent to better match human-adjudged labels in classification tasks. In this paper, we leverage this correspondence by using domain-specific focal points to further inform the classification algorithms of the inherent human biases. We empirically evaluate our method by classifying YouTube videos using user-annotated tags. The improvement in classification accuracy over state-of-the-art techniques, obtained with our focal-point-transformed and greatly reduced feature space, reveals the value of the approach in subjective classification tasks.

AB - Past literature [1] has shown that problems involving tacit communication among humans and agents are better solved by identifying communication 'focal' points based on domain-specific human biases. Cast differently, classification of user-generated content into generalized categories is the equivalent of automated programs trying to match human-adjudged labels. It therefore seems logical to suspect that identifying and incorporating the features humans generally find salient, i.e., 'focal points', can allow an automated agent to better match human-adjudged labels in classification tasks. In this paper, we leverage this correspondence by using domain-specific focal points to further inform the classification algorithms of the inherent human biases. We empirically evaluate our method by classifying YouTube videos using user-annotated tags. The improvement in classification accuracy over state-of-the-art techniques, obtained with our focal-point-transformed and greatly reduced feature space, reveals the value of the approach in subjective classification tasks.

KW - Classification

KW - Focal Points

KW - Human Salience

UR - http://www.scopus.com/inward/record.url?scp=84888224448&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84888224448&partnerID=8YFLogxK

U2 - 10.1109/ICMEW.2013.6618382

DO - 10.1109/ICMEW.2013.6618382

M3 - Conference contribution

SN - 9781479916047

BT - Electronic Proceedings of the 2013 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2013

ER -