Data-driven Evaluation of Visual Quality Measures

M. Sedlmair, M. Aupetit

Research output: Contribution to journal › Article

22 Citations (Scopus)

Abstract

Visual quality measures seek to algorithmically imitate human judgments of patterns such as class separability, correlation, or outliers. In this paper, we propose a novel data-driven framework for evaluating such measures. The basic idea is to take a large set of visually encoded data, such as scatterplots, with reliable human "ground truth" judgments, and to use this human-labeled data to learn how well a measure would predict human judgments on previously unseen data. Measures can then be evaluated based on predictive performance, an approach that is crucial for generalizing across datasets but has gained little attention so far. To illustrate our framework, we use it to evaluate 15 state-of-the-art class separation measures, using human ground truth data from 828 class separation judgments on color-coded 2D scatterplots.
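The evaluation idea described above can be sketched in code. The measure below (centroid distance), the synthetic scatterplots, and the threshold-based predictor are illustrative stand-ins, not the paper's actual 15 measures or its learning procedure: score each human-labeled scatterplot with the measure, learn a decision rule on part of the plots, and judge the measure by how well that rule predicts human judgments on held-out plots.

```python
import random

def centroid_distance(points, labels):
    """Toy class-separation measure (a stand-in for the real measures):
    distance between the centroids of the two color classes."""
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}
    counts = {0: 0, 1: 0}
    for (x, y), c in zip(points, labels):
        sums[c][0] += x
        sums[c][1] += y
        counts[c] += 1
    (ax, ay), (bx, by) = ((sums[c][0] / counts[c], sums[c][1] / counts[c])
                          for c in (0, 1))
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def make_plot(separated, rng):
    """Synthetic color-coded 2D scatterplot with a human-style
    binary judgment (1 = classes look separable)."""
    off = 3.0 if separated else 0.3
    pts = ([(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(30)] +
           [(rng.gauss(off, 1), rng.gauss(off, 1)) for _ in range(30)])
    return pts, [0] * 30 + [1] * 30, int(separated)

def best_threshold(scores, truths):
    """Learn the score threshold that best reproduces the human labels
    on the training plots."""
    return max(scores,
               key=lambda t: sum((s >= t) == y for s, y in zip(scores, truths)))

rng = random.Random(0)
plots = [make_plot(i % 2 == 0, rng) for i in range(40)]
scores = [centroid_distance(pts, labels) for pts, labels, _ in plots]
truth = [judgment for _, _, judgment in plots]

# Hold out half the plots: fit the threshold on one half, then report
# predictive accuracy against human judgments on the unseen half.
thr = best_threshold(scores[:20], truth[:20])
acc = sum((s >= thr) == y for s, y in zip(scores[20:], truth[20:])) / 20
print(f"held-out accuracy: {acc:.2f}")
```

In the paper's framework this held-out accuracy, rather than agreement on the training data, is what ranks competing measures.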

Original language: English
Pages (from-to): 201-210
Number of pages: 10
Journal: Computer Graphics Forum
Volume: 34
Issue number: 3
DOIs: 10.1111/cgf.12632
Publication status: Published - 1 Jun 2015

Keywords

  • H.5.0 [Information Interfaces and Presentation]: General

ASJC Scopus subject areas

  • Computer Networks and Communications

Cite this

@article{4affa3b0bb3946f68047d161fd93e66f,
title = "Data-driven Evaluation of Visual Quality Measures",
keywords = "H.5.0 [Information Interfaces and Presentation]: General",
author = "M. Sedlmair and M. Aupetit",
year = "2015",
month = "6",
day = "1",
doi = "10.1111/cgf.12632",
language = "English",
volume = "34",
pages = "201--210",
journal = "Computer Graphics Forum",
issn = "0167-7055",
publisher = "Wiley-Blackwell",
number = "3",

}
