An efficient adversarial learning strategy for constructing robust classification boundaries

Wei Liu, Sanjay Chawla, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

4 Citations (Scopus)

Abstract

Traditional classification methods assume that the training and the test data arise from the same underlying distribution. However, in some adversarial settings the test set can be deliberately constructed to increase the error rate of a classifier. A prominent example is email spam, where words are transformed to evade word-based features embedded in a spam filter. Recent research has modeled the interaction between a data miner and an adversary as a sequential Stackelberg game, and solved for its Nash equilibrium to build classifiers that are more robust to subsequent manipulations of training data sets. In this paper, however, we argue that the iterative algorithm used in the Stackelberg game, which solves an optimization problem at each step of play, is sufficient but not necessary for achieving Nash equilibria in classification problems. Instead, we propose a method that transforms the singular vectors of a training data matrix to simulate manipulations by an adversary, from which perspective a Nash equilibrium can be obtained by solving a novel optimization problem only once. We show that, compared with the iterative algorithm used in recent literature, our one-step game significantly reduces computing time while still producing good Nash equilibrium results.
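The central idea of the abstract, namely simulating an adversary by perturbing the training data along its top singular vector and then fitting the classifier once against that simulated manipulation, can be sketched on toy data. Everything below is an assumption of this illustration, not the paper's actual formulation: the Gaussian "spam/ham" matrix, the least-squares learner, and the step size `alpha` are all hypothetical, and the paper solves a dedicated optimization problem rather than a plain refit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (hypothetical stand-in for a spam/ham feature matrix).
X_pos = rng.normal(loc=+1.0, scale=0.5, size=(50, 5))  # "spam" class, label +1
X_neg = rng.normal(loc=-1.0, scale=0.5, size=(50, 5))  # "ham" class, label -1
X = np.vstack([X_pos, X_neg])
y = np.concatenate([np.ones(50), -np.ones(50)])

# Top right-singular vector of the training matrix: the dominant
# direction of variation in feature space.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
v1 = Vt[0]

# Simulate the adversary by shifting the positive class along v1, toward
# the negative class.  The step size alpha is an assumption of this
# sketch, not a quantity from the paper.
d = X_pos.mean(axis=0) - X_neg.mean(axis=0)
alpha = 3.0
X_attacked = X.copy()
X_attacked[y == 1] -= alpha * np.sign(v1 @ d) * v1

def fit_linear(X, y):
    # Least-squares linear classifier, standing in for the paper's learner.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def accuracy(w, X, y):
    return float(np.mean(np.sign(X @ w) == y))

w_naive = fit_linear(X, y)            # trained on clean data only
w_robust = fit_linear(X_attacked, y)  # trained once against the simulated attack

acc_clean = accuracy(w_naive, X, y)
acc_attacked = accuracy(w_naive, X_attacked, y)
acc_robust = accuracy(w_robust, X_attacked, y)
```

On this toy data the naive classifier's accuracy drops sharply under the simulated singular-vector shift, while the classifier fitted once against the shifted data largely recovers; the paper's actual equilibrium computation is, of course, more involved than this single refit.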

Original language: English
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 649-660
Number of pages: 12
Volume: 7691 LNAI
DOIs: 10.1007/978-3-642-35101-3_55
Publication status: Published - 2012
Externally published: Yes
Event: 25th Australasian Joint Conference on Artificial Intelligence, AI 2012 - Sydney, NSW
Duration: 4 Dec 2012 - 7 Dec 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7691 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 25th Australasian Joint Conference on Artificial Intelligence, AI 2012
City: Sydney, NSW
Period: 4/12/12 - 7/12/12

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Liu, W., Chawla, S., Bailey, J., Leckie, C., & Ramamohanarao, K. (2012). An efficient adversarial learning strategy for constructing robust classification boundaries. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7691 LNAI, pp. 649-660). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 7691 LNAI). https://doi.org/10.1007/978-3-642-35101-3_55

@inproceedings{5f8d809b4b0f484dbdc9264f5326eb03,
title = "An efficient adversarial learning strategy for constructing robust classification boundaries",
abstract = "Traditional classification methods assume that the training and the test data arise from the same underlying distribution. However in some adversarial settings, the test set can be deliberately constructed in order to increase the error rates of a classifier. A prominent example is email spam where words are transformed to avoid word-based features embedded in a spam filter. Recent research has modeled interactions between a data miner and an adversary as a sequential Stackelberg game, and solved its Nash equilibrium to build classifiers that are more robust to subsequent manipulations on training data sets. However in this paper we argue that the iterative algorithm used in the Stackelberg game, which solves an optimization problem at each step of play, is sufficient but not necessary for achieving Nash equilibria in classification problems. Instead, we propose a method that transforms singular vectors of a training data matrix to simulate manipulations by an adversary, and from that perspective a Nash equilibrium can be obtained by solving a novel optimization problem only once. We show that compared with the iterative algorithm used in recent literature, our one-step game significantly reduces computing time while still being able to produce good Nash equilibria results.",
author = "Wei Liu and Sanjay Chawla and James Bailey and Christopher Leckie and Kotagiri Ramamohanarao",
year = "2012",
doi = "10.1007/978-3-642-35101-3_55",
language = "English",
isbn = "9783642351006",
volume = "7691 LNAI",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "649--660",
booktitle = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",

}

TY - GEN

T1 - An efficient adversarial learning strategy for constructing robust classification boundaries

AU - Liu, Wei

AU - Chawla, Sanjay

AU - Bailey, James

AU - Leckie, Christopher

AU - Ramamohanarao, Kotagiri

PY - 2012

Y1 - 2012

N2 - Traditional classification methods assume that the training and the test data arise from the same underlying distribution. However in some adversarial settings, the test set can be deliberately constructed in order to increase the error rates of a classifier. A prominent example is email spam where words are transformed to avoid word-based features embedded in a spam filter. Recent research has modeled interactions between a data miner and an adversary as a sequential Stackelberg game, and solved its Nash equilibrium to build classifiers that are more robust to subsequent manipulations on training data sets. However in this paper we argue that the iterative algorithm used in the Stackelberg game, which solves an optimization problem at each step of play, is sufficient but not necessary for achieving Nash equilibria in classification problems. Instead, we propose a method that transforms singular vectors of a training data matrix to simulate manipulations by an adversary, and from that perspective a Nash equilibrium can be obtained by solving a novel optimization problem only once. We show that compared with the iterative algorithm used in recent literature, our one-step game significantly reduces computing time while still being able to produce good Nash equilibria results.

UR - http://www.scopus.com/inward/record.url?scp=84871396255&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84871396255&partnerID=8YFLogxK

U2 - 10.1007/978-3-642-35101-3_55

DO - 10.1007/978-3-642-35101-3_55

M3 - Conference contribution

SN - 9783642351006

VL - 7691 LNAI

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 649

EP - 660

BT - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

ER -