Sparse Feature Attacks in Adversarial Learning

Zhizhou Yin, Fei Wang, Wei Liu, Sanjay Chawla

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

Adversarial learning is the study of machine learning techniques deployed in non-benign environments. Example applications include spam classification, network intrusion detection, and credit card scoring. In fact, as the use of machine learning grows in diverse application domains, the possibility for adversarial behaviour is likely to increase. When adversarial learning is modelled in a game-theoretic setup, the standard assumption about the adversary (player) is the ability to change all features used by the classifier (the opponent player) at will. The adversary pays a cost proportional to the size of the "attack". We refer to this form of adversarial behaviour as a dense feature attack.
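
The dense-versus-sparse distinction in the abstract lends itself to a small worked example. The sketch below is illustrative only and is not the authors' algorithm: it assumes a hypothetical linear classifier with weights w and bias b, and compares the cheapest boundary-reaching perturbation under an l2 cost (which spreads across every feature, a dense feature attack) with the cheapest one under an l1 cost (which, in the spirit of the l1 regularizer listed among the keywords, concentrates on a single feature, a sparse feature attack).

# Illustrative sketch only (not the paper's algorithm). Minimum-cost
# attacks on a hypothetical linear classifier f(x) = sign(w @ x + b):
# an l2-cost adversary perturbs every feature a little (dense attack),
# while an l1-cost adversary perturbs one feature (sparse attack).
import numpy as np

def dense_attack(w, b, x):
    # Smallest l2-norm delta with w @ (x + delta) + b = 0:
    # delta = -(w @ x + b) * w / ||w||_2^2, nonzero in every feature.
    margin = w @ x + b
    return -margin * w / np.dot(w, w)

def sparse_attack(w, b, x):
    # Smallest l1-norm delta reaching the boundary: put all of the
    # change on the feature with the largest |w_i|.
    margin = w @ x + b
    i = int(np.argmax(np.abs(w)))
    delta = np.zeros_like(w)
    delta[i] = -margin / w[i]
    return delta

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1   # hypothetical classifier weights/bias
x = rng.normal(size=5)           # point the adversary wants to move

for name, attack in (("dense", dense_attack), ("sparse", sparse_attack)):
    d = attack(w, b, x)
    print(name,
          "| features changed:", int(np.count_nonzero(d)),
          "| l1 cost:", round(float(np.abs(d).sum()), 3),
          "| on boundary:", bool(np.isclose(w @ (x + d) + b, 0.0)))

Both attacks land exactly on the decision boundary, but the sparse one changes a single feature instead of all five; in a Stackelberg or Nash formulation, the adversary's cost term determines which of the two behaviours is preferred.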

Original language: English
Journal: IEEE Transactions on Knowledge and Data Engineering
DOI: 10.1109/TKDE.2018.2790928
Publication status: Accepted/In press - 6 Jan 2018

Fingerprint

  • Learning systems
  • Intrusion detection
  • Classifiers
  • Costs

Keywords

  • Adversarial learning
  • Data models
  • Electronic mail
  • Game theory
  • Games
  • l1 regularizer
  • Nash equilibrium
  • Robustness
  • Sparse modelling
  • Stackelberg game
  • Transforms

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics

Cite this

Sparse Feature Attacks in Adversarial Learning. / Yin, Zhizhou; Wang, Fei; Liu, Wei; Chawla, Sanjay.

In: IEEE Transactions on Knowledge and Data Engineering, 06.01.2018.

Research output: Contribution to journal › Article

@article{9612eb042bcc45319d3c5dec0b137766,
title = "Sparse Feature Attacks in Adversarial Learning",
abstract = "Adversarial learning is the study of machine learning techniques deployed in non-benign environments. Example applications include spam classification, network intrusion detection, and credit card scoring. In fact, as the use of machine learning grows in diverse application domains, the possibility for adversarial behaviour is likely to increase. When adversarial learning is modelled in a game-theoretic setup, the standard assumption about the adversary (player) is the ability to change all features used by the classifier (the opponent player) at will. The adversary pays a cost proportional to the size of the {"}attack{"}. We refer to this form of adversarial behaviour as a dense feature attack.",
keywords = "Adversarial learning, Data models, Electronic mail, Game theory, Games, l1 regularizer, Nash equilibrium, Robustness, Sparse modelling, Stackelberg game, Transforms",
author = "Zhizhou Yin and Fei Wang and Wei Liu and Sanjay Chawla",
year = "2018",
month = "1",
day = "6",
doi = "10.1109/TKDE.2018.2790928",
language = "English",
journal = "IEEE Transactions on Knowledge and Data Engineering",
issn = "1041-4347",
publisher = "IEEE Computer Society",
}
