Sparse Feature Attacks in Adversarial Learning

Zhizhou Yin, Fei Wang, Wei Liu, Sanjay Chawla

Research output: Contribution to journal › Article

8 Citations (Scopus)

Abstract

Adversarial learning is the study of machine learning techniques deployed in non-benign environments. Example applications include spam detection, network intrusion detection, and credit card scoring. Indeed, as machine learning spreads into more application domains, the scope for adversarial behaviour is likely to grow. When adversarial learning is modelled in a game-theoretic setup, the standard assumption about the adversary (one player) is that it can change, at will, all features used by the classifier (the opposing player), paying a cost proportional to the size of the "attack". We refer to this form of adversarial behaviour as a dense feature attack.
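To make the cost model concrete, the sketch below contrasts a dense attack, which spends its perturbation budget across every feature, with a sparse attack that concentrates on a single influential feature. The linear classifier, the weights `w`, the budget, and the cost weight `lam` are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's exact model): a linear
# classifier with weights w scores an instance x, and an adversary adds a
# perturbation delta to lower that score. A dense feature attack may modify
# every feature and is charged an l2-norm cost; an l1-norm cost instead
# favours sparse attacks that touch only a few features.

rng = np.random.default_rng(0)

w = rng.normal(size=20)   # hypothetical classifier weights
x = rng.normal(size=20)   # a malicious instance (e.g., a spam email)
lam = 0.5                 # assumed cost per unit of attack "size"

def adversary_utility(delta, norm="l2"):
    """Score reduction achieved by the attack, minus its cost."""
    score_drop = w @ x - w @ (x + delta)  # how far the score moves
    cost = lam * (np.linalg.norm(delta, 2) if norm == "l2"
                  else np.linalg.norm(delta, 1))
    return score_drop - cost

budget = 1.0

# Dense attack: spread the budget over all features, against the weights.
dense = -budget * w / np.linalg.norm(w)

# Sparse attack: spend the whole budget on the single heaviest feature.
sparse = np.zeros_like(w)
top = np.argmax(np.abs(w))
sparse[top] = -budget * np.sign(w[top])

print("dense attack utility (l2 cost):", adversary_utility(dense, "l2"))
print("sparse attack utility (l1 cost):", adversary_utility(sparse, "l1"))
```

Under the l2 cost both attacks pay the same for a unit budget, but the dense direction moves the score the most; penalising with the l1 norm instead is what makes attacks concentrated on a few features comparatively cheap, which is the distinction the abstract draws.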

Original language: English
Journal: IEEE Transactions on Knowledge and Data Engineering
Publication status: Accepted/In press - 6 Jan 2018


Keywords

  • Adversarial learning
  • Data models
  • Electronic mail
  • Game theory
  • Games
  • l1 regularizer
  • Nash equilibrium
  • Robustness
  • Sparse modelling
  • Stackelberg game
  • Transforms

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics
