Adversarial attack, defense, and applications with deep learning frameworks

Zhizhou Yin, Wei Liu, Sanjay Chawla

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In recent years, deep learning frameworks have been applied in many domains and have achieved promising performance. However, recent work has demonstrated that deep learning frameworks are vulnerable to adversarial attacks: a trained neural network can be manipulated by small perturbations added to legitimate samples. In the computer vision domain, these small perturbations can be imperceptible to humans. As deep learning techniques have become the core of many security-critical applications, including identity recognition cameras, malware detection software, and self-driving cars, adversarial attacks have become a crucial security threat to many real-world deep learning applications. In this chapter, we first review some state-of-the-art adversarial attack techniques for deep learning frameworks in both white-box and black-box settings. We then discuss recent methods to defend against adversarial attacks on deep learning frameworks. Finally, we explore recent work applying adversarial attack techniques to some popular commercial deep learning applications, such as image classification, speech recognition, and malware detection. These projects demonstrate that many commercial deep learning frameworks are vulnerable to malicious cyber security attacks.
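As an illustrative sketch only (not code from the chapter), the white-box "small perturbation added to a legitimate sample" idea described above can be demonstrated with a single Fast Gradient Sign Method step in PyTorch. The classifier `model`, the input batch `x`, the labels `y`, and the `epsilon` budget are assumed placeholders.

```python
# Minimal FGSM-style white-box perturbation sketch (illustrative, assumed setup).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return an adversarial copy of x crafted against `model` for labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss on the legitimate labels
    loss.backward()                           # gradient of the loss w.r.t. the input
    with torch.no_grad():
        # step in the direction that increases the loss, bounded by epsilon
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)         # keep pixels in a valid range
    return x_adv.detach()
```

A perturbation budget of a few percent of the pixel range is typically enough to flip the prediction of an undefended image classifier while remaining hard for a human to notice.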

Original language: English
Title of host publication: Advanced Sciences and Technologies for Security Applications
Publisher: Springer
Pages: 1-25
Number of pages: 25
DOIs: 10.1007/978-3-030-13057-2_1
Publication status: Published - 1 Jan 2019

Publication series

Name: Advanced Sciences and Technologies for Security Applications
ISSN (Print): 1613-5113
ISSN (Electronic): 2363-9466

Keywords

  • Adversarial learning
  • Cyber security
  • Deep learning

ASJC Scopus subject areas

  • Safety, Risk, Reliability and Quality
  • Safety Research
  • Political Science and International Relations
  • Computer Science Applications
  • Computer Networks and Communications
  • Health, Toxicology and Mutagenesis

Cite this

Yin, Z., Liu, W., & Chawla, S. (2019). Adversarial attack, defense, and applications with deep learning frameworks. In Advanced Sciences and Technologies for Security Applications (pp. 1-25). (Advanced Sciences and Technologies for Security Applications). Springer. https://doi.org/10.1007/978-3-030-13057-2_1
