Identifying and controlling important neurons in neural machine translation

Anthony Bau, Nadir Durrani, Yonatan Belinkov, Fahim Dalvi, Hassan Sajjad, James Glass

Research output: Contribution to conference › Paper

Abstract

Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.
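
The cross-model intuition in the abstract lends itself to a short illustration. The sketch below is a rough reconstruction, not the authors' released code: the array shapes, function names, and the activation-override helper are assumptions. It ranks neurons of one model by their maximum Pearson correlation with neurons of independently trained models, and shows how a single neuron's activation could be overridden, in the spirit of the paper's controlled-translation experiments.

import numpy as np

def max_corr_ranking(main_acts, other_models_acts):
    """Score each neuron of one model by its maximum |Pearson correlation|
    with any neuron of any other model, then rank neurons by that score.

    main_acts: (num_tokens, num_neurons) activations over a shared corpus.
    other_models_acts: list of (num_tokens, num_neurons_i) arrays, one per
    independently trained model, computed over the same tokens.
    """
    num_neurons = main_acts.shape[1]
    scores = np.zeros(num_neurons)
    for other_acts in other_models_acts:
        # np.corrcoef treats rows as variables, so pass neurons as rows.
        joint = np.corrcoef(main_acts.T, other_acts.T)
        # Cross-model block: correlations between main and other neurons.
        cross = np.abs(joint[:num_neurons, num_neurons:])
        scores = np.maximum(scores, cross.max(axis=1))
    ranking = np.argsort(-scores)  # most "important" neurons first
    return ranking, scores

def override_neuron(activations, neuron_idx, value):
    """Clamp one neuron to a fixed value before it feeds later layers
    (hypothetical helper illustrating the activation-modification idea)."""
    modified = activations.copy()
    modified[:, neuron_idx] = value
    return modified

# Toy usage with random activations standing in for real NMT hidden states.
rng = np.random.default_rng(0)
model_a = rng.standard_normal((1000, 500))                 # (tokens, neurons)
others = [rng.standard_normal((1000, 500)) for _ in range(2)]
ranking, scores = max_corr_ranking(model_a, others)
print(ranking[:10], scores[ranking[:10]])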

Original language: English
Publication status: Published - 1 Jan 2019
Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
Duration: 6 May 2019 - 9 May 2019

Conference

Conference: 7th International Conference on Learning Representations, ICLR 2019
Country: United States
City: New Orleans
Period: 6/5/19 - 9/5/19


ASJC Scopus subject areas

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics

Cite this

Bau, A., Durrani, N., Belinkov, Y., Dalvi, F., Sajjad, H., & Glass, J. (2019). Identifying and controlling important neurons in neural machine translation. Paper presented at 7th International Conference on Learning Representations, ICLR 2019, New Orleans, United States.
