Abstract
Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.
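The abstract's two key ideas lend themselves to a compact illustration. Below is a minimal sketch (in NumPy, not the authors' released code) of both: ranking one model's neurons by their best-correlated counterpart in an independently trained model, which operationalizes the intuition that different models learn similar properties, and clamping a single neuron's activation to steer decoding. All names here (`acts_a`, `rank_neurons_by_max_correlation`, `clamp_neuron`) are hypothetical, and the paper's own scoring variants may differ in detail.

```python
import numpy as np

def rank_neurons_by_max_correlation(acts_a, acts_b):
    """Rank model A's neurons by importance (most important first).

    acts_a, acts_b: (num_tokens, num_neurons) activations recorded while two
    independently trained models process the same corpus. A neuron's score is
    its maximum absolute Pearson correlation with any neuron in the other
    model: neurons capturing properties shared across models score high.
    """
    # Standardize each neuron so a scaled dot product equals Pearson's r.
    za = (acts_a - acts_a.mean(axis=0)) / (acts_a.std(axis=0) + 1e-8)
    zb = (acts_b - acts_b.mean(axis=0)) / (acts_b.std(axis=0) + 1e-8)
    corr = za.T @ zb / acts_a.shape[0]   # (neurons_a, neurons_b) correlations
    scores = np.abs(corr).max(axis=1)    # best match in the other model
    return np.argsort(-scores)           # neuron indices, descending score

def clamp_neuron(hidden, neuron, value):
    """Fix one neuron's activation during decoding to steer the output,
    e.g. pushing a tense- or gender-sensitive neuron to a chosen value."""
    hidden = hidden.copy()
    hidden[neuron] = value
    return hidden

# Toy usage with random activations standing in for real model states.
rng = np.random.default_rng(0)
acts_a = rng.standard_normal((1000, 512))
acts_b = rng.standard_normal((1000, 512))
ranking = rank_neurons_by_max_correlation(acts_a, acts_b)
print("Top 5 neurons of model A:", ranking[:5])
```

Ablating or clamping the top-ranked neurons and measuring the resulting drop in translation quality is the kind of experiment the abstract's claim that "translation quality depends on the discovered neurons" refers to.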
Original language | English |
---|---|
Publication status | Published - 1 Jan 2019 |
Event | 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States. Duration: 6 May 2019 → 9 May 2019 |
Conference
Conference | 7th International Conference on Learning Representations, ICLR 2019 |
---|---|
Country | United States |
City | New Orleans |
Period | 6/5/19 → 9/5/19 |
ASJC Scopus subject areas
- Education
- Computer Science Applications
- Linguistics and Language
- Language and Linguistics
Cite this
Identifying and controlling important neurons in neural machine translation. / Bau, Anthony; Durrani, Nadir; Belinkov, Yonatan; Dalvi, Fahim; Sajjad, Hassan; Glass, James.
2019. Paper presented at 7th International Conference on Learning Representations, ICLR 2019, New Orleans, United States.

Research output: Contribution to conference › Paper
TY - CONF
T1 - Identifying and controlling important neurons in neural machine translation
AU - Bau, Anthony
AU - Durrani, Nadir
AU - Belinkov, Yonatan
AU - Dalvi, Fahim
AU - Sajjad, Hassan
AU - Glass, James
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.
AB - Neural machine translation (NMT) models learn representations containing substantial linguistic information. However, it is not clear if such information is fully distributed or if some of it can be attributed to individual neurons. We develop unsupervised methods for discovering important neurons in NMT models. Our methods rely on the intuition that different models learn similar properties, and do not require any costly external supervision. We show experimentally that translation quality depends on the discovered neurons, and find that many of them capture common linguistic phenomena. Finally, we show how to control NMT translations in predictable ways, by modifying activations of individual neurons.
UR - http://www.scopus.com/inward/record.url?scp=85071199116&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85071199116&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85071199116
ER -