What do neural machine translation models learn about morphology?

Yonatan Belinkov, Nadir Durrani, Fahim Dalvi, Hassan Sajjad, James Glass

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

20 Citations (Scopus)

Abstract

Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture. However, little is known about what these models learn about source and target languages during the training process. In this work, we analyze the representations learned by neural MT models at various levels of granularity and empirically evaluate the quality of the representations for learning morphology through extrinsic part-of-speech and morphological tagging tasks. We conduct a thorough investigation along several parameters: word-based vs. character-based representations, depth of the encoding layer, the identity of the target language, and encoder vs. decoder representations. Our data-driven, quantitative evaluation sheds light on important aspects in the neural MT system and its ability to capture word structure.
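The abstract describes an extrinsic probing setup: hidden representations are extracted from a trained NMT model and a separate classifier is trained to predict part-of-speech tags from them, so that probe accuracy measures how much morphological information the representations encode. The following is a minimal illustrative sketch of that idea, not the authors' code; the synthetic "encoder states", dimensions, and the plain softmax probe are all assumptions for demonstration.

```python
# Hypothetical sketch of a probing classifier: freeze some encoder states,
# then train a linear softmax probe to predict POS tags from them.
# All data here is simulated; in the real setup the states would come
# from a trained NMT encoder layer.
import numpy as np

rng = np.random.default_rng(0)

# Simulated encoder states: 200 tokens, 16-dim vectors, 3 POS classes.
num_tokens, hidden_dim, num_tags = 200, 16, 3
tag_means = rng.normal(size=(num_tags, hidden_dim))      # per-class structure
tags = rng.integers(0, num_tags, size=num_tokens)        # gold POS tags
states = tag_means[tags] + 0.5 * rng.normal(size=(num_tokens, hidden_dim))

# Linear softmax probe trained with plain gradient descent.
W = np.zeros((hidden_dim, num_tags))
b = np.zeros(num_tags)
onehot = np.eye(num_tags)[tags]
for _ in range(300):
    logits = states @ W + b
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    grad = (probs - onehot) / num_tokens
    W -= 1.0 * (states.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

# Probe accuracy serves as the extrinsic measure of how much POS
# information the (here simulated) representations encode.
accuracy = (np.argmax(states @ W + b, axis=1) == tags).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

In the paper's actual experiments, probes of this kind are trained on representations from different encoder/decoder layers and from word- vs. character-based models to compare how well each captures word structure.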

Original language: English
Title of host publication: ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers)
Publisher: Association for Computational Linguistics (ACL)
Pages: 861-872
Number of pages: 12
Volume: 1
ISBN (Electronic): 9781945626753
DOI: 10.18653/v1/P17-1080
Publication status: Published - 1 Jan 2017
Event: 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017 - Vancouver, Canada
Duration: 30 Jul 2017 - 4 Aug 2017


ASJC Scopus subject areas

  • Language and Linguistics
  • Artificial Intelligence
  • Software
  • Linguistics and Language

Cite this

Belinkov, Y., Durrani, N., Dalvi, F., Sajjad, H., & Glass, J. (2017). What do neural machine translation models learn about morphology? In ACL 2017 - 55th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers) (Vol. 1, pp. 861-872). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/P17-1080

