How to avoid unwanted pregnancies: Domain adaptation using neural network models

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

7 Citations (Scopus)

Abstract

We present novel models for domain adaptation based on the neural network joint model (NNJM). Our models maximize the cross entropy by regularizing the loss function with respect to the in-domain model. Domain adaptation is carried out by assigning higher weight to out-domain sequences that are similar to the in-domain data. In our alternative model we take a more restrictive approach by additionally penalizing sequences similar to the out-domain data. Our models achieve better perplexities than the baseline NNJM models and give improvements of up to 0.5 and 0.6 BLEU points in Arabic-to-English and English-to-German language pairs, on a standard task of translating TED talks.
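The weighting idea in the abstract can be illustrated with a minimal sketch: each out-domain training sequence receives a weight based on how much an in-domain model prefers it over an out-domain model, and that weight scales its contribution to the cross-entropy loss. The function names, the sigmoid weighting scheme, and the batch layout below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def sequence_weight(in_domain_logprob, out_domain_logprob):
    # Moore-Lewis-style score: sequences that the in-domain model
    # scores higher than the out-domain model get a weight near 1,
    # sequences it dislikes get a weight near 0.
    # (Illustrative sigmoid weighting, assumed for this sketch.)
    return 1.0 / (1.0 + math.exp(-(in_domain_logprob - out_domain_logprob)))

def weighted_cross_entropy(batch):
    # batch: list of (model_logprob, in_logprob, out_logprob) tuples,
    # one per training sequence. Returns the weight-normalized
    # negative log-likelihood over the batch.
    total, norm = 0.0, 0.0
    for model_lp, in_lp, out_lp in batch:
        w = sequence_weight(in_lp, out_lp)
        total += -w * model_lp  # down-weight sequences unlike in-domain data
        norm += w
    return total / norm if norm else 0.0
```

The paper's alternative model additionally penalizes sequences similar to the out-domain data, which in this sketch would correspond to a sharper weighting function rather than a different loss.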

Original language: English
Title of host publication: Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics (ACL)
Pages: 1259-1270
Number of pages: 12
ISBN (Print): 9781941643327
Publication status: Published - 2015
Event: Conference on Empirical Methods in Natural Language Processing, EMNLP 2015 - Lisbon, Portugal
Duration: 17 Sep 2015 - 21 Sep 2015

Other

Other: Conference on Empirical Methods in Natural Language Processing, EMNLP 2015
Country: Portugal
City: Lisbon
Period: 17/9/15 - 21/9/15

ASJC Scopus subject areas

  • Computational Theory and Mathematics
  • Computer Science Applications
  • Information Systems

Cite this

Rayhan Joty, S., Sajjad, H., Durrani, N., Al-Mannai, K., Abdelali, A., & Vogel, S. (2015). How to avoid unwanted pregnancies: Domain adaptation using neural network models. In Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing (pp. 1259-1270). Association for Computational Linguistics (ACL).

How to avoid unwanted pregnancies: Domain adaptation using neural network models. / Rayhan Joty, Shafiq; Sajjad, Hassan; Durrani, Nadir; Al-Mannai, Kamla; Abdelali, Ahmed; Vogel, Stephan.

Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), 2015. p. 1259-1270.

Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

Rayhan Joty, S, Sajjad, H, Durrani, N, Al-Mannai, K, Abdelali, A & Vogel, S 2015, How to avoid unwanted pregnancies: Domain adaptation using neural network models. in Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), pp. 1259-1270, Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, 17/9/15.
Rayhan Joty S, Sajjad H, Durrani N, Al-Mannai K, Abdelali A, Vogel S. How to avoid unwanted pregnancies: Domain adaptation using neural network models. In Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL). 2015. p. 1259-1270
Rayhan Joty, Shafiq; Sajjad, Hassan; Durrani, Nadir; Al-Mannai, Kamla; Abdelali, Ahmed; Vogel, Stephan. / How to avoid unwanted pregnancies: Domain adaptation using neural network models. Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (ACL), 2015. pp. 1259-1270
@inproceedings{a8148acf9a844203be9c1e12b80ecfe2,
title = "How to avoid unwanted pregnancies: Domain adaptation using neural network models",
abstract = "We present novel models for domain adaptation based on the neural network joint model (NNJM). Our models maximize the cross entropy by regularizing the loss function with respect to the in-domain model. Domain adaptation is carried out by assigning higher weight to out-domain sequences that are similar to the in-domain data. In our alternative model we take a more restrictive approach by additionally penalizing sequences similar to the out-domain data. Our models achieve better perplexities than the baseline NNJM models and give improvements of up to 0.5 and 0.6 BLEU points in Arabic-to-English and English-to-German language pairs, on a standard task of translating TED talks.",
author = "{Rayhan Joty}, Shafiq and Hassan Sajjad and Nadir Durrani and Kamla Al-Mannai and Ahmed Abdelali and Stephan Vogel",
year = "2015",
language = "English",
isbn = "9781941643327",
pages = "1259--1270",
booktitle = "Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing",
publisher = "Association for Computational Linguistics (ACL)",

}

TY - GEN

T1 - How to avoid unwanted pregnancies

T2 - Domain adaptation using neural network models

AU - Rayhan Joty, Shafiq

AU - Sajjad, Hassan

AU - Durrani, Nadir

AU - Al-Mannai, Kamla

AU - Abdelali, Ahmed

AU - Vogel, Stephan

PY - 2015

Y1 - 2015

N2 - We present novel models for domain adaptation based on the neural network joint model (NNJM). Our models maximize the cross entropy by regularizing the loss function with respect to the in-domain model. Domain adaptation is carried out by assigning higher weight to out-domain sequences that are similar to the in-domain data. In our alternative model we take a more restrictive approach by additionally penalizing sequences similar to the out-domain data. Our models achieve better perplexities than the baseline NNJM models and give improvements of up to 0.5 and 0.6 BLEU points in Arabic-to-English and English-to-German language pairs, on a standard task of translating TED talks.

AB - We present novel models for domain adaptation based on the neural network joint model (NNJM). Our models maximize the cross entropy by regularizing the loss function with respect to the in-domain model. Domain adaptation is carried out by assigning higher weight to out-domain sequences that are similar to the in-domain data. In our alternative model we take a more restrictive approach by additionally penalizing sequences similar to the out-domain data. Our models achieve better perplexities than the baseline NNJM models and give improvements of up to 0.5 and 0.6 BLEU points in Arabic-to-English and English-to-German language pairs, on a standard task of translating TED talks.

UR - http://www.scopus.com/inward/record.url?scp=84959872416&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84959872416&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:84959872416

SN - 9781941643327

SP - 1259

EP - 1270

BT - Conference Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing

PB - Association for Computational Linguistics (ACL)

ER -