
Putting Words in Context: LSTM Language Models and Lexical Ambiguity

Research area: Physics
Title: Putting Words in Context: LSTM Language Models and Lexical Ambiguity
Publication type: Conference Proceedings
Year of publication: 2019
Authors: Aina, L, Gulordava, K, Boleda, G
Journal: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Pages: 3342–3348
Publisher: Association for Computational Linguistics
Conference location: Florence, Italy
Abstract

In neural network models of language, words are commonly represented using context-invariant representations (word embeddings) which are then put in context in the hidden layers. Since words are often ambiguous, representing the contextually relevant information is not trivial. We investigate how an LSTM language model deals with lexical ambiguity in English, designing a method to probe its hidden representations for lexical and contextual information about words. We find that both types of information are represented to a large extent, but also that there is room for improvement for contextual information.
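The abstract only summarizes the probing method, so the sketch below is an illustrative assumption rather than the authors' implementation: a minimal PyTorch example of a diagnostic (probing) classifier trained on the hidden state an LSTM language model produces at a target word, using toy dimensions and random data.

```python
# A minimal sketch (not the paper's code) of probing an LSTM language model:
# run the LM over token sequences, take the hidden state at a target word,
# and train a linear classifier to recover a lexical label from that state.
# Vocabulary size, dimensions, and data below are illustrative assumptions.

import torch
import torch.nn as nn

torch.manual_seed(0)

VOCAB_SIZE, EMB_DIM, HID_DIM, N_LABELS = 50, 32, 64, 5

class LSTMLanguageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMB_DIM)  # context-invariant word embeddings
        self.lstm = nn.LSTM(EMB_DIM, HID_DIM, batch_first=True)
        self.out = nn.Linear(HID_DIM, VOCAB_SIZE)       # next-word prediction head

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))        # (batch, seq, HID_DIM)
        return self.out(hidden), hidden

lm = LSTMLanguageModel()               # in practice this would be a pre-trained LM
probe = nn.Linear(HID_DIM, N_LABELS)   # linear diagnostic classifier over hidden states

# Toy probing data: token sequences, the position of the target word in each,
# and a label encoding lexical/contextual information about that word.
tokens = torch.randint(0, VOCAB_SIZE, (8, 10))
target_pos = torch.randint(0, 10, (8,))
labels = torch.randint(0, N_LABELS, (8,))

optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    with torch.no_grad():                                 # the language model stays frozen
        _, hidden = lm(tokens)
    target_states = hidden[torch.arange(8), target_pos]   # hidden state at the target word
    loss = loss_fn(probe(target_states), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final probe loss: {loss.item():.3f}")
```

The design choice reflected here is standard for probing studies: the language model is frozen and only the small classifier is trained, so probe accuracy indicates how much of the target information is already encoded in the hidden representations.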

URL: https://www.aclweb.org/anthology/P19-1324
DOI: 10.18653/v1/P19-1324