
Probing for Referential Information in Language Models

Research Area: Physics
Title: Probing for Referential Information in Language Models
Publication Type: Conference Proceedings
Year of Publication: 2020
Authors: Sorodoc, I-T, Gulordava, K, Boleda, G
Journal: 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020)
Pages: 4177-4189
Publisher: Association for Computational Linguistics
Conference Location: Stroudsburg, PA 18360, USA
ISBN: 978-1-952148-25-5
Abstract

Language models keep track of complex linguistic information about the preceding context, including, e.g., syntactic relations in a sentence. We investigate whether they also capture information beneficial for resolving pronominal anaphora in English. We analyze two state-of-the-art models with LSTM and Transformer architectures, respectively, using probe tasks on a coreference-annotated corpus. Our hypothesis is that language models will capture grammatical properties of anaphora (such as agreement between a pronoun and its antecedent), but not semantico-referential information (the fact that pronoun and antecedent refer to the same entity). Instead, we find evidence that models capture referential aspects to some extent, though they are still much better at grammar. The Transformer outperforms the LSTM in all analyses, and in particular exhibits better semantico-referential abilities.
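
As a rough illustration of the probing methodology the abstract describes, the sketch below trains a diagnostic probe: a linear classifier over frozen hidden states of a pretrained Transformer language model, predicting a grammatical property of a pronoun (here, the number of its antecedent). The model choice (gpt2 via the Hugging Face transformers library), the toy sentences, and the label set are illustrative assumptions, not the authors' actual experimental setup.

# A minimal probing sketch, not the authors' setup: extract frozen hidden
# states at pronoun positions from a pretrained Transformer LM and train a
# linear classifier to predict the antecedent's number. Model name, toy
# data, and labels are assumptions made for illustration.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

# Toy examples: (sentence, pronoun, number of the pronoun's antecedent).
examples = [
    ("The lawyer said that she would call back.", "she", "singular"),
    ("The engineers promised that they would finish.", "they", "plural"),
    ("My neighbor insisted that he had seen it.", "he", "singular"),
    ("The students complained that they were tired.", "they", "plural"),
]

def pronoun_state(sentence, pronoun):
    """Final-layer hidden state at the pronoun's token position."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    # GPT-2 marks word-initial tokens with "Ġ"; take the last exact match.
    idx = max(i for i, t in enumerate(tokens) if t.lstrip("Ġ") == pronoun)
    return hidden[idx]

X = torch.stack([pronoun_state(s, p) for s, p, _ in examples]).numpy()
y = [label for _, _, label in examples]

# The probe itself: if a linear model can read the property off frozen LM
# states, the representation encodes that information.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", probe.score(X, y))

A referential probe would be set up analogously, e.g. predicting from a pair of hidden states whether a candidate noun phrase and a pronoun refer to the same entity.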