Sluicing resolution is the task of identifying the antecedent to a question ellipsis. Antecedents are often sentential constituents, and previous work has therefore relied on syntactic parsing, together with complex linguistic features. A recent model instead used partial parsing as an auxiliary task in sequential neural network architectures to inject syntactic information. We explore the linguistic information being brought to bear by such networks, both by defining subsets of the data exhibiting relevant linguistic characteristics, and by examining the internal representations of the network. Both perspectives provide evidence for substantial linguistic knowledge being deployed by the neural networks.
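The multi-task setup described in the abstract is an instance of hard parameter sharing: one shared encoder feeds both a main ellipsis-resolution head and an auxiliary partial-parsing (chunking) head, so gradients from the auxiliary task shape the shared representation. The sketch below is only a minimal forward-pass illustration of that idea; all layer sizes, label counts, and the random data are hypothetical and not taken from the paper.

```python
import numpy as np

# Illustrative sketch of hard parameter sharing for multi-task learning:
# a shared encoder feeds two task-specific output layers (a main
# antecedent-identification head and an auxiliary chunking head).
# Dimensions and data are hypothetical, not from the paper.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Shared encoder weights (in training, updated by gradients from BOTH tasks).
W_shared = rng.normal(scale=0.1, size=(50, 32))  # 50-dim token vectors -> 32-dim shared state

# Task-specific heads on top of the shared representation.
W_main = rng.normal(scale=0.1, size=(32, 2))     # main task: antecedent token vs. not
W_aux = rng.normal(scale=0.1, size=(32, 5))      # auxiliary task: 5 chunk tags (hypothetical)

tokens = rng.normal(size=(7, 50))                # a 7-token "sentence"

h = relu(tokens @ W_shared)                      # shared per-token representation
p_main = softmax(h @ W_main)                     # per-token main-task label distribution
p_aux = softmax(h @ W_aux)                       # per-token auxiliary-task label distribution

print(p_main.shape, p_aux.shape)                 # (7, 2) (7, 5)
```

Because both heads read the same hidden states `h`, probing those states (as the paper does with internal representations) can reveal how much syntactic information the auxiliary task has injected.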
Title of host publication: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Editors: Tal Linzen, Grzegorz Chrupała, Afra Alishahi
Number of pages: 8
Place of publication: Brussels
Publisher: Association for Computational Linguistics
Publication status: Published - 2018
Event: 2018 Conference on Empirical Methods in Natural Language Processing, Square Meeting Center, Brussels, Belgium
Duration: 31 Oct 2018 – 4 Nov 2018
Rønning, O., Hardt, D., & Søgaard, A. (2018). Linguistic Representations in Multi-task Neural Networks for Ellipsis Resolution. In T. Linzen, G. Chrupała, & A. Alishahi (Eds.), Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (pp. 66-73). Association for Computational Linguistics.