Visualisation and 'diagnostic classifiers' reveal how recurrent and recursive neural networks process hierarchical structure. D Hupkes, S Veldhoen, W Zuidema. Journal of Artificial Intelligence Research 61, 907-926, 2018 | 161 | 2018 |
Compositionality decomposed: how do neural networks generalise? D Hupkes, V Dankers, M Mul, E Bruni. Journal of Artificial Intelligence Research 67, 757-795, 2020 | 107* | 2020 |
Under the hood: Using diagnostic classifiers to investigate and improve how language models track agreement information. M Giulianelli, J Harding, F Mohnert, D Hupkes, W Zuidema. arXiv preprint arXiv:1808.08079, 2018 | 102 | 2018 |
The emergence of number and syntax units in LSTM language models. Y Lakretz, G Kruszewski, T Desbordes, D Hupkes, S Dehaene, M Baroni. arXiv preprint arXiv:1903.07435, 2019 | 94 | 2019 |
Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. K Sinha, R Jia, D Hupkes, J Pineau, A Williams, D Kiela. arXiv preprint arXiv:2104.06644, 2021 | 60 | 2021 |
Do language models understand anything? On the ability of LSTMs to understand negative polarity items. J Jumelet, D Hupkes. arXiv preprint arXiv:1808.10627, 2018 | 36 | 2018 |
Diagnostic classifiers: revealing how neural networks process hierarchical structure. S Veldhoen, D Hupkes, WH Zuidema. CoCo@NIPS, 2016 | 30 | 2016 |
Analysing neural language models: Contextual decomposition reveals default reasoning in number and gender assignment. J Jumelet, W Zuidema, D Hupkes. arXiv preprint arXiv:1909.08975, 2019 | 26 | 2019 |
Co-evolution of language and agents in referential games. G Dagan, D Hupkes, E Bruni. arXiv preprint arXiv:2001.03361, 2020 | 20 | 2020 |
Transcoding compositionally: using attention to find more generalizable solutions. K Korrel, D Hupkes, V Dankers, E Bruni. arXiv preprint arXiv:1906.01234, 2019 | 20 | 2019 |
Learning compositionally through attentive guidance. D Hupkes, A Singh, K Korrel, G Kruszewski, E Bruni. arXiv preprint arXiv:1805.09657, 2018 | 18 | 2018 |
Mechanisms for handling nested dependencies in neural-network language models and humans. Y Lakretz, D Hupkes, A Vergallito, M Marelli, M Baroni, S Dehaene. Cognition 213, 104699, 2021 | 15 | 2021 |
On the realization of compositionality in neural networks. J Baan, J Leible, M Nikolaus, D Rau, D Ulmer, T Baumgärtner, D Hupkes, ... arXiv preprint arXiv:1906.01634, 2019 | 13 | 2019 |
Internal and external pressures on language emergence: least effort, object constancy and frequency. DR Luna, EM Ponti, D Hupkes, E Bruni. arXiv preprint arXiv:2004.03868, 2020 | 11 | 2020 |
POS-tagging of Historical Dutch. D Hupkes, R Bod. Proceedings of the Tenth International Conference on Language Resources and …, 2016 | 10 | 2016 |
Location attention for extrapolation to longer sequences. Y Dubois, G Dagan, D Hupkes, E Bruni. arXiv preprint arXiv:1911.03872, 2019 | 9 | 2019 |
Analysing the potential of seq-to-seq models for incremental interpretation in task-oriented dialogue. D Hupkes, S Bouwmeester, R Fernández. arXiv preprint arXiv:1808.09178, 2018 | 7 | 2018 |
The paradox of the compositionality of natural language: a neural machine translation case study. V Dankers, E Bruni, D Hupkes. arXiv preprint arXiv:2108.05885, 2021 | 6 | 2021 |
Language models use monotonicity to assess NPI licensing. J Jumelet, M Denić, J Szymanik, D Hupkes, S Steinert-Threlkeld. arXiv preprint arXiv:2105.13818, 2021 | 5 | 2021 |
Language modelling as a multi-task problem. L Weber, J Jumelet, E Bruni, D Hupkes. arXiv preprint arXiv:2101.11287, 2021 | 5 | 2021 |