Mor Geva
Allen Institute for AI
Verified email at allenai.org
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
M Geva, Y Goldberg, J Berant
EMNLP, 2019
Cited by 204, 2019
Injecting Numerical Reasoning Skills into Language Models
M Geva, A Gupta, J Berant
ACL, 2020
Cited by 95, 2020
Break It Down: A Question Understanding Benchmark
T Wolfson, M Geva, A Gupta, M Gardner, Y Goldberg, D Deutch, J Berant
TACL, 2020
Cited by 87, 2020
Transformer Feed-Forward Layers Are Key-Value Memories
M Geva, R Schuster, J Berant, O Levy
arXiv preprint arXiv:2012.14913, 2020
Cited by 36, 2020
Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
M Geva, D Khashabi, E Segal, T Khot, D Roth, J Berant
Transactions of the Association for Computational Linguistics 9, 346-361, 2021
Cited by 34, 2021
DiscoFuse: A Large-Scale Dataset for Discourse-based Sentence Fusion
M Geva, E Malmi, I Szpektor, J Berant
NAACL-HLT 1, 3443-3455, 2019
Cited by 33, 2019
Emergence of Communication in an Interactive World with Consistent Speakers
B Bogin, M Geva, J Berant
arXiv preprint arXiv:1809.00549, 2018
Cited by 27, 2018
Evaluating Semantic Parsing Against a Simple Web-Based Question Answering Model
A Talmor, M Geva, J Berant
*SEM, 2017
Cited by 14, 2017
Learning to Search in Long Documents Using Document Structure
M Geva, J Berant
COLING, 2018
Cited by 12, 2018
SCROLLS: Standardized CompaRison Over Long Language Sequences
U Shaham, E Segal, M Ivgi, A Efrat, O Yoran, A Haviv, A Gupta, W Xiong, ...
arXiv preprint arXiv:2201.03533, 2022
Cited by 8, 2022
Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
Cited by 7, 2022
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
M Geva, Y Goldberg, J Berant
2019
Cited by 6, 2019
Break, Perturb, Build: Automatic Perturbation of Reasoning Paths through Question Decomposition
M Geva, T Wolfson, J Berant
Transactions of the Association for Computational Linguistics 10, 111-126, 2022
Cited by 5, 2022
Transformer Feed-Forward Layers Are Key-Value Memories
M Geva, R Schuster, J Berant, O Levy
2021
Cited by 4, 2021
Don't Blame the Annotator: Bias Already Starts in the Annotation Instructions
M Parmar, S Mishra, M Geva, C Baral
arXiv preprint arXiv:2205.00415, 2022
Cited by 3, 2022
Transformer Feed-Forward Layers Build Predictions by Promoting Concepts in the Vocabulary Space
M Geva, A Caciularu, KR Wang, Y Goldberg
arXiv preprint arXiv:2203.14680, 2022
Cited by 3, 2022
What's in your Head? Emergent Behaviour in Multi-Task Transformer Models
M Geva, U Katz, A Ben-Arie, J Berant
arXiv preprint arXiv:2104.06129, 2021
Cited by 2, 2021
Inferring Implicit Relations with Language Models
U Katz, M Geva, J Berant
arXiv preprint arXiv:2204.13778, 2022
Cited by 1, 2022
LM-Debugger: An Interactive Tool for Inspection and Intervention in Transformer-Based Language Models
M Geva, A Caciularu, G Dar, P Roit, S Sadde, M Shlain, B Tamir, ...
arXiv preprint arXiv:2204.12130, 2022
Cited by 1, 2022
Analyzing Transformers in Embedding Space
G Dar, M Geva, A Gupta, J Berant
arXiv preprint arXiv:2209.02535, 2022
2022