Mor Geva
Verified email at google.com - Homepage
Title
Cited by
Year
Are we modeling the task or the annotator? An investigation of annotator bias in natural language understanding datasets
M Geva, Y Goldberg, J Berant
EMNLP 2019, 2019
260, 2019
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
180, 2022
Injecting Numerical Reasoning Skills into Language Models
M Geva, A Gupta, J Berant
ACL 2020, 2020
144, 2020
Break It Down: A Question Understanding Benchmark
T Wolfson, M Geva, A Gupta, M Gardner, Y Goldberg, D Deutch, J Berant
TACL, 2020
128, 2020
Did Aristotle use a laptop? A question answering benchmark with implicit reasoning strategies
M Geva, D Khashabi, E Segal, T Khot, D Roth, J Berant
Transactions of the Association for Computational Linguistics 9, 346-361, 2021
123, 2021
Transformer feed-forward layers are key-value memories
M Geva, R Schuster, J Berant, O Levy
arXiv preprint arXiv:2012.14913, 2020
114, 2020
DiscoFuse: A Large-Scale Dataset for Discourse-based Sentence Fusion
M Geva, E Malmi, I Szpektor, J Berant
NAACL-HLT 2019 1, 3443-3455, 2019
42, 2019
Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space
M Geva, A Caciularu, KR Wang, Y Goldberg
arXiv preprint arXiv:2203.14680, 2022
34, 2022
Emergence of communication in an interactive world with consistent speakers
B Bogin, M Geva, J Berant
arXiv preprint arXiv:1809.00549, 2018
34, 2018
SCROLLS: Standardized comparison over long language sequences
U Shaham, E Segal, M Ivgi, A Efrat, O Yoran, A Haviv, A Gupta, W Xiong, ...
arXiv preprint arXiv:2201.03533, 2022
22, 2022
Evaluating semantic parsing against a simple web-based question answering model
A Talmor, M Geva, J Berant
*SEM 2017, 2017
16, 2017
Don't Blame the Annotator: Bias Already Starts in the Annotation Instructions
M Parmar, S Mishra, M Geva, C Baral
arXiv preprint arXiv:2205.00415, 2022
15, 2022
Learning to Search in Long Documents Using Document Structure
M Geva, J Berant
COLING 2018, 2018
14, 2018
LM-Debugger: An interactive tool for inspection and intervention in transformer-based language models
M Geva, A Caciularu, G Dar, P Roit, S Sadde, M Shlain, B Tamir, ...
arXiv preprint arXiv:2204.12130, 2022
12, 2022
Break, perturb, build: Automatic perturbation of reasoning paths through question decomposition
M Geva, T Wolfson, J Berant
Transactions of the Association for Computational Linguistics 10, 111-126, 2022
10, 2022
What's in your Head? Emergent Behaviour in Multi-Task Transformer Models
M Geva, U Katz, A Ben-Arie, J Berant
arXiv preprint arXiv:2104.06129, 2021
8, 2021
Analyzing transformers in embedding space
G Dar, M Geva, A Gupta, J Berant
arXiv preprint arXiv:2209.02535, 2022
7, 2022
Crawling the internal knowledge-base of language models
R Cohen, M Geva, J Berant, A Globerson
arXiv preprint arXiv:2301.12810, 2023
5, 2023
Jump to Conclusions: Short-Cutting Transformers With Linear Transformations
A Yom Din, T Karidi, L Choshen, M Geva
arXiv preprint arXiv:2303.09435, 2023
3*, 2023
Inferring implicit relations in complex questions with language models
U Katz, M Geva, J Berant
Findings of the Association for Computational Linguistics: EMNLP 2022, 2548-2566, 2022
3*, 2022
Articles 1–20