Paul Michel
Post-doc, Center for Data Science/LSCP, École Normale Supérieure
DyNet: The Dynamic Neural Network Toolkit
G Neubig, C Dyer, Y Goldberg, A Matthews, W Ammar, A Anastasopoulos, ...
arXiv preprint arXiv:1701.03980, 2017
Are Sixteen Heads Really Better than One?
P Michel, O Levy, G Neubig
NeurIPS 2019, 2019
MTNT: A Testbed for Machine Translation of Noisy Text
P Michel, G Neubig
EMNLP 2018, 2018
Weight Poisoning Attacks on Pre-trained Models
K Kurita, P Michel, G Neubig
ACL 2020, 2020
Extreme Adaptation for Personalized Neural Machine Translation
P Michel, G Neubig
ACL 2018, 2018
compare-mt: A Tool for Holistic Comparison of Language Generation Systems
G Neubig, ZY Dou, J Hu, P Michel, D Pruthi, X Wang
NAACL 2019 Demo, 2019
On Evaluation of Adversarial Perturbations for Sequence-to-Sequence Models
P Michel, X Li, G Neubig, JM Pino
NAACL 2019, 2019
Findings of the First Shared Task on Machine Translation Robustness
X Li, P Michel, A Anastasopoulos, Y Belinkov, N Durrani, O Firat, P Koehn, ...
WMT 2019, 2019
Optimizing Data Usage via Differentiable Rewards
X Wang, H Pham, P Michel, A Anastasopoulos, J Carbonell, G Neubig
ICML 2020, 2020
Blind phoneme segmentation with temporal prediction errors
P Michel, O Räsänen, R Thiolliere, E Dupoux
ACL SRW 2017, 2017
Findings of the WMT 2020 Shared Task on Machine Translation Robustness
L Specia, Z Li, J Pino, V Chaudhary, F Guzmán, G Neubig, N Durrani, ...
WMT 2020, 2020
Does the Geometry of Word Embeddings Help Document Classification? A Case Study on Persistent Homology Based Representations
P Michel, A Ravichander, S Rijhwani
Proceedings of the 2nd Workshop on Representation Learning for NLP, 2017
Modeling the Second Player in Distributionally Robust Optimization
P Michel, T Hashimoto, G Neubig
ICLR 2021, 2021
Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative
LM Dery, P Michel, A Talwalkar, G Neubig
arXiv preprint arXiv:2109.07437, 2021
Examining and Combating Spurious Features under Distribution Shift
C Zhou, X Ma, P Michel, G Neubig
ICML 2021, 2021
Regularizing Trajectories to Mitigate Catastrophic Forgetting
P Michel, E Salesky, G Neubig
Balancing Average and Worst-case Accuracy in Multitask Learning
P Michel, S Ruder, D Yogatama
arXiv preprint arXiv:2110.05838, 2021
Learning Neural Models for Natural Language Processing in the Face of Distributional Shift
P Michel
arXiv preprint arXiv:2109.01558, 2021