Comparing rewinding and fine-tuning in neural network pruning A Renda, J Frankle, M Carbin International Conference on Learning Representations, 2020 | 457 | 2020 |
Ithemal: Accurate, portable and fast basic block throughput estimation using deep neural networks C Mendis, A Renda, S Amarasinghe, M Carbin International Conference on Machine Learning, 2019 | 175 | 2019 |
DiffTune: Optimizing CPU simulator parameters with learned differentiable surrogates A Renda, Y Chen, C Mendis, M Carbin 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture …, 2020 | 42 | 2020 |
BHive: A benchmark suite and measurement framework for validating x86-64 basic block performance models Y Chen, A Brahmakshatriya, C Mendis, A Renda, E Atkinson, O Sykora, ... 2019 IEEE International Symposium on Workload Characterization (IISWC), 167-177, 2019 | 41 | 2019 |
TIRAMISU: A polyhedral compiler for dense and sparse deep learning R Baghdadi, AN Debbagh, K Abdous, FZ Benhamida, A Renda, ... arXiv preprint arXiv:2005.04091, 2020 | 15 | 2020 |
Can LLMs generate random numbers? Evaluating LLM sampling in controlled domains AK Hopkins, A Renda, M Carbin ICML 2023 Workshop: Sampling and Optimization in Discrete Space, 2023 | 11 | 2023 |
Comparing rewinding and fine-tuning in neural network pruning (arXiv version) A Renda, J Frankle, M Carbin arXiv preprint arXiv:2003.02389, 2020 | 7 | 2020 |
Programming with neural surrogates of programs A Renda, Y Ding, M Carbin Proceedings of the 2021 ACM SIGPLAN International Symposium on New Ideas …, 2021 | 6 | 2021 |
The effect of data dimensionality on neural network prunability Z Ankner, A Renda, GK Dziugaite, J Frankle, T Jin arXiv preprint arXiv:2212.00291, 2022 | 5 | 2022 |
Cello: Efficient computer systems optimization with predictive early termination and censored regression Y Ding, A Renda, A Pervaiz, M Carbin, H Hoffmann arXiv preprint arXiv:2204.04831, 2022 | 4 | 2022 |
COMET: X86 cost model explanation framework I Chaudhary, A Renda, C Mendis, G Singh arXiv preprint arXiv:2302.06836, 2023 | 1 | 2023 |
A Theory of Equivalence-Preserving Program Embeddings L Weber, J Michel, A Renda, S Amarasinghe, M Carbin | 1 | 2023 |
Renamer: A Transformer Architecture Invariant to Variable Renaming Z Ankner, A Renda, M Carbin | 1 | 2023 |
Programming Language Support for Natural Language Interaction A Renda, H Goldstein, S Bird, C Quirk, A Sampson 2018 SysML conference, 2018 | 1 | 2018 |
Learning to Compile Programs to Neural Networks L Weber, J Michel, A Renda, M Carbin arXiv preprint arXiv:2407.15078, 2024 | | 2024 |
COMET: Neural Cost Model Explanation Framework I Chaudhary, A Renda, C Mendis, G Singh Proceedings of Machine Learning and Systems 6, 499-511, 2024 | | 2024 |
Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates of Programs A Renda, Y Ding, M Carbin Proceedings of the ACM on Programming Languages 7 (OOPSLA2), 1648-1676, 2023 | | 2023 |
Fast Binarized Neural Network Training with Partial Pre-training A Renda, JW Fromm | | 2020 |
Abstractions for AI-Based User Interfaces and Systems A Renda, H Goldstein, S Bird, C Quirk, A Sampson arXiv preprint arXiv:1709.04991, 2017 | | 2017 |
Optimal Data Sampling for Training Neural Surrogates of Programs A Renda, Y Ding, M Carbin | | |