I am a Postdoctoral Fellow at Princeton Language and Intelligence, Princeton University. My research focuses on the fundamentals of deep learning and modern artificial intelligence (AI) systems. By combining mathematical analysis with systematic experimentation, I aim to develop theories that shed light on how modern AI works, identify potential failures, and yield principled methods for improving efficiency, reliability, and performance.
My work is supported in part by a Zuckerman Postdoctoral Scholarship. Previously, I obtained my PhD in Computer Science at Tel Aviv University, where I was fortunate to be advised by Nadav Cohen. During my PhD, I interned at Apple Machine Learning Research and the Microsoft Recommendations Team, and received the Apple Scholars in AI/ML and the Tel Aviv University Center for AI & Data Science fellowships.
I am joining the Computer Science & AI Department at Bar-Ilan University as an Assistant Professor this summer! If you are interested in joining my group, please see the note below.
Research Approach
My group will focus on advancing the theoretical foundations of deep learning and modern AI systems, with the goal of deriving actionable insights that inform practice. This type of research, which bridges theory and practice, typically involves both rigorous mathematical analysis and systematic experimentation. You can find more details in the materials on this website (papers, talks, etc.).
Mentorship
I am looking for highly motivated MSc and PhD students with a strong academic record to join this effort. I will be deeply invested in your growth as a researcher. Together, we will identify directions that excite you, navigate through challenges, and create space for the kind of open-ended exploration that leads to original and impactful work.
Reaching Out
If you are interested in joining, feel free to reach out via email. Please include your CV and grade transcripts so I can better understand your background.
Recent Research
Recently, I have been working on aspects of converting language models into useful AI systems (i.e., post-training), including failures of preference-based alignment [1] and policy gradient methods [2], what makes a good proxy reward function [3, 4], catastrophic forgetting [5], and reward model generalization [6].
News
A new paper categorizes imperfect proxy rewards according to their effect on policy gradient optimization. It highlights that, although incorrect rewards are conventionally viewed as harmful, they can also be benign or even beneficial!
Why is Your Language Model a Poor Implicit Reward Model? received a best paper runner-up award at the NeurIPS 2025 Reliable Machine Learning from Unreliable Data Workshop and was accepted to ICLR 2026!
Two papers accepted to NeurIPS 2025: one provides an optimization perspective on what makes a good reward model for RLHF and another proves that the implicit bias of state space models (SSMs) can be poisoned with clean labels.
Publications
See also Google Scholar
* indicates equal contribution
Understanding Deep Learning via Notions of Rank
Noam Razin
arXiv:2408.02111 (PhD thesis), 2024
Lecture Notes on Linear Neural Networks: A Tale of Optimization and Generalization in Deep Learning
Nadav Cohen, Noam Razin
arXiv:2408.13767 (lecture notes), 2024
RecoBERT: A Catalog Language Model for Text-Based Recommendations
Itzik Malkiel, Oren Barkan, Avi Caciularu, Noam Razin, Ori Katz, Noam Koenigstein
Findings of the Association for Computational Linguistics: EMNLP, 2020
Selected Talks
Unintentional Unalignment: Likelihood Displacement in Direct Preference Optimization
Deep Learning: Classics and Trends Seminar · Jan 2025
Two Analyses of Modern Deep Learning: Graph Neural Networks and Language Model Finetuning
Princeton Alg-ML Seminar · Dec 2023
Implicit Regularization in Deep Learning May Not Be Explainable by Norms
Tel Aviv University Machine Learning Seminar · May 2020
Teaching
Fundamentals of Deep Learning
Guest Lecturer · Princeton University · 2025
Introduction to Reinforcement Learning
Guest Lecturer · Princeton University · 2025
First Steps in Research Honors Seminar
Guest Lecturer · Tel Aviv University · 2021–2024
Foundations of Deep Learning
Teaching Assistant · Tel Aviv University · 2021–2023