📢 I am on the academic and industry job market for 2025/26
I am a Postdoctoral Fellow at Princeton Language and Intelligence, Princeton University. My research focuses on the fundamentals of artificial intelligence (AI). By combining mathematical analyses with systematic experimentation, I aim to develop theories that shed light on how modern AI works, identify potential failures, and yield principled methods for improving efficiency, reliability, and performance. Most recently, I have been working on language model post-training, including reinforcement learning and preference optimization approaches.
My work is supported in part by a Zuckerman Postdoctoral Scholarship. Previously, I obtained my PhD in Computer Science at Tel Aviv University, where I was fortunate to be advised by Nadav Cohen. During my PhD, I interned at Apple Machine Learning Research and the Microsoft Recommendations Team, and received the Apple Scholars in AI/ML and Tel Aviv University Center for AI & Data Science fellowships.
Dec 25: Why is Your Language Model a Poor Implicit Reward Model? received a best paper runner-up award at the NeurIPS 2025 Reliable Machine Learning from Unreliable Data Workshop!
Sep 25: Two papers accepted to NeurIPS 2025: one provides an optimization perspective on what makes a good reward model for RLHF and another proves that the implicit bias of state space models (SSMs) can be poisoned with clean labels.
Jul 25: New paper on why language models are often poor implicit reward models — they tend to rely on superficial token-level cues.
Oct 24: Honored to receive the Zuckerman and Israeli Council for Higher Education Postdoctoral Scholarships.
Sep 24: Joined Princeton Language and Intelligence as a Postdoctoral Fellow.
Aug 24: New lecture notes on the theory (and surprising practical applications) of linear neural networks.
May 24: Implicit Bias of Policy Gradient in Linear Quadratic Control: Extrapolation to Unseen Initial States accepted to ICML 2024.
Jan 24: Two papers accepted to ICLR 2024: one identifying a vanishing gradients problem when using reinforcement learning to finetune language models and another analyzing length generalization of Transformers.
Sep 23: Two papers accepted to NeurIPS 2023: one on the ability of graph neural networks to model interactions and another on what makes data suitable for locally connected neural networks.
Sep 23: Interned at Apple Machine Learning Research.
Mar 23: Honored to receive the Deutsch Prize in Computer Science for PhD candidates.
May 22: Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Neural Networks accepted to ICML 2022. 📝 Check out this blog post for an overview.
Mar 22: Honored to receive the 2022 Apple Scholars in AI/ML PhD fellowship.
Oct 21: Honored to receive the Tel Aviv University Center for AI & Data Science excellence fellowship.
May 21: Implicit Regularization in Tensor Factorization accepted to ICML 2021. 📝 Check out this blog post for an overview.
Sep 20: Implicit Regularization in Deep Learning May Not Be Explainable by Norms accepted to NeurIPS 2020. 📝 Check out this blog post for an overview.
Analyses of Policy Gradient for Language Model Finetuning and Optimal Control
MPI MiS + UCLA Math Machine Learning Seminar, March 2024
Video
Slides
Two Analyses of Modern Deep Learning: Graph Neural Networks and Language Model Finetuning
Princeton Alg-ML Seminar, December 2023
Slides
On the Ability of Graph Neural Networks to Model Interactions Between Vertices
Learning on Graphs and Geometry Reading Group, January 2023
Video
Slides
Generalization in Deep Learning Through the Lens of Implicit Rank Lowering
ICTP Youth in High-Dimensions: Recent Progress in Machine Learning, High-Dimensional Statistics and Inference, June 2022
Video
Slides
Generalization in Deep Learning Through the Lens of Implicit Rank Lowering
MPI MiS + UCLA Math Machine Learning Seminar, May 2022
Slides
Implicit Regularization in Tensor Factorization
The Hebrew University Machine Learning Club, Jerusalem, Israel, June 2021
Video
Slides
Implicit Regularization in Deep Learning May Not Be Explainable by Norms
Tel Aviv University Machine Learning Seminar, Tel Aviv, Israel, May 2020
Slides
Implicit Regularization in Hierarchical Tensor Factorization and Deep Convolutional Networks, Off the Convex Path, July 2022
Implicit Regularization in Tensor Factorization: Can Tensor Rank Shed Light on Generalization in Deep Learning?, Off the Convex Path, July 2021
Can Implicit Regularization in Deep Learning Be Explained by Norms? (by Nadav Cohen), Off the Convex Path, November 2020
2025: Guest Lecturer, Fundamentals of Deep Learning (COS 514), Princeton University
2025: Guest Lecturer, Introduction to Reinforcement Learning (COS 435), Princeton University
2021 to 2024: Guest Lecturer, First Steps in Research Honors Seminar, Tel Aviv University
2021 to 2023: Teaching Assistant, Foundations of Deep Learning (course #0368-3080), Tel Aviv University