
Jonathan Frankle
@jefrankle
Chief AI Scientist @databricks via MosaicML. Pursuing data intelligence 🧱
ID: 2239670346
10-12-2013 19:35:42
3.3K Tweets
18.18K Followers
693 Following

We're finding that what's needed in RL for enterprise tasks is pretty different from what's needed in foundation model training on math, code, etc. Catch Jonathan Frankle and our team at ICML to talk about these problems!

This is a good opportunity to announce that I recently joined the research team at Databricks, where I will be working alongside Jonathan Frankle, Rishabh Singh, Matei Zaharia, Erich Elsen, and many others on the hardest problems at the intersection of information retrieval and AI.

The alignment we need in AI right now is integrity: between founders and investors, leaders and teams, builders and users, and among colleagues. It isn't a race to the bottom. People with values stand out. Something I'm proud to share with Michael Bendersky, Rishabh Singh, and Erich Elsen.


I’m presenting two papers on value-based RL for post-training & reasoning on Friday at the AI for Math Workshop at #ICML2025! 1️⃣ Q#: lays the theoretical foundations for value-based RL for post-training LMs; 2️⃣ VGS: practical value-guided search scaled up for long CoT reasoning. 🧵👇
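
For context, here is a hedged sketch of the standard KL-regularized post-training objective that value-based methods in this space typically build on. The symbols (reward r, temperature β, reference policy π_ref, optimal value Q*) are standard notation, and this is not necessarily the exact formulation used in the Q# paper.

```latex
% Standard KL-regularized post-training objective: maximize reward r while
% staying close to a reference policy \pi_{\mathrm{ref}} (temperature \beta).
\max_{\pi}\; \mathbb{E}_{y \sim \pi(\cdot \mid x)}\big[\, r(x, y) \,\big]
  \;-\; \beta\, \mathrm{KL}\big( \pi(\cdot \mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid x) \big)

% Under this objective the optimal policy factors through an optimal (soft)
% action-value function Q^{*}, which is why learning values can be enough:
\pi^{*}(a \mid s) \;\propto\; \pi_{\mathrm{ref}}(a \mid s)\,
  \exp\!\big( Q^{*}(s, a) / \beta \big)
```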

How can small LLMs match or even surpass frontier models like DeepSeek R1 and o3 Mini on math-competition reasoning (AIME & HMMT)? Prior work seems to suggest that ideas like PRMs do not really work or scale well for long-context reasoning. Kaiwen Wang will reveal how a novel
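
To make the idea concrete, here is a minimal sketch of value-guided search at inference time: a value model scores partial chains of thought, and a beam of the highest-value candidates is kept at each step. The helper names (generate_candidates, value_model, is_complete) are hypothetical placeholders, and this illustrates the general technique rather than the VGS paper's exact method.

```python
from typing import Callable, List, Tuple

def value_guided_search(
    prompt: str,
    generate_candidates: Callable[[str, int], List[str]],  # hypothetical: proposes next-step continuations
    value_model: Callable[[str], float],                    # hypothetical: scores a partial trajectory
    is_complete: Callable[[str], bool],                     # hypothetical: detects a finished solution
    beam_width: int = 4,
    expansions_per_beam: int = 4,
    max_steps: int = 64,
) -> str:
    """Beam search over reasoning steps, keeping the highest-value partial CoTs."""
    beams: List[Tuple[float, str]] = [(0.0, prompt)]
    for _ in range(max_steps):
        candidates: List[Tuple[float, str]] = []
        for _, partial in beams:
            if is_complete(partial):
                # Finished solutions stay in the pool, re-scored by the value model.
                candidates.append((value_model(partial), partial))
                continue
            for step in generate_candidates(partial, expansions_per_beam):
                extended = partial + step
                candidates.append((value_model(extended), extended))
        # Keep only the top-scoring trajectories for the next round of expansion.
        candidates.sort(key=lambda c: c[0], reverse=True)
        beams = candidates[:beam_width]
        if all(is_complete(p) for _, p in beams):
            break
    # Return the highest-value trajectory found.
    return max(beams, key=lambda c: c[0])[1]
```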
