Sudarshan Babu (@sudarshanb263)'s Twitter Profile
Sudarshan Babu

@sudarshanb263

AI fellow at @CZbiohub; previously CS PhD at @TTIC_connect and postdoc at @Uchicago. Deep learning for drug discovery tools. Former Nvidia and Amazon intern.

ID: 1661507022527799297

Link: https://people.cs.uchicago.edu/~sudarshan/ | Joined: 24-05-2023 22:58:57

59 Tweets

44 Followers

220 Following

Brian Hie (@brianhie)'s Twitter Profile Photo

In new work led by Gokul Kannan with Peter Kim, we show that protein language models learn allosteric interactions without any explicit supervision, evidence that evaluating protein LMs solely on their ability to learn 3D structure may miss important functional information.
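
One standard way to evaluate what a protein LM captures beyond 3D structure is to extract its per-residue representations and train lightweight probes for functional properties (allosteric coupling, binding, etc.) on top of them. A minimal sketch of the embedding-extraction step using the open fair-esm package, as an illustration of that general workflow rather than this paper's method; the example sequence is made up:

```python
# Minimal sketch (not this paper's method): pulling per-residue ESM-2
# representations, on top of which lightweight functional probes can be trained.
# Assumes the open fair-esm package; the example sequence is made up.
import torch
import esm

model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("example_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
_, _, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[33])

# Per-residue embeddings from the final layer (drop BOS/EOS tokens).
# Probing these for functional properties is one way to evaluate a protein LM
# beyond contact/structure prediction.
residue_reprs = out["representations"][33][0, 1:-1]
print(residue_reprs.shape)  # (sequence_length, 1280)
```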

Leo Zang (@leotz03)'s Twitter Profile Photo

Protenix: Protein + X | ByteDance - a trainable PyTorch reproduction of AlphaFold 3. GitHub: github.com/bytedance/Prot…

Sudarshan Babu (@sudarshanb263)'s Twitter Profile Photo

“Hey Aravind Srinivas, Perplexity, would you guys consider presenting results in a Reddit-style format, where each ‘comment’ comes from an LLM predisposed to a different ideological perspective (liberal, conservative, etc.)? This would better mimic real discourse and help users

Slater Stich (@slaterstich)'s Twitter Profile Photo

Diffusion Without Tears is our attempt to make the score-matching + SDE interpretation of diffusion geometrically intuitive. If you're interested in our upcoming interview with Yang Song, I recommend reading this first! Link below.
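
As a companion to the score-matching + SDE view mentioned above, here is a toy sketch written from the standard formulation (variance-exploding SDE, denoising score matching, Euler-Maruyama on the reverse SDE). The architecture, toy dataset, and hyperparameters are my own illustration, not from the linked post:

```python
# Toy sketch of the score-matching + SDE picture on 2-D data, written from the
# standard formulation (VE SDE, denoising score matching, Euler-Maruyama on the
# reverse SDE). Architecture, data, and hyperparameters are illustrative only.
import math
import torch
import torch.nn as nn

sigma_min, sigma_max = 0.01, 5.0

def sigma(t):
    # Noise scale of the variance-exploding forward SDE at time t in [0, 1].
    return sigma_min * (sigma_max / sigma_min) ** t

score_net = nn.Sequential(nn.Linear(3, 128), nn.SiLU(),
                          nn.Linear(128, 128), nn.SiLU(),
                          nn.Linear(128, 2))

def data_batch(n=256):
    # Toy data: points on a ring of radius 2.
    theta = 2 * math.pi * torch.rand(n)
    return torch.stack([2 * torch.cos(theta), 2 * torch.sin(theta)], dim=1)

opt = torch.optim.Adam(score_net.parameters(), lr=1e-3)
for step in range(2000):
    x0 = data_batch()
    t = torch.rand(x0.shape[0], 1)
    eps = torch.randn_like(x0)
    xt = x0 + sigma(t) * eps
    # Denoising score matching: the score of p(x_t | x_0) is -eps / sigma(t),
    # so regress sigma(t) * s_theta(x_t, t) onto -eps.
    pred = score_net(torch.cat([xt, t], dim=1))
    loss = ((sigma(t) * pred + eps) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: Euler-Maruyama on the reverse SDE
#   dx = -g(t)^2 * score(x, t) dt + g(t) dW_bar,  with g(t)^2 = d sigma^2(t)/dt.
@torch.no_grad()
def sample(n=512, steps=500):
    x = sigma_max * torch.randn(n, 2)
    dt = 1.0 / steps
    for i in range(steps, 0, -1):
        t = torch.full((n, 1), i / steps)
        g2 = sigma(t) ** 2 * 2 * math.log(sigma_max / sigma_min)
        score = score_net(torch.cat([x, t], dim=1))
        x = x + g2 * score * dt + g2.sqrt() * math.sqrt(dt) * torch.randn_like(x)
    return x

samples = sample()
```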

Tanishq Mathew Abraham, Ph.D. (@iscienceluvr)'s Twitter Profile Photo

Inductive Moment Matching

Luma AI introduces a new class of generative models for one- or few-step sampling with a single-stage training procedure.

Surpasses diffusion models on ImageNet-256×256 with 1.99 FID using only 8 inference steps and achieves state-of-the-art 2-step

Chaitanya K. Joshi @ICLR2025 🇸🇬 (@chaitjo)'s Twitter Profile Photo

Flow matching has gained popularity recently.

Which is better, diffusion or flow matching?

They are formally equivalent.

Our purpose is to help practitioners understand and use these frameworks interchangeably -- **regardless of what it’s called**.
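
For a concrete sense of the equivalence: with a Gaussian source and a Gaussian probability path, the flow matching velocity target and the diffusion noise/score target are related by a time-dependent affine reparametrization, so a model trained one way can be read out the other way. Below is a minimal conditional flow matching sketch on toy data (my own illustration, not code from the linked material):

```python
# Minimal conditional flow matching sketch (linear interpolation path) on toy
# 2-D data, to make the parallel with diffusion training concrete.
# Illustrative only; not code from the thread or paper it accompanies.
import torch
import torch.nn as nn

velocity_net = nn.Sequential(nn.Linear(3, 128), nn.SiLU(),
                             nn.Linear(128, 128), nn.SiLU(),
                             nn.Linear(128, 2))

def data_batch(n=256):
    # Toy data: two Gaussian blobs.
    centers = torch.tensor([[-2.0, 0.0], [2.0, 0.0]])
    return centers[torch.randint(0, 2, (n,))] + 0.3 * torch.randn(n, 2)

opt = torch.optim.Adam(velocity_net.parameters(), lr=1e-3)
for step in range(2000):
    x1 = data_batch()              # data sample
    x0 = torch.randn_like(x1)      # noise sample
    t = torch.rand(x1.shape[0], 1)
    xt = (1 - t) * x0 + t * x1     # straight-line path between noise and data
    target_v = x1 - x0             # conditional velocity along that path
    pred_v = velocity_net(torch.cat([xt, t], dim=1))
    loss = ((pred_v - target_v) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: integrate the learned ODE dx/dt = v_theta(x, t) from noise (t=0)
# to data (t=1).
@torch.no_grad()
def sample(n=512, steps=100):
    x = torch.randn(n, 2)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((n, 1), i / steps)
        x = x + velocity_net(torch.cat([x, t], dim=1)) * dt
    return x

samples = sample()
```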

Nathan C. Frey (@nc_frey)'s Twitter Profile Photo

Introducing Open Molecules 25, a foundational quantum chemistry dataset including >100M DFT calculations across 83M unique molecules, built with 6B core hours of compute!

What does this mean for drug discovery, biology, and BioML?

1/

Prof. Lee Cronin (@leecronin)'s Twitter Profile Photo

It is trivial to explain why an LLM can never ever be conscious or intelligent. Utterly trivial. It goes like this: LLMs have zero causal power. Zero agency. Zero internal monologue. Zero abstracting ability. Zero understanding of the world. They are tools for conscious beings.

ChessBase India (@chessbaseindia)'s Twitter Profile Photo

That moment when World Champion Gukesh D won his game against World no. 1 Magnus Carlsen! Video: Aditya Sur Roy / ChessBase India #chess #chessbaseindia #norwaychess #gukesh

Sergey Levine (@svlevine)'s Twitter Profile Photo

I always found it puzzling how language models learn so much from next-token prediction, while video models learn so little from next frame prediction. Maybe it's because LLMs are actually brain scanners in disguise. Idle musings in my new blog post: sergeylevine.substack.com/p/language-mod…

Giannis Daras (@giannis_daras)'s Twitter Profile Photo

Announcing Ambient Diffusion Omni — a framework that uses synthetic, low-quality, and out-of-distribution data to improve diffusion models.

State-of-the-art ImageNet performance. Strong text-to-image results in just 2 days on 8 GPUs.

Filtering ❌
Clever data use ✅
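
Very roughly, the "clever data use" idea in this line of work is that lower-quality or off-distribution samples can still supervise the high-noise part of the diffusion process, where their defects are small relative to the added noise. A heavily simplified sketch of that idea follows; it is my own illustration, and the actual Ambient Diffusion Omni recipe differs in its details:

```python
# Heavily simplified sketch: instead of filtering low-quality samples out, let
# each sample contribute to the denoising loss only at noise levels at or above
# a per-sample threshold. Interfaces and thresholds here are assumptions.
import torch

def masked_denoising_loss(model, x, min_sigma, sigma):
    """x: (n, c, h, w) images; min_sigma: (n,) per-sample minimum usable noise
    level (0 for clean data, larger for lower-quality data); sigma: (n,) sampled
    noise levels. `model(x_noisy, sigma)` is assumed to predict the added noise."""
    eps = torch.randn_like(x)
    x_noisy = x + sigma.view(-1, 1, 1, 1) * eps
    per_sample = ((model(x_noisy, sigma) - eps) ** 2).flatten(1).mean(dim=1)
    mask = (sigma >= min_sigma).float()  # low-quality data only trains the high-noise regime
    return (mask * per_sample).sum() / mask.sum().clamp(min=1)

# Toy usage with a stand-in model that predicts zeros.
dummy_model = lambda x_noisy, sigma: torch.zeros_like(x_noisy)
x = torch.randn(8, 3, 32, 32)
min_sigma = torch.tensor([0.0] * 4 + [0.5] * 4)  # last four samples are "low quality"
sigma = torch.rand(8) * 2
print(masked_denoising_loss(dummy_model, x, min_sigma, sigma))
```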

Peyman Milanfar (@docmilanfar)'s Twitter Profile Photo

A good denoiser learns the geometry of the image manifold. Thus it makes perfect sense to use denoisers to regularize ill-posed problems. This was a key reason we proposed Regularization by Denoising (RED) in 2016. A modest landmark in citations, but big impact in practice.

1/4
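
For readers who haven't met RED: the regularizer is rho(x) = (lambda/2) * x^T (x - D(x)), and under the paper's conditions its gradient is simply lambda * (x - D(x)), so an off-the-shelf denoiser D slots into any gradient-based solver for an inverse problem. A minimal steepest-descent sketch follows; the Gaussian-blur "denoiser" and the inpainting forward operator are stand-ins of my own, not the paper's setup:

```python
# Minimal RED (Regularization by Denoising) sketch via steepest descent.
# The Gaussian-blur denoiser and the inpainting forward operator are stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoiser(x):
    return gaussian_filter(x, sigma=1.0)  # placeholder for a learned denoiser D

def red_restore(y, mask, lam=0.2, mu=0.5, iters=200):
    """Recover x from masked observations y = mask * x + noise.
    Objective: 0.5 * ||mask * x - y||^2 + 0.5 * lam * x^T (x - D(x)),
    whose gradient (under RED's assumptions) is mask*(mask*x - y) + lam*(x - D(x))."""
    x = y.copy()
    for _ in range(iters):
        data_grad = mask * (mask * x - y)
        red_grad = lam * (x - denoiser(x))
        x = x - mu * (data_grad + red_grad)
    return x

# Toy usage: inpaint a smooth random image observed through a 50% pixel mask.
rng = np.random.default_rng(0)
truth = gaussian_filter(rng.standard_normal((64, 64)), sigma=2.0)
mask = (rng.random((64, 64)) < 0.5).astype(float)
y = mask * truth + 0.01 * rng.standard_normal((64, 64))
x_hat = red_restore(y, mask)
```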

Sander Dieleman (@sedielem)'s Twitter Profile Photo

Diffusion models have analytical solutions, but they involve sums over the entire training set, and they don't generalise at all. They are mainly useful to help us understand how practical diffusion models generalise.

Nice blog + code by Raymond Fan: rfangit.github.io/blog/2025/opti…
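
The "analytical solution" here is concrete: over a finite training set, the MMSE denoiser at noise level sigma is a softmax-weighted average of all training points, which is exactly the sum over the training set that memorises rather than generalises. A small NumPy illustration of that closed form (toy data, my own example, not from the linked blog):

```python
# Closed-form (MMSE) denoiser for an empirical data distribution: a softmax-
# weighted average over the training set. Toy illustration only.
import numpy as np

def optimal_denoiser(x_noisy, train_data, sigma):
    """x_noisy: (d,) noisy point; train_data: (n, d); returns E[x0 | x_noisy]
    assuming x0 is drawn uniformly from the training set and corrupted with
    Gaussian noise of standard deviation sigma."""
    sq_dists = ((train_data - x_noisy) ** 2).sum(axis=1)
    log_w = -sq_dists / (2 * sigma ** 2)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()           # posterior over training points
    return w @ train_data  # posterior mean = optimal denoised estimate

# Toy usage: the denoiser snaps a noisy query back toward nearby training points.
rng = np.random.default_rng(0)
train_data = rng.standard_normal((1000, 2))
x_noisy = train_data[0] + 0.5 * rng.standard_normal(2)
print(optimal_denoiser(x_noisy, train_data, sigma=0.5))
```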

Haitz Sáez de Ocáriz Borde (@ocariz__)'s Twitter Profile Photo

🚨 "Mathematical Foundations of Geometric Deep Learning", co-authored with Michael Bronstein

📚 Read the paper here: arxiv.org/abs/2508.02723

🧠 We review the mathematical background necessary for studying Geometric Deep Learning.

#GDL #mathematics #deeplearning #AI
