Zulfikar Ramzan (He / Him) (@zulfikar_ramzan) 's Twitter Profile
Zulfikar Ramzan (He / Him)

@zulfikar_ramzan

CTO Point Wild. Former CTO @RSASecurity. MIT PhD. Intelligent Safety, Cybersecurity, Crypto(graphy), ML / AI. My tweets & opinions. he / him

ID: 55309639

Link: http://www.pointwild.com · Joined: 09-07-2009 18:01:17

3.3K Tweets

4.4K Followers

2.2K Following

Manoel (@manoelribeiro) 's Twitter Profile Photo

Do reasoning models have real “Aha!” moments—mid-chain realizations where they intrinsically self-correct? In a new pre-print, “The Illusion of Insight in Reasoning Models," led by Liv d'Aliberti, we provide strong evidence that they do not! 📜: arxiv.org/abs/2601.00514

James Zou (@james_y_zou) 's Twitter Profile Photo

Today in Nature Medicine we report that AI can predict 130 diseases from 1 night of sleep🛌 We trained a foundation model (#SleepFM) on 585K hours of sleep recordings from 65K people—brain, heart, muscle & breathing signals combined. AI learns the language of sleep🧵

Omar Khattab (@lateinteraction) 's Twitter Profile Photo

The mathematician William P. Thurston (1946-2012) perhaps expressed best the thing I mean by 'higher-level' abstractions instead of vibe coding. In 1994, he wrote:

> In large computer programs, a tremendous proportion of effort must be spent on myriad compatibility issues:

clhong1248 (@carinalhong) 's Twitter Profile Photo

I promise it's a good blog. Come have a read! Exciting math and fun commentaries written by world-famous mathematicians like Ken Ono and Evan Chen and Lean gurus like Kenny Lau and Jujian Zhang. Proofs by AxiomProver, an engineering effort built fast and executed relentlessly.

Chad Jones (@chadjonesecon) 's Twitter Profile Photo

"AI and Our Economic Future" New paper in preparation for the Journal of Economic Perspectives ==> accessible to a broad audience. web.stanford.edu/~chadj/AIandEc…

Anthropic (@anthropicai) 's Twitter Profile Photo

New on the Anthropic Engineering Blog: We give prospective performance engineering candidates a notoriously difficult take-home exam. It worked well—until Opus 4.5 beat it. Here's how we designed (and redesigned) it: anthropic.com/engineering/AI…

Deedy (@deedydas) 's Twitter Profile Photo

Anthropic just dropped the best blog on how to do software engineering interviews in an AI world. It explains:
1. How AI easily beat humans in their old interview problem
2. Why they still need SWEs when AI can do most things
3. How to design interviews AI can't beat (make them weirder)

Ken Ono (@kenono691) 's Twitter Profile Photo

1/ ANNOUNCING 🎬 MARYAM: The Mirror and the Map, a feature film about Fields Medalist Maryam Mirzakhani (the first woman to win the Fields Medal). After The Man Who Knew Infinity, writer/director Matt Brown, Manjul Bhargava & I are reuniting as associate producers.

Anthropic (@anthropicai) 's Twitter Profile Photo

New Anthropic Fellows research: How does misalignment scale with model intelligence and task complexity? When advanced AI fails, will it do so by pursuing the wrong goals? Or will it fail unpredictably and incoherently—like a "hot mess?" Read more: alignment.anthropic.com/2026/hot-mess-…

Boaz Barak (@boazbaraktcs) 's Twitter Profile Photo

Recommend watching (or at least read the TL;DR). What's happening in math is happening in all other fields, though with different time offsets.

Sanjeev Arora (@prfsanjeevarora) 's Twitter Profile Photo

These mathematicians seem unaware that a single LLM call cannot solve a difficult problem, let alone an open problem: a single call provides too little total compute. Difficult problems require orchestrated pipelines with many calls, e.g., as in this paper

Thomas Lin (@7homaslin) 's Twitter Profile Photo

It was an intellectual joyride working with Terry Tao on his first popular math book SIX MATH ESSENTIALS. In stores Oct 27, now available for preorder: quantabooks.org

Francesco Cagnetta (@fraccagnetta) 's Twitter Profile Photo

🚨 We derive data-limited neural scaling exponents directly from measurable corpus statistics. No synthetic data models, only two ingredients:
- decay of token-token correlations with separation;
- decay of next-token conditional entropy with context length.

Surya Ganguli (@suryaganguli) 's Twitter Profile Photo

Our new paper "Deriving neural scaling laws from the statistics of natural language" arxiv.org/abs/2602.07488 led by Francesco Cagnetta & Allan Raventós w/ Matthieu Wyart makes a breakthrough! We can predict data-limited neural scaling law exponents from first principles using the

Phillip Isola (@phillip_isola) 's Twitter Profile Photo

Our grad-level "Deep Learning" course (MIT's 6.7960) is now freely available online through OpenCourseWare: ocw.mit.edu/courses/6-7960… Lecture videos, psets, and readings are all provided. Had a lot of fun teaching this with Sara Beery and Jeremy Bernstein!

Alex Kontorovich (@alexkontorovich) 's Twitter Profile Photo

The Simons Foundation has put their treasure trove of videos on YouTube (they’ve been available for a decade from the Simons website, but I kept begging them to move it to a platform with many more eyeballs…). Here’s Eli Stein interviewed by Charlie Fefferman:

Anthony Leverrier (@letonyo) 's Twitter Profile Photo

Amazing new benchmark for quantum computing. Now we just need to figure out how to decode qLDPC codes very well! arxiv.org/abs/2602.11457