Romeo Dean (@romeovdean)'s Twitter Profile
Romeo Dean

@romeovdean

Harvard CS | @AI_Futures_

ID: 1907913991244189697

Link: http://ai-2027.com · Joined: 03-04-2025 21:52:11

41 Tweets

259 Followers

49 Following

AI Futures Project (@ai_futures_)'s Twitter Profile Photo

Scott Alexander has written about METR's results on the trend of models doing tasks with increasing time horizons, and how it fits into our AI 2027 timelines forecasts.

<a href="/slatestarcodex/">Scott Alexander</a> has written about <a href="/METR_Evals/">METR</a>'s results on the trend of models doing tasks with increasing time horizons, and how it fits into our AI 2027 timelines forecasts.
Eli Lifland (@eli_lifland)'s Twitter Profile Photo

Interested in work building upon AI 2027, for example exploring other possible endings or researching the implications for AI policy? Apply to Daniel Kokotajlo's and my ML Alignment & Theory Scholars stream to work with us over the summer!

Romeo Dean (@romeovdean)'s Twitter Profile Photo

Loved this conversation about the promise of RL on LLMs! I'm excited that Christy wrote up his thoughts on scaling RL to more complex and longer-horizon tasks and how this might run into bottlenecks. I appreciate this kind of engagement with AI 2027, and have added it to the list of…

Daniel Kokotajlo (@dkokotajlo)'s Twitter Profile Photo

It's still way too early to call, of course, but new data seems to be consistent with AI 2027's controversial superexponential prediction:

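To make the distinction concrete, here is a toy Python contrast between an exponential curve (horizon doubles every fixed interval) and a superexponential one (each successive doubling takes less time than the last). The parameters, including the 10% shrink factor per doubling, are illustrative assumptions, not AI 2027's fitted values.

```python
def horizon_exponential(months: float, h0: float = 1.0, doubling: float = 7.0) -> float:
    """Horizon doubles every fixed `doubling` months."""
    return h0 * 2 ** (months / doubling)

def horizon_superexponential(months: float, h0: float = 1.0,
                             first_doubling: float = 7.0, shrink: float = 0.9) -> float:
    """Each successive doubling takes `shrink` times as long as the previous one."""
    h, t, d = h0, 0.0, first_doubling
    while t + d <= months:
        t, h, d = t + d, h * 2, d * shrink
    return h * 2 ** ((months - t) / d)  # partial progress through the current doubling

for m in (0, 12, 24, 36):
    print(m, round(horizon_exponential(m), 1), round(horizon_superexponential(m), 1))
```

The two curves start out nearly indistinguishable and only diverge sharply later, which is why early data points can be "consistent with" superexponential growth without confirming it.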
Romeo Dean (@romeovdean)'s Twitter Profile Photo

Interesting report, but it's important to realize a few things about these numbers: (1) Even '100%' self-sufficient can mean being 4-5x behind the US, since US companies are investing 4-5x more $. (2) What also matters is cost efficiency (GPUs per $); China is not likely to…
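A back-of-the-envelope Python version of points (1) and (2) combined: relative compute is investment times cost efficiency. The 4-5x investment gap is from the tweet; the GPUs-per-dollar figures below are hypothetical placeholders only.

```python
# Relative compute = (investment in $) x (GPUs per $).
us_investment, cn_investment = 5.0, 1.0            # ~4-5x gap, per the tweet
us_gpus_per_dollar, cn_gpus_per_dollar = 1.0, 0.5  # hypothetical efficiency gap

us_compute = us_investment * us_gpus_per_dollar
cn_compute = cn_investment * cn_gpus_per_dollar
print(f"US/China compute ratio: {us_compute / cn_compute:.0f}x")  # -> 10x
```

Under these assumed numbers, an investment gap and a cost-efficiency gap multiply, so full self-sufficiency alone doesn't close the compute gap.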

Daniel Kokotajlo (@dkokotajlo)'s Twitter Profile Photo

Good post by Ryan Greenblatt and Eli Lifland: If you think achieving the superhuman AI researcher milestone wouldn't speed things up much, you should probably also think that making human employees dumber and slower wouldn't slow things down much. lesswrong.com/posts/hMSuXTsE…

AI Digest (@aidigest_)'s Twitter Profile Photo

At the end of 2024, we ran our AI 2025 survey. We collected >400 people's forecasts on key signals of AI progress by the end of 2025. We've now visualized the forecasts. Let's see how they're holding up so far 🧵

Ryan Greenblatt (@ryanpgreenblatt)'s Twitter Profile Photo

The key question is whether you can find improvements which work at large scale using mostly small experiments, not whether the improvements work just as well at small scale. The Transformer, MoE, and MQA were all originally found at tiny scale (~1 hr on an H100). 🧵

Tom Davidson (@tomdavidsonx)'s Twitter Profile Photo

Dwarkesh points out there's only enough compute to run 10 million AGIs -- far fewer than the world population. But there are <10,000 frontier AI R&D researchers. So we could get a very rapid intelligence explosion, with algorithms soon efficient enough to run hundreds of billions of AGIs.
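The arithmetic behind the tweet, spelled out in Python using only the figures it cites (the "hundreds of billions" target is taken at its lower end, 100 billion):

```python
# 10 million AGI copies vs. <10,000 frontier researchers is already a
# >1,000x expansion of the research workforce; reaching 100 billion copies
# on the same compute would need ~10,000x better inference efficiency.
agi_copies = 10_000_000
frontier_researchers = 10_000
print(f"workforce expansion: {agi_copies // frontier_researchers:,}x")  # 1,000x

target_copies = 100_000_000_000  # lower end of "100s of billions"
print(f"efficiency gain needed: {target_copies // agi_copies:,}x")      # 10,000x
```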