Daniel Kokotajlo (@dkokotajlo)'s Twitter Profile
Daniel Kokotajlo

@dkokotajlo

ID: 1726760827452350464

Link: https://ai-2027.com/ · Joined: 21-11-2023 00:34:26

575 Tweets

15.15K Followers

219 Following

The Curve (@thecurveconf)

Main application window for The Curve ends this Friday, 8/22! 

We’re super excited to get this excellent and ~eclectic group of people in the same room, discussing the toughest questions about the future of AI.

Application + more about who will be there below ⬇️
Lennart Heim (@ohlennart)

The speculated B30A would be a really good chip. “50% off” is false reassurance.

- ½ B300 performance, ½ price = same value (just buy 2x)
- Well above (12x!) export control thresholds
- Outperforms all Chinese chips
- Delivers 12.6x the training perf of the H20
- Better than H100
1/
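The "same value" point above is just performance-per-dollar arithmetic. A minimal sketch, using normalized placeholder numbers rather than real prices or benchmark figures:

```python
# Illustrative perf-per-dollar arithmetic for the "50% off" claim.
# All values are normalized placeholders, not real prices or benchmarks.
B300_PERF = 1.0    # normalize B300 training performance to 1.0
B300_PRICE = 1.0   # normalize B300 price to 1.0

b30a_perf = 0.5 * B300_PERF    # speculated: half the performance...
b30a_price = 0.5 * B300_PRICE  # ...at half the price

# Value = performance per dollar; halving both leaves it unchanged.
assert b30a_perf / b30a_price == B300_PERF / B300_PRICE

# Buying two B30As recovers full B300 performance at the same total cost.
assert 2 * b30a_perf == B300_PERF
assert 2 * b30a_price == B300_PRICE
```

Since the ratio is unchanged, the discount buys no safety margin: a buyer who wants B300-level performance simply purchases twice as many chips.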
Factory (@factoryai)

We’re hosting a historic hackathon with METR, inspired by their latest paper that measured the real-world impact of AI coding tools.

Here's how it works:
🤖 Half of participants will build with AI tools
👩‍💻 Half of participants will build without AI tools

Judging is blind

Daniel Kokotajlo (@dkokotajlo)

Had a good conversation with Ryan Greenblatt yesterday about AGI timelines. I recommend and directionally agree with his take here; my bottom-line numbers are somewhat different (median ~EOY 2029) as he describes in a footnote. lesswrong.com/posts/2ssPfDpd…

Steven Byrnes (@steve47285)

There’s a funny thing where economics education paradoxically makes people DUMBER at thinking about future AI.

Econ textbooks teach concepts & frames that are great for most things, but counterproductive for thinking about AGI.

Here are 4 examples. Longpost:

THE FIRST PIECE of

Dean W. Ball (@deanwball)

I think we live in a perpetual state of traditional media telling us that the pace of ai progress is slowing 

These pieces were published during a span that I would describe as the most rapid pace of progress I’ve ever witnessed in LLMs (GPT-4 Turbo -> GPT 5-Pro; remember: there
Daniel Kokotajlo (@dkokotajlo)

That's a lot of money. For context, I remember talking to a congressional staffer a few months ago who basically said that a16z was spending on the order of $100M on lobbying and that this amount was enough to make basically every politician think "hmm, I can raise a lot more if

Liv (@livgorton)

What if adversarial examples aren't a bug, but a direct consequence of how neural networks process information?

We've found evidence that superposition – the way networks represent many more features than they have neurons – might cause adversarial examples.
Henry is cleaning up my knowledge base 🔄 (@sleight_henry)

🚀 Applications now open: Constellation's Astra Fellowship 🚀

We're relaunching Astra — a 3-6 month fellowship to accelerate AI safety research & careers.

Alumni Eli Lifland & Romeo Dean co-authored AI 2027 and co-founded AI Futures Project with their Astra mentor Daniel Kokotajlo!
METR (@metr_evals)

We estimate that Claude Opus 4.1 has a 50%-time-horizon of around 1 hr 45 min (95% confidence interval of 50 to 195 minutes) on our agentic multi-step software engineering tasks. This estimate is lower than the current highest time-horizon point estimate of around 2 hr 15 min.

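Conceptually, a "50%-time-horizon" is the task duration at which a model's success rate crosses 50%. A toy sketch of that idea (not METR's actual methodology, code, or data; the curve shape and slope here are made up for illustration):

```python
# Toy illustration of a 50% time horizon: model success probability as a
# decreasing function of task duration, then read off the duration where
# success probability equals 0.5. Not METR's actual model or data.

def success_prob(minutes: float, horizon: float, slope: float = 1.0) -> float:
    """Success curve that equals 0.5 exactly at minutes == horizon."""
    return 1.0 / (1.0 + (minutes / horizon) ** slope)

# Hypothetical fitted horizon of 105 minutes (~1 hr 45 min, matching the
# point estimate quoted above; the slope is an arbitrary placeholder).
horizon = 105.0
assert abs(success_prob(horizon, horizon) - 0.5) < 1e-9

# Shorter tasks succeed more often than 50%; longer tasks less often.
assert success_prob(30, horizon) > 0.5 > success_prob(300, horizon)
```

The confidence interval quoted in the tweet (50 to 195 minutes) would then correspond to uncertainty in where that 50% crossing point lies.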
Miles Brundage (@miles_brundage)

The gap between AI capabilities that only a few AI companies (and a few of their partners) have and what the rest of the world has will increasingly NOT consist of a long time period when some people have a totally different base model (e.g. 6 months for GPT-4), but rather…

Miles Brundage (@miles_brundage)

🎯 

First it was the EAs out to get them, now it’s Elon.

The reality is just that most people think we should be careful about AI

x.com/emilydreyfuss/…
Daniel Kokotajlo (@dkokotajlo)

Having a big corporation come after you legally, even if they are just harassing you and not trying to actually get you imprisoned, must be pretty stressful and scary. (I was terrified last year during the nondisparagement stuff, and that was just the fear of what *might* happen,