Jonas Vollmer (@jonas_vollmer) 's Twitter Profile
Jonas Vollmer

@jonas_vollmer

COO @AI_Futures_, Macroscopic Ventures. Prev co-founded @AtlasFellow & @LongTermRisk.

ID: 223901092

linkhttps://ai-2027.com calendar_today07-12-2010 16:50:48

434 Tweet

2,2K Followers

906 Following

Leopold Aschenbrenner (@leopoldasch) 's Twitter Profile Photo

Virtually nobody is pricing in what's coming in AI. I wrote an essay series on the AGI strategic picture: from the trendlines in deep learning and counting the OOMs, to the international situation and The Project. SITUATIONAL AWARENESS: The Decade Ahead

Virtually nobody is pricing in what's coming in AI.

I wrote an essay series on the AGI strategic picture: from the trendlines in deep learning and counting the OOMs, to the international situation and The Project.

SITUATIONAL AWARENESS: The Decade Ahead
Daniel Colson (@danielcolson6) 's Twitter Profile Photo

1/ OpenAI, Google DeepMind, and Anthropic are aiming to build AGI, and they’re moving fast. My op-ed published today in TIME explores why policymakers in Washington need to wake up. Link in thread.

AI Digest (@aidigest_) 's Twitter Profile Photo

Is AGI just around the corner or is AI scaling hitting a wall? To make this discourse more concrete, we’ve created a survey for forecasting concrete AI capabilities by the end of 2025. Fill it out and share your predictions by end of year! bit.ly/ai-2025 🧵

Is AGI just around the corner or is AI scaling hitting a wall? To make this discourse more concrete, we’ve created a survey for forecasting concrete AI capabilities by the end of 2025. Fill it out and share your predictions by end of year! bit.ly/ai-2025 🧵
Kelsey Piper (@kelseytuoc) 's Twitter Profile Photo

Yeah, all right, let's talk about James Damore. It's been eight years, and I really doubt Harj (who was my boss at the time) is the only person for whom it was a formative experience. For those of you who have no recollection of any of this, either because you are wisely an

OpenAI (@openai) 's Twitter Profile Photo

Detecting misbehavior in frontier reasoning models Chain-of-thought (CoT) reasoning models “think” in natural language understandable by humans. Monitoring their “thinking” has allowed us to detect misbehavior such as subverting tests in coding tasks, deceiving users, or giving

Detecting misbehavior in frontier reasoning models

Chain-of-thought (CoT) reasoning models “think” in natural language understandable by humans. Monitoring their “thinking” has allowed us to detect misbehavior such as subverting tests in coding tasks, deceiving users, or giving
Peter Wildeford 🇺🇸🚀 (@peterwildeford) 's Twitter Profile Photo

This WSJ article, if true, has some real bombshells about OpenAI and Sam Altman 💣‼️ It is alleged that Sam Altman clearly lied multiple times to a variety of people Such as Altman lying to the board about which components of GPT-4 had been safety tested

This WSJ article, if true, has some real bombshells about <a href="/OpenAI/">OpenAI</a> and <a href="/sama/">Sam Altman</a> 💣‼️

It is alleged that Sam Altman clearly lied multiple times to a variety of people

Such as Altman lying to the board about which components of GPT-4 had been safety tested
Eli Lifland (@eli_lifland) 's Twitter Profile Photo

I’ve been excited to see all the discussion of AI 2027, positive and negative. Here I'll respond to some themes of the criticisms we've gotten (this is all my opinion, not the team’s): (1) It’s just speculative sci-fi (2) 2027 is too soon (3) We’re pushing the doomer agenda 🧵

Jonas Vollmer (@jonas_vollmer) 's Twitter Profile Photo

Pet peeve: saying that AI will likely be deceptively misaligned bc it's "hard to rule out." If you think it's >50% likely, you should have a *specific* story for why the training process favors deceptive misalignment, and you should be able to explain that in simple language