Malo Bourgon (@m_bourgon)'s Twitter Profile
Malo Bourgon

@m_bourgon

CEO at @MIRIBerkeley, and decent boulderer

ID: 57558623

Link: http://malob.me
Joined: 17-07-2009 05:05:54

571 Tweets

963 Followers

130 Following

Malo Bourgon (@m_bourgon):

One of the most thoughtful essays I've read in a long time on AI x-risk. To the extent there's a canon of required reading on the subject, this almost certainly should be a part of it.

TIME (@time):

Google DeepMind CEO Demis Hassabis hopes that competing nations and companies can find ways to set aside their differences and cooperate on AI safety: "It's in everyone's self-interest to make sure that goes well."

Read his TIME100 interview: ti.me/4jbCz34
Jeff Clune (@jeffclune):

I greatly enjoyed “The Spectrum of AI Risks” panel at the Singapore Conference on AI. Thanks Tegan Maharaj (@teganmaharaj.bsky.social) for great moderating, Max Tegmark for the invitation, and the organizers and other panelists for a great event!

PS. Do I really have sad resting panel face?😐
MIRI (@miriberkeley):

New AI governance research agenda from MIRI’s Technical Governance Team. We lay out our view of the strategic landscape and actionable research questions that, if answered, would provide important insight on how to reduce catastrophic and extinction risks from AI. 🧵1/10

Eliezer Yudkowsky ⏹️ (@esyudkowsky):

Nate Soares and I are publishing a traditional book: _If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All_. Coming in Sep 2025.

You should probably read it! Given that, we'd like you to preorder it! Nowish!
MIRI (@miriberkeley):

📢 Announcing IF ANYONE BUILDS IT, EVERYONE DIES

A new book from MIRI co-founder Eliezer Yudkowsky ⏹️ and president Nate Soares ⏹️, published by Little, Brown and Co.

🗓️ Out September 16, 2025

Details and preorder 👇

Chuck Grassley (@chuckgrassley):

Too many ppl working in AI feel they cant speak up when something is wrong

Introd bipart legislation 2day 2 ensure whistleblower protections cover those developing + deploying AI

TRANSPARENCY BRINGS ACCOUNTABILITY

MIRI (@miriberkeley):

Eliezer Yudkowsky and Nate Soares have written a book aimed at raising the alarm about superintelligent AI for the widest possible audience: If Anyone Builds It, Everyone Dies.

The book is coming out Sep. 16, and you can preorder it today. intelligence.org/2025/05/15/yud…
Yishan (@yishan):

I got to read a draft of this book (and I wrote a blurb!) and it's very good. The topic of AI alignment is complex and subtle, and this is the best unified summary of it I've read. Many of the online resources are scattered and piecemeal, and a lot of Eliezer's explanations

Eliezer Yudkowsky ⏹️ (@esyudkowsky):

If Anyone Builds It, Everyone Dies now has preorders for audiobooks (Audible, Libro). The hardcover can even be preordered in Canada itself!

Eliezer Yudkowsky ⏹️ (@esyudkowsky):

Humans can be trained just like AIs. Stop giving Anthropic shit for reporting their interesting observations unless you never want to hear any interesting observations from AI companies ever again.

Harlan Stewart (@humanharlan):

I just learned that existential risk from AI is actually a psyop carefully orchestrated by a shadowy cabal consisting of all of the leading AI companies, the three most cited AI scientists of all time, the majority of published AI researchers, the Catholic Church, RAND, the

𝖦𝗋𝗂𝗆𝖾𝗌 ⏳ (@grimezsz):

Long story short I recommend the new book by Nate and Eliezer. I feel like the main thing I ever get cancelled/in trouble for is talking to people with ideas that other people don't like. And I feel a big problem in our culture is that everyone feels they must ignore

MIRI (@miriberkeley):

Some huge book endorsements today — from retired three-star general Jack Shanahan, former DHS Under Secretary Suzanne Spaulding, security expert Bruce Schneier, Nobel laureate Ben Bernanke, former US NSC Senior Director Jon Wolfsthal, geneticist George Church, and more!
