MIRI (@miriberkeley)'s Twitter Profile
MIRI

@miriberkeley

MIRI exists to maximize the probability that the creation of smarter-than-human intelligence has a positive impact.

ID: 1568239549

https://intelligence.org · Joined 04-07-2013 13:32:15

1.1K Tweets

39.39K Followers

99 Following

Eliezer Yudkowsky ⏹️ (@esyudkowsky):

If you thought that almost no-one in public life would dare agree that wiping out humanity ought to be further discussed before implementation, think again. (Link below.)

Malo Bourgon (@m_bourgon):

My favorite reaction I’ve gotten when sharing some of the blurbs we’ve recently received for Eliezer and Nate’s forthcoming book: If Anyone Builds It, Everyone Dies

From someone who works on AI policy in DC:

Thomas Larsen (@thlarsen):

Lots of people in AI, and especially AI policy, seem to think that aligning superintelligence is the most important issue of our time, and that failure could easily lead to extinction -- like what happened in AI 2027. But they don’t mention this fact in public because it sounds

Rob Wiblin (@robertwiblin):

A little surprised to see Bruce Schneier blurbing 'If Anyone Builds It, Everyone Dies'.

When I last spoke with him in 2019 he didn't take extinction from AI seriously, but I guess a lot has happened!

Also — Ben Bernanke?!

Rob Bensinger ⏹️ (@robbensinger):

Senior White House officials, a retired three-star general, a Nobel laureate, and others come out to say that you should probably read Eliezer Yudkowsky and Nate Soares' "If Anyone Builds It, Everyone Dies". Preorders are live.

Rob Bensinger ⏹️ (@robbensinger):

AI companies are currently actively trying to build smarter-than-human AI. If they succeed, then every man, woman, and child on Earth is probably going to die. This is actually happening. I, Robby Bensinger, am genuinely scared for myself, my loved ones, and the rest of you over

MIRI (@miriberkeley):

We're hosting two virtual events, open to everyone who pre-orders the book:

1. A chat and Q&A with Nate Soares ⏹️ and special guest Tim Urban. (Aug 10)
2. A Q&A with Eliezer Yudkowsky ⏹️ and Nate Soares ⏹️. (Sep)

Register at ifanyonebuildsit.com/events!

Nate Soares ⏹️ (@so8res):

Event details are at IfAnyoneBuildsIt.com/events. Eliezer and I will also do another Q&A for preorderers in September. And FYI, the book is 25% off on the Barnes And Noble website (with a free account) until Jul 11: barnesandnoble.com/w/if-anyone-bu…

Eliezer Yudkowsky ⏹️ (@esyudkowsky):

I pointedly refuse to use the invalid water-use argument against AI. I would like ASI notkilleveryoneism to be, visibly, the Side That Sticks To Only Valid Arguments. But I am fair, and invite others to the game. e/accs, what widely popular pro-AI argument do you reject?

vitalik.eth (@vitalikbuterin):

A good book, worth reading to understand the basic case for why many people, even those who are generally very enthusiastic about speeding up technological progress, consider superintelligent AI uniquely risky.