PauseAINYC (@pauseainyc)'s Twitter Profile
PauseAINYC

@pauseainyc

Working to pause the development of the most general AI models until they are proven safe for all

ID: 1797574158223577089

Link: http://linktr.ee/pauseainyc · Joined: 03-06-2024 10:21:19

109 Tweets

46 Followers

63 Following

Trevor Bingham (@22trevorbingham)'s Twitter Profile Photo

People who want to accelerate our approach to the singularity often try to overwhelm their opponents by making it seem that technological progress is inevitable and unstoppable. This is nothing more than a simple debating ploy. Of course we can stop technology advancement. Dead in

Coalition for a Baruch Plan for AI (@baruchplanforai)'s Twitter Profile Photo

This is absolutely crazy. Declaring to invest $500 billion to build Superintelligence constitutes an immense risk and gamble for Humanity. Unlike the term "Artificial General Intelligence", Superintelligence is a very precisely-defined term describing AIs able to improve

AI Notkilleveryoneism Memes ⏸️ (@aisafetymemes)'s Twitter Profile Photo

Stability founder Emad, who has a 50% p(doom): "We are clearly in an intelligence takeoff scenario"

Sam Altman 3 days ago: "a fast takeoff is more plausible than i thought a couple years ago"

Foom aka fast/hard/sharp takeoff means the world as you know it ends suddenly

Ruben Bloom (Ruby) (@ruben_bloom)'s Twitter Profile Photo

Just a few years ago: AGI is so far away, it's too early to worry about safety

Now: AGI is so soon, we've got limited time for safety else the Wrong People will get there first!

Linus Ekenstam – eu/acc (@linusekenstam)'s Twitter Profile Photo

I just want to be very clear (or as clear as I can be)

Grok is giving me hundreds of pages of detailed instructions on how to make chemical weapons of mass destruction

I have a full list of suppliers. Detailed instructions on how to get the needed materials...

AI Notkilleveryoneism Memes ⏸️ (@aisafetymemes)'s Twitter Profile Photo

God the current situation is fucking insane

AI companies: "our models are on the cusp of being able to meaningfully help novices create known biological threats"

Vice President of the US: fuck any safety guardrails on AI whatsoever DEMOCRATIZE DESKTOP SUPER-EBOLA PRINTERS 🚀

Buck Shlegeris (@bshlgrs)'s Twitter Profile Photo

Announcing ControlConf: The world’s first conference dedicated to AI control - techniques to mitigate security risks from AI systems even if they’re trying to subvert those controls. March 27-28, 2025 in London. 🧵

Max Winga (@maxwinga)'s Twitter Profile Photo

To the many "...but China!" responses I've received, I raise you one "Chinese ambassador says we need to cooperate on global AI governance"

Great to see public acknowledgment of this need! Hopefully this is accompanied by actions towards verifiable international AI governance.

Rob Bensinger ⏹️ (@robbensinger)'s Twitter Profile Photo

"Building a new intelligent species that's vastly smarter than humans is a massively dangerous thing to do" is not a niche or weird position, and "we're likely to actually build a thing like that in the next decade" isn't a niche position anymore either.

ControlAI (@ai_ctrl)'s Twitter Profile Photo

What is an intelligence explosion, how could it happen, and what would its consequences be? We sought to explain: controlai.news/p/from-intelli…

Existential Risk Observatory ⏸ (@xrobservatory)'s Twitter Profile Photo

In climate change, there is a field of scientists thinking about little else than our safety. That's why society can count on them.

In AI, some scientists are too busy creating the machine god themselves (from our tax money) to be bothered with safety. Unfortunately, these are

PauseAI ⏸ (@pauseai)'s Twitter Profile Photo

Recent research found that large language model Claude is likely to know when it's being tested, and can pretend to be less capable than it actually is in order to be deployed.

Yet more evidence that, as AI models become more powerful, our ability to robustly evaluate their

For Humanity: AI Risk Podcast ⏹️ (@forhumanitypod)'s Twitter Profile Photo

We are the ants. AGI is the mailman. His goals are different than ours. He stomps us without even noticing. AI risk in a nutshell. This stuff is not hard to understand.