Max Winga (@maxwinga)'s Twitter Profile
Max Winga

@maxwinga

Maximizing our (slim) chance at survival with AI safety communications @ai_ctrl
Previously - AI safety researcher @ConjectureAI, UIUC Physics 2024

DMs open!

ID: 1793759663315443712

Link: https://maxwinga.substack.com/ · Joined: 23-05-2024 21:43:37

798 Tweets

1.1K Followers

324 Following

Max Winga (@maxwinga)'s Twitter Profile Photo

Excited to watch the full release of this documentary! The public deserves to know the extent of what Silicon Valley is trying to build...and what they're risking.

David Krueger (@davidskrueger)'s Twitter Profile Photo

As I predicted, CEOs are having to back away from rabidly racing to superintelligence. Altman acknowledged the risk (again). Suleyman denounced those who seek to replace humanity. This is PR backpedalling. They are still building it. This is still evil and dangerous.

Max Winga (@maxwinga)'s Twitter Profile Photo

GPT 5.1 is so sycophantic even right out of the box (no memory, and not even the "sycophant" modes called "friendly" and "quirky"). This Thanksgiving I'll be grateful that I've got an inherent disgust reaction to this, and I pray for the millions of people who don't.

Anthony Aguirre (@anthonynaguirre)'s Twitter Profile Photo

Superintelligence, if we develop it using anything like current methods, would not be under meaningful human control. That's the bottom line of a new study I've put out entitled Control Inversion (link in second post). Many experts I talk to who take superintelligence (real,

Max Winga (@maxwinga)'s Twitter Profile Photo

It seems to me that one of the only solutions to the surge of bots on the internet will be a loss of anonymity (not that we really have much anymore as it is). The best social platforms will require ID and photo verification of users and lock off human-only portions of the web.

ControlAI (@ai_ctrl)'s Twitter Profile Photo

AI godfather Geoffrey Hinton explains why he signed the call to ban superintelligence, which now has over 120,000 supporters. "If we know we can't do it safely, we should stop. And maybe if that knowledge is widely percolated to the public, we will be able to stop."

Max Winga (@maxwinga)'s Twitter Profile Photo

Thanks Somaya Bryant for quoting me in BBC News Arabic! When AI CEOs talk about a 25% chance of human extinction from superintelligent AI, they mean the death of every person on Earth. Everyone deserves a say, and every country can contribute to banning this dangerous technology.

ControlAI (@ai_ctrl)'s Twitter Profile Photo

🎥 Watch: Conjecture CEO Connor Leahy's opening statement before the Arizona House. Connor Leahy warns that superintelligence poses a risk of human extinction and argues that we need to prohibit its development.

Luke McNally (@pseudomoaner)'s Twitter Profile Photo

96 UK politicians now backing binding ASI regulation through ControlAI. The race that matters most is between political momentum and compute scaling.

ControlAI (@ai_ctrl)'s Twitter Profile Photo

Not the Christmas cards you were hoping for. New system cards from OpenAI, Anthropic, Google and xAI indicate AIs are becoming more capable in dangerous domains such as biological weapons and automating AI research. Read more in our latest article! controlai.news/p/dangerous-ai…

Max Winga (@maxwinga)'s Twitter Profile Photo

Middle powers have little to gain and everything to lose from ASI development in superpower countries. They can form a coalition capable of deterring superpowers from building superintelligence, while preventing private companies within their jurisdictions from doing so.

ControlAI (@ai_ctrl)'s Twitter Profile Photo

🎥 NEW: Conjecture CEO Connor Leahy testifies on the danger of superintelligence to a Canadian House of Commons committee. Connor Leahy says that just as countries have tackled global threats like the ozone hole, they should agree to prohibit the development of superintelligence.

ControlAI (@ai_ctrl)'s Twitter Profile Photo

🎥 Watch: ControlAI's Max Winga explains how AI companies plan to develop superintelligence by initiating a dangerous intelligence explosion.

ControlAI (@ai_ctrl)'s Twitter Profile Photo

Earlier this week, the chief scientist of one of the largest AI companies said that recursively self-improving AI is the "ultimate risk". Yet Anthropic's own CEO has suggested doing this, while another employee has said they want Claude n to build Claude n+1 so they can go home.

Lord (David) Alton (@davidaltonhl)'s Twitter Profile Photo

I’m pleased to support ControlAI’s campaign calling for binding guardrails on advanced AI, including superintelligence. This cross-party campaign now has 100+ parliamentary supporters, showing the broad support for action on AI. controlai.com/statement theguardian.com/technology/202…

Max Winga (@maxwinga)'s Twitter Profile Photo

Thrilled to see our campaign featured prominently in The Guardian! Before ControlAI forged the path, people told us it would be impossible to get lawmakers to talk about AI extinction risk and superintelligence. Now we have over 100, and this is just the start.