Anthony Aguirre (@anthonynaguirre) 's Twitter Profile
Anthony Aguirre

@anthonynaguirre

Physicist & cosmologist at UCSC. Co-Founder of Future of Life Institute, Foundational Questions Institute, and Metaculus. Apple out of the box. Pro-human.

ID: 722842576479322114

Link: http://anthony-aguirre.com · Joined: 20-04-2016 17:41:14

828 Tweets

2.2K Followers

130 Following

MIRI (@miriberkeley) 's Twitter Profile Photo

New AI governance research agenda from MIRI’s Technical Governance Team. We lay out our view of the strategic landscape and actionable research questions that, if answered, would provide important insight on how to reduce catastrophic and extinction risks from AI. 🧵1/10
Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

Great critique here, which applies to other AI companies’ frameworks as well. The fact that a model can't quite nail the 17th of 29 steps in holding someone's hand through reconstituting smallpox or whatever does not mean that model is safe to deploy! There should be real risk

Max Tegmark (@tegmark) 's Twitter Profile Photo

This Singapore conference was an amazing AI safety comeback after the Paris flop: great consensus between a who's who from the US, China, top companies, AISI's, etc on what safety research needs to get done: aisafetypriorities.org
Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

I worry that through some trick of the mind, people who think humanity shouldn't build AGI or superintelligence anytime soon somehow convince themselves instead that we can't do so. It's uncomfortable to believe that we can but really should not, because that implies trying to

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

In general relativity as you approach a black hole singularity you go from uncomfortable to being ripped to shreds in about 0.3 seconds. Is that what he means by gentle?

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

AGI would eliminate/replace many, many, many jobs and roles. This was an interesting article on new jobs AGI could create. But I did not find that it decreased my pessimism. Some were jobs that the author assumed could not be done by AGI, but probably could. The more viable

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

The idea of a fundamentally truth-finding AI system is noble. But the route is not to build a giant black-box general-purpose AI and then try to hit it with a stick until it tells "the truth." LLMs are powerful but their fundamental operation is not truth-seeking. I'm not sure

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

Great take and example of one of the many forks in the road we will encounter with integrating AI into our society. I'd add that not just technological capabilities but also norms are very important here. We can and should demand social-technical and policy structures that serve

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

Great piece about why AGI is not inevitable. I'd love to see more pieces like this addressing not whether they can (because indeed they can) but whether they should build it. Moreover, there is an alternative path of building powerful AI tools rather than human-replacing AGI.

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

Good for Gemini 2.5. In my view, a system that is very high on capability and generality (which Gemini is), but low on autonomy is a feature rather than a bug. I'd prefer it if we had metrics that measure the degree to which AI systems empower and complement people; autonomy

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

This vision from Meta of loyal superintelligent AI assistants would be a nice one if it were not from Meta, and not using the word "superintelligence." (This word has a meaning, and it's not this.)

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

Astrophysics is not as packed with discoveries of brand-new objects as it once was, but this might be one: giant early-universe stars fueled by black holes! science.org/content/articl…

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

Had an interesting and wide-ranging conversation with Spencer Greenberg on the Clearer Thinking podcast about the fundamental tension between autonomy and control in AI systems, regulations on AI that actually make sense, and what it would take to decide as a civilization not to

Anthony Aguirre (@anthonynaguirre) 's Twitter Profile Photo

Does anyone have examples of AI capability metrics for which the upper envelope is not getting better with time (or saturated)? There is lots of talk of walls and disappointing returns to scale but the metrics I see don't seem to show this. Closest I see at