William MacAskill (@willmacaskill) 's Twitter Profile
William MacAskill

@willmacaskill

EA adjacent adjacent.

ID: 363005534

Website: http://www.williammacaskill.com · Joined: 27-08-2011 10:56:52

748 Tweets

62.62K Followers

1.1K Following

Robert Long (@rgblong) 's Twitter Profile Photo

1/ There’s a common framing that AI safety and AI welfare are inherently opponents. AI safety protects us, the humans; AI welfare protects them, the AIs. Fortunately, this is a false choice: many interventions are win-wins for both AI welfare and AI safety 🤖🤝🧠

Alexander Berger (@albrgr) 's Twitter Profile Photo

Really glad that Open Philanthropy was able to step in on short notice (<24h) to make sure Sarah Fortune's work on TB vaccines can continue x.com/albrgr/status/…

Matt 🔸 (@spacedoutmatt) 's Twitter Profile Photo

This person appears to be an active participant in the "Effective Altruist" movement—and a good reminder that hyper-rational political movements often end up funding lifesaving work on critical health issues

Benjamin Todd (@ben_j_todd) 's Twitter Profile Photo

Breaking: Nobel laureates, law professors and former OpenAI employees release a letter to CA & DE Attorneys General saying OpenAI's for-profit conversion is illegal, and betrays its charter. The letter details how the founders of OpenAI chose nonprofit control to ensure

Ethan Mollick (@emollick) 's Twitter Profile Photo

I don’t mean to be a broken record but AI development could stop at the o3/Gemini 2.5 level and we would have a decade of major changes across entire professions & industries (medicine, law, education, coding…) as we figure out how to actually use it. AI disruption is baked in.

Richard Chappell🔸 (@rychappell) 's Twitter Profile Photo

When thinking about effective altruism, many philosophers ask themselves, "What utilitarian-adjacent claims can I disagree with as a basis for dismissing EA?" instead of "What are the most minimal philosophical assumptions that can motivate doing more good, effectively?"

Frances Lorenz (@frances__lorenz) 's Twitter Profile Photo

Last night I had a nightmare that I didn't apply to EA Global: London, which is projected to be the biggest EAG ever, but luckily applications are still open 😌

Sigal Samuel (@sigalsamuel) 's Twitter Profile Photo

AI systems could become conscious. What if they hate their lives? Is it our duty to make sure they're happy? To make sure Claude is always enjoying spiritual bliss? My new piece, with insights from Robert Long, Jonathan Birch, Susan Schneider, and Kyle Fish vox.com/future-perfect…

Bentham's Bulldog (@benthamsbulldog) 's Twitter Profile Photo

Being at EAG is so inspiring. It's a rare case where you feel like people are actually willing to grapple with the horrors of the world and do something about it.

vitrupo (@vitrupo) 's Twitter Profile Photo

Demis Hassabis is calling on philosophers to step in. He says technologists shouldn't decide AGI's future alone. “We need a new Kant or Wittgenstein to help map out where society should go next.” Politics, ethics, and theology will be essential to navigating a post-AGI world.

Ryan Greenblatt (@ryanpgreenblatt) 's Twitter Profile Photo

If AIs could learn as efficiently as a bright 10 year old child, then shortly after this point AIs would likely be generally superhuman via learning on more data and compute than a human can. So, I don't expect human level learning and sample efficiency until very powerful AI.

William MacAskill (@willmacaskill) 's Twitter Profile Photo

Three hours! 😲 I find Toby has consistently fresh takes on AI, often quite different than mine - can't wait to listen to this!

Inference (@inferencemag) 's Twitter Profile Photo

Inference is hosting some of the world’s leading experts for a debate on the possibility and potential consequences of automated AI research. The debate will be hosted in London on July 1st. There are limited spaces available. Register your interest below

Rob Wiblin (@robertwiblin) 's Twitter Profile Photo

I asked Oxford philosopher Toby Ord to explain 'The Scaling Paradox': AI 'scaling' is one of the least efficient things on the planet, with costs rising as x²⁰ (!). Also how OpenAI accidentally put out a graph suggesting o3 was no better than o1 — plus revealed their latest