Jide (@jide_alaga)'s Twitter Profile
Jide

@jide_alaga

AI Governance @GovAI_ | Rooting for the better angels of our nature..

ID: 744242732223213568

Joined: 18-06-2016 18:57:49

452 Tweets

569 Followers

503 Following

Lennart Heim (@ohlennart)'s Twitter Profile Photo

The only race I endorse: who has the best AI safety framework? A chance for those outside big tech to shape AI safety standards by passing judgment. Fantastic work on grading AI safety frameworks by Jide, Jonas Schuett, and Markus Anderljung.

Arjun Panickssery is in London (@panickssery)'s Twitter Profile Photo

I read all ~3500 blog posts on Robin Hanson's blog Overcoming Bias and compiled an anthology of 125 key posts split into 5 categories and 19 sub-categories.

Thanks to Richard Ngo for sponsoring.
Chris Painter (@chrispainteryup)'s Twitter Profile Photo

Kind of wild to think we could live to see this, but for Dyson swarms around the sun. Somehow it doesn't feel like a big deal when it's telecom satellites, but the veneer that this is anything other than sci-fi made real will be shattered when it's solar panels around the sun.

Apollo Research (@apolloaisafety)'s Twitter Profile Photo

Frontier AI developers might want to make 'safety cases' - structured arguments that their systems won't cause catastrophic harm.

In our new report, we discuss how safety cases could address the possibility of 'scheming' - where AI systems covertly pursue misaligned goals.
Epoch AI (@epochairesearch)'s Twitter Profile Photo

1/ Are open-weight AI models catching up to closed models?

We did the most in-depth investigation to date on the gaps in performance and compute between open-weight and closed-weight AI models. Here's what we found:

🧵
Jide ๐Ÿ” (@jide_alaga) 's Twitter Profile Photo

Let's normalize this! "To quote Eisenhower, plans are worthless but planning is indispensable. Probably none of the sketches presented here will hold up in detail in the face of future evidence. But studying them can still be valuable for identifying important research areas..."

Miles Brundage (@miles_brundage)'s Twitter Profile Photo

Seems like a lot of folks in the Bay Area think it's easy to fix government without knowing much about it. This seems unlikely (to say the least) to people who know a lot about gov't, and only seems plausible if you incorrectly think there's little to even know.

Séb Krier (@sebkrier)'s Twitter Profile Photo

OpenAI comms: [underspecific hype-y 'big tings coming!!' pls like and subscribe]

Google comms: [corporate vagueness about Gemini3-0011 v2 FINAL.docx on Vertex available to 14 users]

GDM comms: [we have simulated a rat's brain capable of solving 4D chess, but we're not sure why]

Marius Hobbhahn (@mariushobbhahn)'s Twitter Profile Photo

Oh man :( We tried really hard to neither over- nor underclaim the results in our communication, but, predictably, some people drastically overclaimed them, and then based on that, others concluded that there was nothing to be seen here (see examples in thread). So, let me try

Lennart Heim (@ohlennart)'s Twitter Profile Photo

Too many interventions just focus on AI capabilities. Reality is: it will probably *diffuse* anyway (see this year's AI models).

We also need to build adaptation and resilience infrastructure and ensure that better tech diffuses *faster* and *wider*.
Caleb Watney (@calebwatney)'s Twitter Profile Photo

Major paper today on mirror bacteria and the risks to human, plant, and animal life.

(Truly wild) substance aside, it's neat to see effectively ~all the top scientists in an area come together for an in-depth exploration like this with a call for future discussion.
Yo Shavit (@yonashav)'s Twitter Profile Photo

Now that everyone knows about o3, and imminent AGI is considered plausible, I'd like to walk through some of the AI policy implications I see.

Keamo (@keamos_korner)'s Twitter Profile Photo

Another Christmas with both my parents alive and healthy 🥹… what more could I ask for ❤️🫶