
Sumaya Nur
@sumayanur_
Law | AI governance | Department of Science, Innovation & Technology
ID: 990989693020987392
30-04-2018 16:22:09
340 Tweets
1.1K Followers
639 Following

Safety cases are gaining traction as a tool for AI governance. But what does a frontier AI safety case look like in practice? Together with the @AISafetyInst, GovAI researchers @Arthur_Goemans_, Marie Davidsen Buhl, & Jonas Schuett developed a template: arxiv.org/abs/2411.08088



While advanced AI technology is being built primarily in Global North countries, its impacts are likely to be felt worldwide, and disproportionately so in those Global South countries with long-standing vulnerabilities. Cecil Yongo, Marie Iradukunda, Duncan Cass-Beggs, Aquila Hassan


New publication! Model evaluations are critical for ensuring AI safety. But who should be responsible for developing these evaluations? Our latest research explores the key challenges and proposes four development approaches. Lara Thurnherr Robert Trager oxfordmartin.ox.ac.uk/publications/w…



Super excited to announce the first workshop on technical AI governance, taking place at ICML in Vancouver this July! Save the date(s)! ✨ #ICML2025 ICML Conference



New Event! On the 5th anniversary of Meta's Oversight Board, we’re thrilled to welcome [email protected], an inaugural member of the Board, for a timely and critical conversation on AI governance and platform accountability in Oxford. Oxford Martin School Register eventbrite.co.uk/e/ai-accountab…



New report: @BenHarack, Robert Trager et al. explore a core challenge in international AI governance: how to *verify* compliance. They assess what’s technically feasible today, as well as the work that still needs to be done. Read more in Ben’s thread below ⬇️

🚨 New paper alert! 🚨 Are human baselines rigorous enough to support claims about "superhuman" performance? Spoiler alert: often not! Patricia Paskov and I will be presenting our spotlight paper at ICML next week on the state of human baselines + how to improve them!


