Sumaya Nur (@sumayanur_) 's Twitter Profile
Sumaya Nur

@sumayanur_

Law | AI governance | Department for Science, Innovation & Technology

ID: 990989693020987392

Joined: 30-04-2018 16:22:09

340 Tweets

1.1K Followers

639 Following

Centre for the Governance of AI (GovAI) (@govai_) 's Twitter Profile Photo

Safety cases are gaining traction as a tool for AI governance. But what does a frontier AI safety case look like in practice? Together with the @AISafetyInst, GovAI researchers @Arthur_Goemans_, Marie Davidsen Buhl, & Jonas Schuett developed a template: arxiv.org/abs/2411.08088

Simon Institute for Longterm Governance (@longtermgov) 's Twitter Profile Photo

Excited to share our new interim report, outlining potential design options for the #UN's Independent International Scientific Panel on AI and Global Dialogue on AI Governance, both mandated by the #GlobalDigitalCompact in Sept 2024 🧵 simoninstitute.ch/blog/post/blue…

CIGI (@cigionline) 's Twitter Profile Photo

While advanced AI technology is being built primarily in Global North countries, its impacts are likely to be felt worldwide, and disproportionately so in those Global South countries with long-standing vulnerabilities. Cecil Yongo, Marie Iradukunda, Duncan Cass-Beggs, Aquila Hassan

Oxford Martin AI Governance Initiative (@aigioxford) 's Twitter Profile Photo

New publication! Model evaluations are critical for ensuring AI safety. But who should be responsible for developing these evaluations? Our latest research explores the key challenges and proposes four development approaches. Lara Thurnherr Robert Trager oxfordmartin.ox.ac.uk/publications/w…

sam manning (@sj_manning) 's Twitter Profile Photo

Really excited to release this new paper on AI benefit sharing! I think this topic -- ensuring that the economic and societal benefits of advanced AI are widely accessible internationally -- is going to be an increasingly important challenge as AI advancements continue.

Technical AI Governance @ ICML 2025 (@taig_icml) 's Twitter Profile Photo

📣 We’re thrilled to announce the first workshop on Technical AI Governance (TAIG) at #ICML2025 this July in Vancouver! Join us (& this stellar list of speakers) in bringing together technical & policy experts to shape the future of AI governance!

Lisa Soder (@lisa_soder_) 's Twitter Profile Photo

Super excited to announce the first workshop on technical AI governance, taking place at ICML in Vancouver this July! Save the date(s)! ✨ #ICML2025 ICML Conference

Sumaya Nur (@sumayanur_) 's Twitter Profile Photo

Applications are now open for the IAPS AI Policy Fellowship! We’re also hosting a Q&A session on April 22 from 1–2 PM ET, where I’ll be joining to share my experience as a past fellow and answer your questions alongside the IAPS team. Don’t miss it if you’re considering applying!

Seán Ó hÉigeartaigh (@s_oheigeartaigh) 's Twitter Profile Photo

New working paper (pre-review), maybe my most important in recent years. I examine the evidence for the US-China race to AGI and decisive strategic advantage, & analyse the impact this narrative is having on our prospects for cooperation on safety. 1/5 papers.ssrn.com/abstract=52786…

Oxford Martin AI Governance Initiative (@aigioxford) 's Twitter Profile Photo

New Event! On the 5th anniversary of Meta's Oversight Board, we’re thrilled to welcome Julie Owono, an inaugural member of the Board, for a timely and critical conversation on AI governance and platform accountability in Oxford. Oxford Martin School Register eventbrite.co.uk/e/ai-accountab…

Ben Harack (@benharack) 's Twitter Profile Photo

Governing AI requires international agreements, but cooperation can be risky if there’s no basis for trust. Our new report looks at how to verify compliance with AI agreements without sacrificing national security. This is neither impossible nor trivial.🧵 1/

Oxford Martin AI Governance Initiative (@aigioxford) 's Twitter Profile Photo

New report: @BenHarack, Robert Trager et al. explore a core challenge in international AI governance: how to *verify* compliance. They identify what’s technically feasible today, as well as the work that still needs to be done. Read more in Ben’s thread below ⬇️.

Kevin Wei (he/they) (@kevinlwei) 's Twitter Profile Photo

🚨 New paper alert! 🚨 Are human baselines rigorous enough to support claims about "superhuman" performance? Spoiler alert: often not! Patricia Paskov and I will be presenting our spotlight paper at ICML next week on the state of human baselines + how to improve them!

Iason Gabriel (@iasongabriel) 's Twitter Profile Photo

Pleased to share our new piece in Nature, titled "We Need a New Ethics for a World of AI Agents". AI systems are undergoing an ‘agentic turn’, shifting from passive tools to active participants in our world. This moment demands a new ethical framework.

AI Security Institute (@aisecurityinst) 's Twitter Profile Photo

🚨Open-weight AI models are becoming more powerful, now knocking on the door of today’s closed-weight frontier. This poses critical safety challenges – how can we prevent the misuse of models whose parameters are free to download online? 🧵