Madhulika (@madhusrikumar)'s Twitter Profile
Madhulika

@madhusrikumar

Head of Safety @partnershipAI | AI governance | prev @Harvard_Law, @NewAmerica, @orfcyber

ID: 17675388

madhulikasrikumar.com · Joined 27-11-2008 11:06:16

8.8K Tweets

1.1K Followers

757 Following

Madhulika (@madhusrikumar)'s Twitter Profile Photo

The new Twitter views metric will show whether the platform rewards only users who tweet frequently with more views - sussing it out …

Partnership on AI (@partnershipai)'s Twitter Profile Photo

TODAY! Join Ada Lovelace Institute to explore ethical review processes for AI and data science research with PAI's Madhulika, Andrew Strait, Wendy Hall, Dawn B, Dr Mylene Petermann, Niccolò Tempini🐜, and Ismael Kherroubi-Garcia. Register below👇 adalovelaceinstitute.org/event/looking-…

Markus Anderljung (@manderljung)'s Twitter Profile Photo

As increasingly capable AI models are trained, model evaluations for dangerous capabilities and alignment will become crucial to inform decisions about whether and how models are deployed. More in new paper: "Model evaluation for extreme risks." arxiv.org/abs/2305.15324

Nick Clegg (@nickclegg)'s Twitter Profile Photo

The Partnership on AI brings together industry, civil society & experts as companies like ours look for the most responsible ways to develop & release AI models. Its draft guidance establishes much-needed best practices for open & restricted releases - an important step when the …

Iason Gabriel (@iasongabriel)'s Twitter Profile Photo

A tremendous new multi-stakeholder initiative from Partnership on AI to create customisable safety guidelines for different kinds of AI model 👏 Delighted to contribute to the project – click the link and explore the different guidelines on offer: partnershiponai.org/modeldeploymen…

Joelle Pineau (@jpineau1)'s Twitter Profile Photo

This is one of the most comprehensive, nuanced and inclusive frameworks for responsibly building and deploying AI models through an open approach. PAI's leadership has been invaluable in bringing together many different opinions and offering clear guidance for AI model builders.

Schwartz Reisman Institute (@torontosri)'s Twitter Profile Photo

Have your say! Partnership on AI has just released their Guidance for Safe Foundation Model Deployment and they're seeking public comment until Jan. 15, 2024. Schwartz Reisman Institute Director Gillian Hadfield is on the steering committee of PAI's Safety Critical AI Program. createsend.com/t/t-A43222DF64…

Connor Dunlop (@cp_dunlop)'s Twitter Profile Photo

Super excited to share that our Brussels team is growing! 🚀 Join my team to work on EU & international governance and regulation, AI accountability and risk management in practice, rebalancing power and democratic oversight for AI… and much more! adalovelaceinstitute.org/job/researcher…

GitHub Policy (@githubpolicy)'s Twitter Profile Photo

How can we actively pursue harm reduction strategies for open foundation models without hindering their accessibility? We co-hosted an expert workshop 👇 on this and related questions with Partnership on AI following up on our NTIA response github.blog/2024-04-10-hel…

Andrew Strait (@agstrait)'s Twitter Profile Photo

New blog post from Matt Davies @wonderlikeours and me on the future of the UK AI Safety Institute and AI safety after Seoul. TL;DR: We need a shift in the 'what and how' that AISI works on, backed up with new statutory powers and a joined-up AI regulation strategy.

Claire Leibowicz (@cleibowicz)'s Twitter Profile Photo

Check out this *PUBLIC* webinar I'll be moderating with this stellar, wise crew representing @adobe, WITNESS, OpenAI, and BBC, on how they applied PAI's Synthetic Media Framework to real-world scenarios. 🗓️ June 18, 9am PST / 12pm EST. Register here! 👉 buff.ly/3x4k6me

Iason Gabriel (@iasongabriel)'s Twitter Profile Photo

If you're curious about: – AI agents 🤖 – The values they embed ⚖️ – Human relationships with AI 👫 – The choices in front of us now ✊ Then check out our new podcast with Justin Hendrix, @ShannonVallor & Tech Policy Press: techpolicy.press/considering-th…

Andrew Strait (@agstrait)'s Twitter Profile Photo

Our team spent several months speaking with firms working on foundation model evals. While they can be useful, they are not sufficient for ensuring the safety of a model, and suffer from a range of theoretical, practical, and gaming issues. A critical read for AI safety policy.

Claire Leibowicz (@cleibowicz)'s Twitter Profile Photo

Policymakers are pushing for AI labels. In my latest for Tech Policy Press, I explain why that is not enough to support trust in media. techpolicy.press/lawmakers-push…

Centre for the Study of Existential Risk (@csercambridge)'s Twitter Profile Photo

🌍 Applications are now open for CSER's MPhil in Global Risk and Resilience! If you're interested in learning more, be sure to register for our virtual open day on 4th November 2024. #cambridge #mphil

Iason Gabriel (@iasongabriel)'s Twitter Profile Photo

Are you interested in exploring questions at the ethical frontier of AI research? If so, then take a look at this new opening in the humanity, ethics and alignment research team: boards.greenhouse.io/deepmind/jobs/… HEART conducts interdisciplinary research to advance safe & beneficial AI.

Haydn Belfield (@haydnbelfield)'s Twitter Profile Photo

Big big job ad: 🔥Director of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge🔥🏛️ You get to lead this lovely team of researchers and shape the world's first masters in global catastrophic risk - link in next tweet

Sayash Kapoor (@sayashk)'s Twitter Profile Photo

Folks in San Francisco: I'm doing a book talk on AI Snake Oil with @msurman. Come hear us discuss the future of AI, why AI isn't an existential risk, building AI in the public, and what goes into writing a book. November 18, 5:30pm. RSVP: forms.gle/m9BGAY6ALCXSjv…
