Madhulika
@madhusrikumar
Head of Safety @partnershipAI | AI governance | prev @Harvard_Law, @NewAmerica @orfcyber
ID: 17675388
http://madhulikasrikumar.com 27-11-2008 11:06:16
8.8K Tweets
1.1K Followers
757 Following
TODAY! Join Ada Lovelace Institute to explore ethical review processes for AI and data science research with PAI's Madhulika, Andrew Strait, Wendy Hall, Dawn B, Dr Mylene Petermann, Niccolò Tempini🐜, and Ismael Kherroubi-Garcia. Register below👇 adalovelaceinstitute.org/event/looking-…
The Partnership on AI brings together industry, civil society & experts as companies like ours look for the most responsible ways to develop & release AI models. Its draft guidance establishes much-needed best practices for open & restricted releases - an important step when the
A tremendous new multi-stakeholder initiative from Partnership on AI to create customisable safety guidelines for different kinds of AI model 👏 Delighted to contribute to the project – click the link and explore the different guidelines on offer: partnershiponai.org/modeldeploymen…
Have your say! Partnership on AI has just released their Guidance for Safe Foundation Model Deployment and they're seeking public comment until Jan. 15, 2024. Schwartz Reisman Institute Director Gillian Hadfield is on the steering committee of PAI's Safety Critical AI Program. createsend.com/t/t-A43222DF64…
Super excited to share that our Brussels team is growing! 🚀 Join my team to work on EU & international governance and regulation, AI accountability and risk management in practice, rebalancing power and democratic oversight for AI… and much more! adalovelaceinstitute.org/job/researcher…
How can we actively pursue harm reduction strategies for open foundation models without hindering their accessibility? We co-hosted an expert workshop 👇 on this and related questions with Partnership on AI following up on our NTIA response github.blog/2024-04-10-hel…
New blog post from Matt Davies @wonderlikeours and me on the future of the UK AI Safety Institute and AI safety after Seoul. Tl;dr: We need a shift in the 'what and how' that AISI works on, backed up with new statutory powers and a joined-up AI regulation strategy.
If you're curious about: – AI agents 🤖 – The values they embed ⚖️ – Human relationships with AI 👫 – The choices in front of us now ✊ Then check out our new podcast with Justin Hendrix, @ShannonVallor & Tech Policy Press: techpolicy.press/considering-th…
Policymakers are pushing for AI labels. In my latest for Tech Policy Press, I explain why that is not enough to support trust in media. techpolicy.press/lawmakers-push…
Are you interested in exploring questions at the ethical frontier of AI research? If so, then take a look at this new opening in the humanity, ethics and alignment research team: boards.greenhouse.io/deepmind/jobs/… HEART conducts interdisciplinary research to advance safe & beneficial AI.