Emma Bluemke
@emmabluemke
Canadian 🍁 Research Manager at @GovAI_, PhD Biomedical Engineering @UniofOxford, prev privacy course instructor @openminedorg
ID:750680749502849024
https://emmabluemke.com 06-07-2016 13:20:12
364 Tweets
1.9K Followers
1.6K Following
📢 We're hiring!
The Centre for the Governance of AI (GovAI) is looking for Research Management Associates, Operations Associates, and Research Scholars.
More below 👇
Paper authors: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian, Cullen O’Keefe, Gillian Hadfield, Richard Ngo, Konstantin, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert Trager, Shahar Avin, Adrian Weller, Yoshua Bengio,…
In my first blog post with the Centre for the Governance of AI (GovAI), I give an overview of the methods AI companies use to prevent the misuse of general-purpose models - I also point out that, unfortunately, none are wholly reliable.
It's out!!! This is what I spent Feb-Apr doing at Centre for the Governance of AI (GovAI) and I'm so glad it's out as a The Brookings Institution working paper!!! Read for what we think competition policy priorities should be in the age of large AI systems!! brookings.edu/articles/marke…
Can really recommend reading the response to the White House Office of Science & Technology Policy request for info on national priorities from four of my colleagues at the Centre for the Governance of AI (GovAI).
A great concise and easy-to-read overview of some key risks from and regulatory approaches to highly-capable and frontier AI.
Brief summary🧵
This week, the UK government secured pledges from three leading AI labs to provide “early access” to new models. But what kind of access? What should gov't do with it?
Nikhil Mulani & Jess Whittlestone offer some concrete suggestions.
governance.ai/post/proposing…
We have a new blog post up from Nikhil Mulani & Jess Whittlestone:
'Proposing a Foundation Model Information-Sharing Regime for the UK'
Link below:
governance.ai/post/proposing…
We’re excited to share the results of our recent expert survey on best practices in AGI safety and governance!
Paper: arxiv.org/abs/2305.07153
Co-authors: Dr Noemi Dreksler, Markus Anderljung, David McCaffary, Lennart Heim, Emma Bluemke, Ben Garfinkel
Summary in the thread 🧵
Great stuff by the Centre for the Governance of AI (GovAI): 98% of respondents somewhat or strongly agreed that AGI labs should conduct pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming. arxiv.org/abs/2305.07153
Democratizing AI is about much more than model sharing.
Democratic involvement in decision-making about AI - about how it is used, shared, and regulated - is key.
See the Montreal AI Ethics Institute research summary
Aviv Ovadya 🥦 Ben Garfinkel Allan Dafoe Divya Siddarth
montrealethics.ai/democratising-…
Preliminary findings from our new cross-cultural survey of over 13,000 people by the Centre for the Governance of AI (GovAI) & collaborators at Cornell University, the Council on Foreign Relations, Penn, and Syracuse University found an overwhelming consensus for careful management of AI in Europe and the United States. 🧵(1/8)
How do you design an AI ethics board (that actually works)?
In our new paper, we list key design choices and discuss how they would affect the board’s ability to reduce risks from AI.
Paper: arxiv.org/abs/2304.07249 (with Anka Reuel and Alexis Carlier)
It's been really encouraging to see so much consensus and good-faith engagement happening in this space - see more from AI Objectives Institute here:
twitter.com/degerturann/st…
Just as politics is not one-dimensional (liberal-conservative), so too with AI. Here's a wonderful breakdown of many axes of AI to help us better articulate our views and hopefully see more nuance in the current AI debates. Thank you Emma Bluemke!