Emma Bluemke (@emmabluemke)'s Twitter Profile
Emma Bluemke

@emmabluemke

Canadian 🍁 Research Manager at @GovAI_, PhD Biomedical Engineering @UniofOxford, prev privacy course instructor @openminedorg

ID: 750680749502849024

Website: https://emmabluemke.com · Joined: 06-07-2016 13:20:12

364 Tweets

1.9K Followers

1.6K Following

Jonas Schuett (@jonasschuett)

📢 We're hiring!

The Centre for the Governance of AI (GovAI) is looking for Research Management Associates, Operations Associates, and Research Scholars.

More below 👇

Miles Brundage (@Miles_Brundage)

Excited to finally get to share a paper that several folks on the Policy Research team here have been working on for a while, in collaboration with a bunch of folks across various orgs: “Computing Power and the Governance of AI”: x.com/LeverhulmeCFI/…

Haydn Belfield (@HaydnBelfield)

Our major new report 'Computing Power and the Governance of Artificial Intelligence' has been released today.

We explain why AI hardware - chips & data centres - may be the most effective targets for risk-reducing AI policies

Markus Anderljung (@Manderljung)

AI researchers and practitioners should apply to work with the UK's AI Safety Institute.

I expect this organization will do world-class evaluation work on frontier AI systems, sometimes even before they're deployed, while also informing government policy.

Ben Clifford (@imbenclifford)

In my first blog post with the Centre for the Governance of AI (GovAI), I give an overview of the methods AI companies use to prevent the misuse of general-purpose models - I also point out that, unfortunately, none are wholly reliable.

Markus Anderljung (@Manderljung)

As the impacts of frontier AI models increase, decisions about their development and deployment can't all be left in the hands of AI companies.

In a new paper, we describe how such decisions could be more publicly accountable via external scrutiny.

Jai Vipra (@JaaiVipra)

It's out!!! This is what I spent Feb-Apr doing at the Centre for the Governance of AI (GovAI), and I'm so glad it's out as a Brookings Institution working paper!!! Read for what we think competition policy priorities should be in the age of large AI systems!! brookings.edu/articles/marke…

Sumaya Nur (@SumayaNur_)

Looking forward to this discussion tomorrow.

I will be speaking on AI democratization and equitable benefit sharing as critical aspects to consider when determining which international institutions should govern frontier AI models.

Dr Noemi Dreksler (@NoemiDreksler)

Can really recommend reading the response to the White House Office of Science & Technology Policy's request for information on national priorities, written by four of my colleagues at the Centre for the Governance of AI (GovAI).

A great, concise, and easy-to-read overview of some key risks from, and regulatory approaches to, highly capable and frontier AI.

Brief summary🧵

Markus Anderljung (@Manderljung)

This week, the UK government secured pledges from three leading AI labs to provide “early access” to new models. But what kind of access? What should gov't do with it?

Nikhil Mulani & Jess Whittlestone offer some concrete suggestions.

governance.ai/post/proposing…

Centre for the Governance of AI (@GovAI_)

We have a new blog post up from Nikhil Mulani & Jess Whittlestone:

'Proposing a Foundation Model Information-Sharing Regime for the UK'

Link below:

governance.ai/post/proposing…

Jonas Schuett (@jonasschuett)

We’re excited to share the results of our recent expert survey on best practices in AGI safety and governance!

Paper: arxiv.org/abs/2305.07153

Co-authors: Dr Noemi Dreksler, Markus Anderljung, David McCaffary, Lennart Heim, Emma Bluemke, Ben Garfinkel

Summary in the thread 🧵

Séb Krier (@sebkrier)

Great stuff by the Centre for the Governance of AI (GovAI): 98% of respondents somewhat or strongly agreed that AGI labs should conduct pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming. arxiv.org/abs/2305.07153

Elizabeth A. Seger (@ea_seger)

Democratizing AI is about much more than model sharing.
Democratic involvement in decision-making about AI - about how it is used, shared, and regulated - is key.

See the Montreal AI Ethics Institute research summary:
Aviv Ovadya, Ben Garfinkel, Allan Dafoe, Divya Siddarth

montrealethics.ai/democratising-…

Dr Noemi Dreksler (@NoemiDreksler)

Preliminary findings from our new cross-cultural survey of over 13,000 people, run by the Centre for the Governance of AI (GovAI) and collaborators at Cornell University, the Council on Foreign Relations, Penn, and Syracuse University, found an overwhelming consensus for careful management of AI in Europe and the United States. 🧵(1/8)

Jonas Schuett (@jonasschuett)

How do you design an AI ethics board (that actually works)?

In our new paper, we list key design choices and discuss how they would affect the board’s ability to reduce risks from AI.

Paper: arxiv.org/abs/2304.07249 (with Anka Reuel and Alexis Carlier)

Emma Bluemke (@emmabluemke)

It's been really encouraging to see so much consensus and good-faith engagement happening in this space - see more from the AI Objectives Institute here:

twitter.com/degerturann/st…

Noah Giansiracusa (@ProfNoahGian)

Just as politics is not one-dimensional (liberal-conservative), so too with AI. Here's a wonderful breakdown of the many axes of AI to help us better articulate our views and hopefully see more nuance in the current AI debates. Thank you, Emma Bluemke!
