Tommy Shaffer Shane (@tommyshane) 's Twitter Profile
Tommy Shaffer Shane

@tommyshane

AI Policy Manager @longresilience, doing a PhD on AI going wrong @kingsdh

previously @scitechgovuk, @firstdraftnews, @houseofcommons

ID: 41828876

Joined: 22-05-2009 15:06:23

557 Tweets

741 Followers

1.1K Following

Tommy Shaffer Shane (@tommyshane) 's Twitter Profile Photo

"Google Autocompletion appears to be a patchwork of blocks and related suppressions" Great paper by Richard Rogers on AI and the reliance on 'patchwork', where problematic outputs are quietly blocked with 'patches' - covering problems with the system journals.sagepub.com/doi/10.1177/20…

Anders Kristian Munk (@anderskmunk) 's Twitter Profile Photo

In the 2nd commentary for our SI on AI, Tommy Shaffer Shane shifts attention from "algorithmic ‘bugs’ as technical accidents that are simply found, to how we come to be troubled by AI going wrong, and therefore how publics form and acquire the capacity to alter algorithms’ design".

Markus Anderljung (@manderljung) 's Twitter Profile Photo

As the impacts of frontier AI models increase, decisions about their development and deployment can't all be left in the hands of AI companies. In a new paper, we describe how such decisions could be more publicly accountable via external scrutiny.

Tommy Shaffer Shane (@tommyshane) 's Twitter Profile Photo

Interesting example of how AI works as part of a human system. Tempting to see AI as a lone actor, but governance / policy needs to look at human-AI systems

Big Data & Society (@bigdatasoc) 's Twitter Profile Photo

In this commentary, Tommy Shaffer Shane takes up the example of an #AI incident in September 2020, when a Twitter user created a ‘horrible experiment’ to demonstrate the racist bias of Twitter's algorithm for cropping images. Check it out! ➡️ buff.ly/46WaOET

Tommy Shaffer Shane (@tommyshane) 's Twitter Profile Photo

🚨 My new paper, where I look at how Twitter users discovered that an AI model was biased, forcing it to be pulled - What can we learn from AI incidents like these, and their role in governing AI? 1/ 🧵⬇️

Big Data & Society (@bigdatasoc) 's Twitter Profile Photo

🧐 Ever wondered about the fallout of #AI gone wrong? In this commentary, Tommy Shaffer Shane homes in on AI incidents, using a 2020 Twitter algorithm bias fiasco as a case study. Read it here! ➡️ buff.ly/46WaOET

Tommy Shaffer Shane (@tommyshane) 's Twitter Profile Photo

As we head into the 2024 deepfake election year, the disillusionment of 'don't trust your own eyes' could be more harmful than the deepfakes themselves.

Tommy Shaffer Shane (@tommyshane) 's Twitter Profile Photo

This is my key concern for disinformation - we convince ourselves it's a bigger problem than it is, and reduce trust; i.e., we create the problem ourselves

Big Data & Society (@bigdatasoc) 's Twitter Profile Photo

In this article, Tommy Shaffer Shane argues for a research agenda focused on AI incidents – examples of AI going wrong and sparking #controversy – and how they are constructed in online environments. Read it here! buff.ly/46WaOET

Tommy Shaffer Shane (@tommyshane) 's Twitter Profile Photo

I've synthesised 75 reports to build a picture of how AI will enable the disinformation threat. I find that while the threat is often overstated, there are important changes happening - particularly for low-resourced actors - that need attention.

Tommy Shaffer Shane (@tommyshane) 's Twitter Profile Photo

🚨 New report: We explain why incident reporting is a gap in the UK’s regulatory plans that urgently needs addressing – and we provide concrete actions the UK Government can take to address it. theguardian.com/technology/art…