Aisha Kehoe Down (@aishakdown)'s Twitter Profile
Aisha Kehoe Down

@aishakdown

SABEW-award-winning investigative journalist exposing the systems that make the world work. Bylines @GuardianUS, @OCCRP, @cambodiadaily, @ColoradoSun and more.

ID: 707820416161718272

Joined: 10-03-2016 06:48:32

598 Tweets

496 Followers

828 Following

Garrison Lovely (@garrisonlovely)

Artificial general intelligence is not inevitable.

My latest for The Guardian challenges one of the most popular claims made about AGI.

Among those who believe AGI is possible, it's common to think it's unstoppable, whether you're excited or terrified of the prospect 🧵

Shakeel (@shakeelhashim)

NEW on Transformer: The UK government today announced a ÂŁ15 million "Alignment Project", which will fund research into AI alignment, control, and interpretability.

The government called AI alignment "one of the most urgent technical challenges of our time".

Aisha Kehoe Down (@aishakdown)

This is, sorry, a manipulative spin: if tech companies like OpenAI cared about privacy, they wouldn't push to pre-empt state consumer-protection legislation. What this appears to be about is getting out of a well-founded copyright lawsuit.

Steven Adler (@sjgadler)

OpenAI claims to have introduced a new safety feature for GPT-5, but OpenAI has had this feature since 2021

I'm sad OpenAI is implying they created this for GPT-5, which they didn't. It's hard to rely on AI companies' System Cards when there's incentive to overclaim. (thread)

David de Bruijn (@dmdebruijn)

An important point Gary Marcus (a leading AI thinker) makes here. LLMs are *designed* to be human-language pattern machines. They're designed to *mimic* us. For LLMs to "think", an argument is needed to move from mimicking to actuality. No such argument has been given.

Aisha Kehoe Down (@aishakdown)

Sharp review here of If Anyone Builds It, Everyone Dies: "In reality, Yudkowsky and Soares (unwittingly) serve the same interests as the accelerationists: those who profit from the unfounded certainty that AI will transform the world." theatlantic.com/books/archive/…

Steven Adler (@sjgadler)

AI companies are struggling to support users in crisis. What can they do better?

I pored over a million-word-long ChatGPT psychosis episode to figure out where things went wrong, and what can be done to help. (🧵)

Aisha Kehoe Down (@aishakdown)

My latest for The Guardian on the challenges middle-income countries face in a multibillion-dollar AI arms race. theguardian.com/technology/202…

Gary Marcus (@garymarcus)

Dear OpenAI, not everybody who criticizes your decade-long history of shady practices has anything to do with Elon Musk. A lot of us just don’t like how you roll. Serving subpoenas on your critics is not cool.

Jeffrey Ladish (@jeffladish)

OpenAI tried to weaponize the courts to force SB 53 advocates to share all of their private communications about the bill while it was being debated

Garrison Lovely (@garrisonlovely)

Wow, OpenAI's head of mission alignment just spoke out against the way the company has been using subpoenas to intimidate and disrupt political opponents.

A surprising number of OAI rank & file have no idea what their leadership is doing to kill regulation.

Ethan Mollick (@emollick)

I don’t have much to add to the bubble discussion, but the "this time is different" argument is, in part, based on the sincere belief of many at the AI labs that there is a race to superintelligence & the winner gets... everything. It is a key dynamic that is not discussed much.

Nate Silver (@natesilver538)

Should save this for a newsletter, but OpenAI's recent actions don't seem to be consistent with a company that believes AGI is right around the corner.

@timnitGebru (@dair-community.social/bsky.social) (@timnitgebru)

So after we built an addictive tool based on stolen data, exploited labor, and plundering the environment, which pushes kids to commit suicide, we were like, why not cash in on "erotica"? After all, this is what building so-called AGI that "benefits humanity" looks like.

Shakeel (@shakeelhashim)

I have been doing a lot of interviews recently, and have asked LLMs to try to clean up the transcripts: no substantive changes, just removing filler words and adding paragraph breaks to make them more readable. But every time I've asked, the models have hallucinated HUGE chunks

Gary Marcus (@garymarcus)

AI chatbots can be dangerous, but we also need to be careful not to anthropomorphize those dangers. Excellent, nuanced thread: