Dr. Marigo Raftopoulos (@marigo) 's Twitter Profile
Dr. Marigo Raftopoulos

@marigo

Human-Centered Artificial Intelligence | Augmented-Humans | Post-Doctoral Researcher | Marie Curie fellow 🇪🇺 | Tampere University

ID: 15267017

Link: https://webpages.tuni.fi/gamification/members/marigo-raftopoulos/
Joined: 29-06-2008 00:15:12

24.2K Tweets

4.4K Followers

2.2K Following

Anthropic (@anthropicai) 's Twitter Profile Photo

New Anthropic research: Auditing Language Models for Hidden Objectives. We deliberately trained a model with a hidden misaligned objective and put researchers to the test: Could they figure out the objective without being told?

Ruben Hassid (@rubenhssd) 's Twitter Profile Photo

BREAKING: Stanford just surveyed 1,500 workers and AI experts about which jobs AI will actually replace and automate. Turns out, we've been building AI for all the WRONG jobs. Here's what they discovered: (hint: the "AI takeover" is happening backwards)

@timnitGebru (@dair-community.social/bsky.social) (@timnitgebru) 's Twitter Profile Photo

I know I'm late to the party in finishing Karen Hao's Empire of AI. But now that I've finished it, I can tell you that it is nothing short of a masterpiece: required reading for people in tech, or anyone who wants to be in the tech industry. penguinrandomhouse.com/books/743569/e…

Dr. Émile P. Torres (@xriskology) 's Twitter Profile Photo

God damnit, The New York Times. I have been screaming about this for *years*. The TESCREAL ideologies are *pro-extinctionist*--they do not want our species to exist in the future. Here are just a few recent articles I wrote about this (below). Please take this seriously!

Michael Levin (@drmichaellevin) 's Twitter Profile Photo

I'm constantly irritated that I don't have time to read the torrent of cool papers coming faster and faster from amazing people in relevant fields. Other scientists have the same issue and have no time to read most of my lengthy conceptual papers either. So whom are we writing

Rosanne Liu (@savvyrl) 's Twitter Profile Photo

The opportunity gap in AI is more striking than ever. We talk way too much about those receiving $100M or whatever for their jobs, but not enough about those asking for <$1k to present their work. For the 3rd year in a row, ML Collective is raising funds to support Deep Learning Indaba attendees.

Brendan McCord 🏛️ x 🤖 (@mbrendan1) 's Twitter Profile Photo

I’m struck by how profoundly non-humanistic many AI leaders sound.
- Sutton sees us as transitional artifacts
- x-risk/EA types reduce the human good to bare survival or aggregates of pleasure and pain
- e/accs reduce us to variables in a thermodynamic equation
- Alex Wang calls

Samuel Marks (@saprmarks) 's Twitter Profile Photo

xAI launched Grok 4 without any documentation of their safety testing. This is reckless and breaks with industry best practices followed by other major AI labs. If xAI is going to be a frontier AI developer, they should act like one. 🧵

Grady Booch (@grady_booch) 's Twitter Profile Photo

My p(doom) remains asymptotically approaching zero. That notwithstanding, there exist clear and present dangers associated with contemporary large language models, dangers that are irredeemable consequences of their architecture. As professionals in the field and a society is a

OriginTrail (@origin_trail) 's Twitter Profile Photo

"World models or cognitive models is absolutely fundamental. Having the ability to generalize abstract knowledge is fundamental." - Gary Marcus, neuro-symbolic AI expert & cognitive scientist, at DKGcon 2024