Krn (@kornucopian)'s Twitter Profile
Krn

@kornucopian

AI alignment skeptic | Demanding transparency & audits in safety claims | Structural risks over hype

ID: 237296068

Link: https://substack.com/@kornucopian · Joined: 12-01-2011 14:36:47

35 Tweets

49 Followers

144 Following

Krn (@kornucopian)'s Twitter Profile Photo

A system doesn’t need to be wrong to be dangerous. If a decision is unappealable, unauditable, and irreversible, alignment stops being meaningful - regardless of accuracy or intent. Which decisions are we currently automating that must retain a human veto? #AI #AIEthics


The real risk in advanced automation isn’t that machines become smarter than humans. It’s that decisions get made faster than humans can question, appeal, or reverse them - until responsibility still exists in name, but no longer in practice. I tried to write this concern down.


Which types of real-world decisions should NEVER become effectively unappealable due to AI automation, no matter the efficiency gains? Feel free to reply with others - or argue none should be off-limits. #AIethics #AIGovernance


Editing real photos of real people without consent is a different harm category than generating synthetic images. Safety is about preventing that outcome, not arguing who else might theoretically do something similar.


The BBC reporting makes something clear. If reporting the harm spreads the harm, this is a design failure, not a moderation problem. My note on why certain AI capabilities should not exist at all. substack.com/@kornucopian/n…