Keith Duggar (@doctorduggar) 's Twitter Profile
Keith Duggar

@doctorduggar

MIT Doctor of Philosophy, strategist, polymath, engineer, lifelong learner, problem solver, and communicator. Ally to All humanity. @MLStreetTalk pod.

ID: 385167355

Link: https://www.youtube.com/c/MachineLearningStreetTalk
Joined: 05-10-2011 00:34:58

66 Tweets

1.1K Followers

19 Following

Keith Duggar (@doctorduggar) 's Twitter Profile Photo

Six years ago, in my wanderings along the dusty early paths of the Philosophy of Science, I came upon "Chance, Love, and Logic", a collection of philosophical essays by Charles Sanders Peirce (b. 1839 - d. 1914) published posthumously in 1923. The following beautiful, thoughtful

Wendy (@wendyweeww) 's Twitter Profile Photo

Hallucination is baked into LLMs. Can't be eliminated, it's how they work. Dario Amodei says LLMs hallucinate less than humans. But it's not about less or more. It's the differing & dangerous nature of the hallucination, making it unlikely LLMs will cause mass unemployment (1/n)

Alex Imas (@alexolegimas) 's Twitter Profile Photo

I sent this to a friend, who is a partner at a prominent law firm. Their response, verbatim: “lol no. We’ve tried all the frontier models. It’s useful for doing a first pass on low level stuff, but makes tons of mistakes and associate has to check everything.”

Machine Learning Street Talk (@mlstreettalk) 's Twitter Profile Photo

Had a fascinating chat with New York University Professor Andrew Gordon Wilson about his paper "Deep Learning is Not So Mysterious or Different." We dug into some of the biggest paradoxes in modern AI. If you've ever wondered how these giant models actually work, this is

sdmat (@sdmat123) 's Twitter Profile Photo

Rob Wiblin There is no evidence your toilet bowl has subjective experience, and I have no idea what empirical evidence you would expect to observe if it did. Should we err on the side of caution? Imagine the suffering.

Wendy (@wendyweeww) 's Twitter Profile Photo

An LLM doesn't have a self that cares about your struggles or has personal opinions. Inside an LLM, there's nobody home. Its "identity" shifts instantly when reprogrammed, asked to roleplay, or if it's jailbroken. (substack post link below)

Keith Duggar (@doctorduggar) 's Twitter Profile Photo

A picture is worth a thousand words. A movie is worth a thousand pictures. A story is worth a thousand movies. A game is worth a thousand stories. Life is a thousand games.

Will Oremus (@willoremus) 's Twitter Profile Photo

A German teen told Meta's child safety researchers his under-10 little brother had been sexually propositioned multiple times on its VR platform. Their bosses at Meta deleted the evidence. New internal whistleblower docs suggest that was part of a broader cover-up.

Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) (@rao2z) 's Twitter Profile Photo

Yes, I peer reviewed DeepSeek-R1 paper for @nature and hope to see more frontier model developers follow their lead of sharing peer-reviewed technical details of their work, and go beyond buzzy blogs and patronizing "yeah, we have done it this way too" post facto claims..🤞

Andrej Karpathy (@karpathy) 's Twitter Profile Photo

zenitsu_apprentice Good question, it's basically entirely hand-written (with tab autocomplete). I tried to use claude/codex agents a few times but they just didn't work well enough at all and net unhelpful, possibly the repo is too far off the data distribution.

Keith Duggar (@doctorduggar) 's Twitter Profile Photo

Imagine an LLM as a mountainscape—vast, deep, and jagged beyond imagination. Your prompt drops a marble atop one of a billion peaks. It careens down the rock faces, marking a trail of tokens until it comes to rest in a crevice or on a patch of dirt. There is no intent, no

Keith Duggar (@doctorduggar) 's Twitter Profile Photo

"Those who begin coercive elimination of dissent soon find themselves exterminating dissenters. Compulsory unification of opinion achieves only the unanimity of the graveyard." ― Justice Robert H. Jackson, majority opinion, West Virginia State Board of Education v. Barnette, 1943

Wendy (@wendyweeww) 's Twitter Profile Photo

Are AI chatbots plotting against humans? At least that's what OpenAI and Anthropic are claiming, where their LLMs *deliberately* blackmailed & deceived humans. But based on how LLMs fundamentally work, they can't. There's just no way and nowhere to keep their scheme. 🧵(1/n)

ueaj (@_ueaj) 's Twitter Profile Photo

Judd Rosenblatt The roleplay hypothesis should be the other way around? The models are SFT/RLed to say they're not human, and being trained to mimic human data would mean they would by default claim to be conscious. I would imagine the first "fit" learned in SFT/RLHF to respond to questions