Colin Hales (@dr_cuspy)'s Twitter Profile
Colin Hales

@dr_cuspy

Neuroscientist/Engineer. Artificial General Intelligence builder. Expert in brain electromagnetism.

ID: 566040125

Joined: 29-04-2012 06:30:38

4.4K Tweets

1.1K Followers

1.1K Following

Colin Hales (@dr_cuspy):

Add me to that short list. The disease of mistaking automation for intelligence is now a pandemic with predatory self-interested pseudo-lockdowns. I know these eras are not new in science, but living through such a thing in a compute-riven world... is a whole other thing.

Colin Hales (@dr_cuspy):

The failure to handle novelty literally defines the kryptonite of "AI" and can be traced through all its failures for 70 years. François Chollet gets it. I've been trying to get the same message across for >> a decade. Real intelligence autonomously handles novelty. Like us.

Colin Hales (@dr_cuspy):

Computing abstract models of properties of natural intelligence will fail to create an artificial equivalent to natural intelligence for exactly the same reason computing abstract models of properties of natural flight fails to create artificial flight. "AI" is so lost.

Colin Hales (@dr_cuspy):

Trillion-string venal "AI" puppets. Neither A nor I. A plague from a cargo cult science. Credulity sapped, cashed out. Until all that is left is to die.

Colin Hales (@dr_cuspy):

Gary Marcus I proved why computers will never do AGI in 2011. Paywalled in an obscure journal... Out of sight, I just rediscovered it. Barely cited. I wish I had gone to a better journal. 😐. Read it again yesterday. It stacks up. The new approach is right there. worldscientific.com/doi/abs/10.114…

Colin Hales (@dr_cuspy):

It would be so cool to show Alan Turing the debacle of the imitation game being passed so comprehensively and at the same time AI being dumb as a rock and a completely impotent source of any scientific insight on consciousness. What a mess!

Tony Zador (@tonyzador):

Trump administration’s budget plan “would essentially end America’s longstanding role as the world leader in science and innovation” and ... “is threatening not only science but the American public. If approved by Congress, it will make the public less safe, poorer and sicker.”

Colin Hales (@dr_cuspy):

Now that Gary Marcus' long push for the "neurosymbolic" approach is gaining traction, we can finally start discussing the real kryptonite of AGI: the symbol grounding problem. If you don't sort it, the same failure result (novelty handling) will continue as usual.

Colin Hales (@dr_cuspy):

Gary Marcus The correct test for AGI at human level is "human-hands off" autonomous acquisition of a novel capability that the test subject must prove it doesn't know already. That is, it must fail to handle a situation that demands the new capability, and then autonomously acquire it.

Colin Hales (@dr_cuspy):

I automated factories for 2 decades. Bad automation looks exactly like this. And that's exactly what LLM "AI" currently is. Zero intelligence. Knowledge that cannot be trusted. Imagine an aircraft delivered in the same state to customers. So much worse than wrong.

Colin Hales (@dr_cuspy):

Gary Marcus Geoffrey Hinton An LLM is "on". From its perspective it is like a human in a dreamless sleep. Nothing. The LLM is switched "off". From its perspective it is like a human in a dreamless sleep. Nothing. Dead. Alive. Awake. Asleep. The LLM doesn't know/notice the difference.

Colin Hales (@dr_cuspy):

My continuing campaign to peek out from behind a decade or two of lab obscurity ... Another podcast interview of yours truly in relation to "the hole in science" called "consciousness", and how we're going with it. Enjoy! url.au.m.mimecastprotect.com/s/pM5DCE8knvsl…

Colin Hales (@dr_cuspy):

We have the beginnings of some progress in thinking about how thinking actually works. Bring in the role of EM fields, both in guiding dynamics and in grounding it all in the production and use of subjectivity (consciousness)... compelling.

Colin Hales (@dr_cuspy):

Having fun in the braininspired.co complexity email forum. They are going through the Santa Fe compendium of complexity science historic papers (Vol 1) while wearing a Neuro-AI hat. Journal club zoom videos for each paper. I did Turing 1950. braininspired.co/complexity-gro…