
Colin Hales
@dr_cuspy
Neuroscientist/Engineer. Artificial General Intelligence builder. Expert in brain electromagnetism.
ID: 566040125
29-04-2012 06:30:38
4.4K Tweets
1.1K Followers
1.1K Following


The failure to handle novelty is the kryptonite of "AI" and can be traced through all of its failures for 70 years. François Chollet gets it. I've been trying to get the same message across for well over a decade. Real intelligence autonomously handles novelty. Like us.



Gary Marcus In 2011 I proved why computers will never do AGI. Paywalled in an obscure journal, out of sight. I just rediscovered it. Barely cited. I wish I had gone to a better journal. 😐 Read it again yesterday. It stacks up. The new approach is right there. worldscientific.com/doi/abs/10.114…



Now that Gary Marcus' long push for the "neurosymbolic" approach is gaining traction, we can finally start discussing the real kryptonite of AGI: the symbol grounding problem. If you don't solve it, the same failure mode (the inability to handle novelty) will continue as usual.

Gary Marcus The correct test for human-level AGI is "human-hands-off" autonomous acquisition of a novel capability that the test subject provably does not already possess. That is, it must first fail in a situation that demands the new capability, and then autonomously acquire it.
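
Read as a test protocol, a minimal sketch of that loop might look like the Python below. Everything in it is a hypothetical illustration (the agent interface, attempt(), observe()); the tweet itself specifies no implementation.

    # Minimal sketch of the "human-hands-off" novelty test described above.
    # All names (agent, novel_task, attempt, observe) are hypothetical.
    def agi_novelty_test(agent, novel_task, max_attempts=100):
        # Step 1: prove the capability is genuinely novel to the agent.
        if agent.attempt(novel_task).success:
            raise ValueError("not novel: agent already has the capability")
        # Step 2: hands off. The agent interacts with the task on its own;
        # no human supplies labels, hints, or corrections.
        for _ in range(max_attempts):
            outcome = agent.attempt(novel_task)
            if outcome.success:
                return True          # capability acquired autonomously
            agent.observe(outcome)   # agent updates itself from its own failure
        return False                 # capability not acquired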



Gary Marcus Geoffrey Hinton An LLM is "on". From its perspective it is like a human in a dreamless sleep. Nothing. The LLM is switched "off". From its perspective it is like a human in a dreamless sleep. Nothing. Dead. Alive. Awake. Asleep. The LLM doesn't know/notice the difference.



