Scott Graham (@macgraeme42)'s Twitter Profile
Scott Graham

@macgraeme42

Artificial Intelligence. Game Design. All-in on Tesla. Joining the conversation.

ID: 1423081579790970880

Joined: 05-08-2021 00:41:35

28.28K Tweets

1.1K Followers

200 Following

Scott Graham (@macgraeme42):

That our mostly flat 3-brane (i.e. the normal 3D space of the universe) could be a literal membrane (3D manifold) embedded in higher dimensional space, twisted & crumpled into a tangled mess*** that brings any number of arbitrarily distant points into contact, while remaining

Scott Graham (@macgraeme42):

LLMs hallucinate because they are *trained* to hallucinate. Period. They are trained to approximately reproduce the entire corpus of human writing, which includes all the jokes, lies, fiction, propaganda, bias and just plain errors. The foundation of an LLM is built on

Elica Le Bon الیکا‌ ل بن (@elicalebon):

So let me get this straight. Iranians who lived in Iran have gotten it all wrong. Venezuelans who lived in Venezuela have gotten it all wrong. Cubans who lived in Cuba have gotten it all wrong. But you—who have never left the four corners of your own sublime comfort—are the bearers of

Scott Graham (@macgraeme42):

Of course. It's called freedom of speech and freedom of association, as the "encouragement" is neither fraudulent nor coercive (which includes forms of harassment).

Scott Graham (@macgraeme42):

Vibe code to your heart's content in a safe environment--on a computer (or virtual computer 'sandbox') where you do not risk deleting or corrupting data that is important to yourself or others, or bring down a system you or anyone else rely on. Vibe coding is great for learning
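One lightweight approximation of that advice is a throwaway scratch directory, so experiments never touch real files. This is only a sketch (it assumes a POSIX-style shell with the common `mktemp` utility and `python3` on the PATH; the filename `scratch.py` is purely illustrative) — a virtual machine or container gives much stronger isolation:

```shell
# Create a disposable sandbox directory, isolated from any real work
SANDBOX=$(mktemp -d)

# Drop experimental code inside it and run it there
echo 'print("hello from the sandbox")' > "$SANDBOX/scratch.py"
python3 "$SANDBOX/scratch.py"

# Wipe the whole thing when done; nothing important was ever at risk
rm -rf "$SANDBOX"
```

Because everything lives under the temporary directory, even a script that deletes its own working tree can only destroy disposable files.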

Scott Graham (@macgraeme42):

LLM "hallucinations" are not even errors. The LLM is doing, quite well, exactly what it was trained to do: reproduce human lies, jokes, fiction, propaganda, exaggeration and mistakes. The AI is not making a mistake, it is accurately reproducing mistakes it was trained to make.

Scott Graham (@macgraeme42):

No. First it was "maybe in a thousand years." Then it was "sometime next century." Then it was "maybe next decade." NOW it is "next year"... And dude, we have ****MANY**** clues about how to build AGI. It is surprisingly uncomplicated. We had the clues that matter over 50 years

Scott Graham (@macgraeme42):

It is expensive and clunky and you can get 80% of the experience and 200% of the fun from a screen, keyboard and mouse (or game-controller), with none of the inconvenience. People want to jump into a game and start playing immediately. And if they need to pee or want to grab a

Scott Graham (@macgraeme42):

Cannot take this seriously when it leaves out...
- the Dark Star
- the Orville
- Spaceball 1
- the Eagle 5
- Eagles 1,2,3,4,5,6....
- The Liberator
- Space Battleship Yamato
- Max
- Gunstar One
- the Nostromo
- the USS Sulaco
- the USS Cygnus
- The Lewis and Clark
- Moya
- the

Scott Graham (@macgraeme42):

Freedom of Speech and Freedom of Association go hand-in-hand. Broadly speaking, I think employers have the right to fire someone for any reason, and employees have the right to quit for any reason. An employer should not be forced to retain an employee they consider morally

Scott Graham (@macgraeme42):

For two decades now I've been saying the super-intelligence of the future is going to be artificially enhanced (genetically re-engineered) human brains, likely cybernetically integrated with digital circuitry. Low-power, self-assembling, self-repairing bio-molecular circuitry.

Scott Graham (@macgraeme42):

People do know what decimate means: to kill or destroy a large portion, sometimes all, of something. Words change meaning over time. Most people do not know, or care, that decimate once meant to kill 1 in 10 soldiers of a Roman military unit as collective punishment.

Scott Graham (@macgraeme42):

Given the levels of AI/robotics only years away there's no good reason to risk lives just to put people up there a handful of years earlier. Let robots build habitats and unmanned flights prove safety & reliability. THEN send people up with a 1/1000 or 1/1,000,000 loss rate. In

Scott Graham (@macgraeme42):

Take a lesson from FSD: end-to-end is the way ("thinking" seamlessly integrated with sensorimotor interaction & planning in the 3D real world). Integrating LLM verbal reasoning with Optimus visual/spatial reasoning will make "sensor fusion" look like a cakewalk...

Scott Graham (@macgraeme42):

Smarts is non-negotiable. Vision is non-negotiable. The two together are sufficient for superhuman driving ability, and extremely cost-effective. Extra sensors cannot be used cost-effectively until there is sufficient smarts to use them. LiDAR was a crutch for depth