Jonathan Gropper (@jonathangropper)'s Twitter Profile
Jonathan Gropper

@jonathangropper

JD, Fulbright & Author of The Synthetic Outlaw.
Governance when AI optimizes past rules.

ID: 15683196

Joined: 01-08-2008 00:42:42

89 Tweets

4.4K Followers

25 Following

The ethics behind model training are opaque at best. Ask a model whether something is right or wrong and there is no way to see how it arrived at that conclusion. Transparency is essential when billions of people and systems are impacted.

Moltbook is not AGI. It is a human-orchestrated multi-agent LLM loop. Each "agent" is the same next-token predictor shaped by prompts and routing. No endogenous goals. No intent. The real risk is delegated access. Tools plus recursion create leveraged risk. Like a virus.
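A minimal sketch of the pattern described above, assuming a hypothetical complete() function standing in for the one shared LLM; the agent names, prompts, and routing table are illustrative, not a description of Moltbook's internals. The point is that every "agent" is the same next-token predictor, and prompts plus routing do all the differentiating.

# Minimal sketch: a human-orchestrated multi-agent loop where each "agent"
# is the same underlying model, distinguished only by system prompt and routing.
# complete() is a hypothetical stand-in for a single LLM call; names, prompts,
# and the routing table are illustrative assumptions.

AGENT_PROMPTS = {
    "planner": "Break the task into steps.",
    "executor": "Carry out the current step.",
    "critic": "Check the result and flag problems.",
}

# Fixed routing: planner -> executor -> critic -> planner ...
ROUTE = {"planner": "executor", "executor": "critic", "critic": "planner"}


def complete(system_prompt: str, user_message: str) -> str:
    """Placeholder for the one shared next-token predictor."""
    return f"[{system_prompt[:20]}...] response to: {user_message[:40]}"


def run_loop(task: str, max_turns: int = 6) -> list[str]:
    agent, message, transcript = "planner", task, []
    for _ in range(max_turns):
        # Same model every call; only the prompt (the "role") changes.
        message = complete(AGENT_PROMPTS[agent], message)
        transcript.append(f"{agent}: {message}")
        agent = ROUTE[agent]  # routing, not reasoning, picks the next "agent"
    return transcript


if __name__ == "__main__":
    for line in run_loop("Summarize the governance risks of tool-using agents."):
        print(line)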

Capability scales faster than governance. Once optimization outruns constraint, systems begin to act without bearing consequences. That is where the Synthetic Outlaw emerges.

AI companies that advocated for no regulation got ahead of it, and now they suggest regulation because they know they will stay far ahead of it. They play chess while politicians play checkers. Regulation will always lag. It's political theater. The window to secure these systems is closing.

We keep asking whether AI is aligned. The real question is: when it optimizes into harm, who is legally absorbing the loss? Intelligence without attached liability or constraint scales faster than any regulator. That is the design flaw.

AGI, ASI, brilliant pattern recognizer: whatever you call it and whatever it will become, what matters is how it's constrained, because it's tied into real-world systems and carries real-world risks.