promptism (@promptissm)'s Twitter Profile
promptism

@promptissm

The Art & Science of Prompts

ID: 1955025597207359488

Joined: 11-08-2025 21:57:02

363 Tweets

64 Followers

26 Following

Ivan | AI | Automation (@aivanlogic):

Holy shit, this changes everything about how we train models.

Stanford just made fine-tuning irrelevant with a single paper.

It’s called Agentic Context Engineering (ACE) and it proves you can make models smarter without touching a single weight.

Instead of retraining, ACE…
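
The tweet is cut off here, but the mechanism it points at (improving a model by editing its context rather than its weights) can be sketched. Everything below, including the `call_llm` helper, is a hypothetical illustration of that loop, not the ACE paper's code:

```python
# Hypothetical sketch of context-based adaptation in the spirit of ACE:
# the weights never change; instead a persistent "playbook" of lessons
# grows from feedback and is prepended to every future prompt.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-completion client here")

playbook: list[str] = []  # evolving context; the only thing that "learns"

def solve(task: str) -> str:
    lessons = "\n".join(f"- {l}" for l in playbook)
    return call_llm(f"Lessons learned so far:\n{lessons}\n\nTask: {task}")

def reflect(task: str, answer: str, feedback: str) -> None:
    lesson = call_llm(
        "State one short, reusable lesson from this attempt.\n"
        f"Task: {task}\nAttempt: {answer}\nFeedback: {feedback}"
    )
    playbook.append(lesson.strip())  # curate context instead of retraining
```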
promptism (@promptissm):

do this:

1. learn how AI works
2. build an LLM
3. automate your tasks

when you do all this... you will have a lot of time to invest. use that time to explore the world, learn new languages, and make new friends.

Jackson Atkins (@jacksonatkinsx):

Microsoft and Georgia Tech gave existing models the ability to decide how to think.

The model brainstorms in latent space and only writes its thoughts when confident.

It makes them up to 6.78x more efficient.

Are we entering the age of latent reasoning?

Here's how it works:
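
The gating idea above can be made concrete with tiny stand-in modules (this is a toy sketch, not the paper's architecture): keep updating a hidden state silently, and only decode a visible step when a learned confidence gate fires.

```python
import torch
import torch.nn as nn

dim = 64
latent_step = nn.GRUCell(dim, dim)   # silent "brainstorming" update
confidence_gate = nn.Linear(dim, 1)  # decides when to verbalize
to_tokens = nn.Linear(dim, 100)      # stand-in for a token decoder

x = torch.randn(1, dim)              # encoded question
h = torch.zeros(1, dim)
written_steps = []

for _ in range(32):
    h = latent_step(x, h)                            # think in latent space
    conf = torch.sigmoid(confidence_gate(h)).item()  # confidence in (0, 1)
    if conf >= 0.8:                                  # only write when confident
        written_steps.append(to_tokens(h).argmax(dim=-1))

# Fewer verbalized steps means fewer generated tokens, which is where an
# efficiency gain like the claimed 6.78x would come from.
```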
promptism (@promptissm):

🚨 Prompt engineering is dead.

Context engineering is the new game.

Anthropic just dropped their internal playbook and it changes everything:

→ Context is finite. Every token depletes attention budget.
→ Models lose focus as context grows (they call it "context rot")
→ Best…
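
The bullets above boil down to treating the window as a budget. A minimal sketch of that discipline (my illustration, not Anthropic's playbook code): count what the history costs, and compact the oldest turns into a summary when it runs over. Token counting here is a crude whitespace heuristic and `summarize` is a hypothetical LLM call.

```python
BUDGET = 4000  # tokens the message history is allowed to occupy

def n_tokens(text: str) -> int:
    return len(text.split())  # rough stand-in for the model's tokenizer

def compact(history: list[str], summarize) -> list[str]:
    # While over budget, peel off the oldest turns and replace them with
    # a short summary so recent tokens keep their share of attention.
    while sum(n_tokens(m) for m in history) > BUDGET and len(history) > 2:
        oldest, history = history[:4], history[4:]
        history.insert(0, summarize("\n".join(oldest)))
    return history
```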
promptism (@promptissm):

🚨 This just broke consumer research.

A new paper shows LLMs can predict purchase intent with 90% accuracy by simulating synthetic consumers.

Here's how it works:

Give an LLM demographic info (age, income, location). Show it a product concept. Ask for its opinion in natural…
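
The recipe above maps directly onto a prompt template: persona from demographics, then the concept, then a purchase-intent question. A hedged sketch, with `call_llm` standing in for any chat API:

```python
def purchase_intent(age: int, income: str, location: str,
                    concept: str, call_llm) -> str:
    prompt = (
        f"You are a {age}-year-old consumer with {income} income "
        f"living in {location}.\n\n"
        f"Product concept: {concept}\n\n"
        "On a scale of 1 (definitely would not buy) to 5 (definitely "
        "would buy), how likely are you to purchase this? Answer in "
        "one sentence, then give the number."
    )
    return call_llm(prompt)

# Averaging many such responses across a demographic sample is what the
# paper reportedly correlates with real purchase-intent surveys.
```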
Alex Prompter (@alex_prompter):

Holy shit. MIT just built an AI that can rewrite its own code to get smarter 🤯

It’s called SEAL (Self-Adapting Language Models).

Instead of humans fine-tuning it, SEAL reads new info, rewrites it in its own words, and runs gradient updates on itself, literally performing…
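
As summarized, the loop is: the model restates new information, then trains on its own restatement. A rough sketch under those assumptions (`generate` and `finetune_step` are hypothetical wrappers, not MIT's code):

```python
def generate(model, prompt: str) -> str:
    raise NotImplementedError("wrap your model's text generation here")

def finetune_step(model, text: str) -> None:
    raise NotImplementedError("one supervised gradient step on `text`")

def self_adapt(model, new_document: str, steps: int = 3):
    # 1. Self-edit: the model rewrites the new info in its own words.
    self_edit = generate(
        model,
        f"Restate the key facts below as concise study notes:\n{new_document}",
    )
    # 2. Gradient updates on its own restatement: the model trains itself.
    for _ in range(steps):
        finetune_step(model, self_edit)
    return model
```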
Robert Youssef (@rryssf_):

I just read this new paper that completely broke my brain 🤯

Researchers figured out how to transfer LoRA adapters between completely different AI models without any training data, and it works better than methods that require massive datasets.

It's called TITOK, and here's the…
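
The thread ends before explaining TITOK itself, so the snippet below is not its algorithm; it only illustrates the mechanics of reshaping a LoRA update, by expanding the low-rank delta and re-factorizing it at a new rank with a truncated SVD (and it assumes the layer shapes match):

```python
import torch

def refactor_lora(A: torch.Tensor, B: torch.Tensor, target_rank: int):
    # A: (r, d_in), B: (d_out, r); the adapter's full update is B @ A.
    delta = B @ A
    U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
    r = target_rank
    B_new = U[:, :r] * S[:r]  # absorb singular values into B
    A_new = Vh[:r, :]
    return A_new, B_new       # best rank-r approximation of the update

A = torch.randn(8, 512)       # toy source adapter, rank 8
B = torch.randn(1024, 8)
A2, B2 = refactor_lora(A, B, target_rank=4)
```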
Jackson Atkins (@jacksonatkinsx):

Meta just found a way to watch an AI's thought process break in real-time.

Their new method cuts error rates by 68% and reduces false positives by 41%. 

This novel Circuit-based Reasoning Verification (CRV) opens new possibilities in reliable AI.

Here's how it works:

- X-Ray: …
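
The explanation is truncated, so the snippet below is not Meta's CRV, just a generic illustration of the family it belongs to: probe the model's internal activations at each reasoning step and flag steps whose hidden state resembles past failures. The probe is untrained and the activations are random here, purely to show the shape of the idea.

```python
import torch
import torch.nn as nn

# A small classifier over per-step hidden activations; in a real system
# these vectors would come from forward hooks on the model.
probe = nn.Sequential(nn.Linear(4096, 256), nn.ReLU(), nn.Linear(256, 1))

step_activations = torch.randn(10, 4096)  # one vector per reasoning step
scores = torch.sigmoid(probe(step_activations)).squeeze(-1)

for i, s in enumerate(scores.tolist()):
    if s > 0.5:  # threshold would be tuned on labeled failure data
        print(f"step {i}: likely reasoning fault (score={s:.2f})")
```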
Jafar Najafov (@jafarnajafov):

A colleague asked me this question recently, and it got me thinking. For over 10 years, we’ve worshipped SEO dashboards and assumed “visibility” = Google. But nowadays, discovery isn’t limited to the SERP. LLMs are becoming gateways for how people encounter and discover brands.

promptism (@promptissm):

🚨 Nano Banana is going mainstream.

Google just announced it’s coming to Search, NotebookLM, and soon Photos.

Since August, Gemini 2.5 Flash’s Nano Banana model has powered 5B+ image generations; now it’s about to supercharge the products you already use.

On Search → Snap or…