Barry Napier (@barry_a_napier)'s Twitter Profile
Barry Napier

@barry_a_napier

Engineering Manager at Aflac Northern Ireland.

ID: 1360895319718957063

Joined: 14-02-2021 10:15:23

14 Tweets

61 Followers

824 Following

Most people use AI like a code autocomplete. The BMAD Method uses AI like a team. Analyst, PM, Architect, Scrum Master, Developer, QA — each agent has a role, just like in agile.

Context engineering. Instead of dumping everything on one AI, it shards docs (PRDs, architecture, stories) into the exact context each agent needs. Less hallucination, more precision.
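A minimal sketch of what that sharding might look like in code. The role-to-shard mapping, document names, and function are illustrative assumptions, not BMAD's actual implementation:

```python
# Illustrative sketch of BMAD-style context sharding: each agent role
# receives only the document sections relevant to its job, instead of
# the full project dump. The mapping below is an assumption for demo purposes.

ROLE_CONTEXT = {
    "analyst": ["market_research"],
    "pm": ["market_research", "prd"],
    "architect": ["prd", "architecture"],
    "developer": ["architecture", "story"],
    "qa": ["story", "acceptance_criteria"],
}

def shard_context(role: str, docs: dict[str, str]) -> str:
    """Build the prompt context for one agent from only its relevant shards."""
    sections = ROLE_CONTEXT.get(role, [])
    return "\n\n".join(
        f"## {name}\n{docs[name]}" for name in sections if name in docs
    )
```

The developer agent, for example, would see the architecture and its current story, but never the raw market research, which is exactly the "less hallucination, more precision" effect the tweet describes.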

AI loves structure. BMAD takes advantage by building:
• Market research → Analyst
• PRD & user stories → PM
• Architecture & UX → Architect
• Dev tasks → Scrum Master
• Code → Developer
• Testing → QA
That’s agile, automated.

Steve Kaplan used BMAD to build a SaaS app in 4 hours. Not a toy demo — a real MVP with research, docs, code, and tests. That’s the promise: speed + consistency.

The BMAD workflow turns a single PRD into an entire project plan: Epics → Stories → Code → Tests. Each AI agent gets just the right slice of context, like a factory assembly line.
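The assembly line above can be sketched as a chain of stages, each consuming the previous artifact. Here `llm()` is a placeholder for a real model call; the prompts and stage names are assumptions for illustration, not BMAD's actual prompts:

```python
# Hypothetical sketch of the PRD -> Epics -> Stories -> Code -> Tests pipeline.
# Each stage gets only the previous artifact as context, like a factory line.

def llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g. an API client).
    return f"[output for: {prompt[:40]}...]"

def run_pipeline(prd: str) -> dict:
    epics = llm(f"Split this PRD into epics:\n{prd}")
    stories = llm(f"Break these epics into user stories:\n{epics}")
    code = llm(f"Implement this story:\n{stories}")
    tests = llm(f"Write tests for this code:\n{code}")
    return {"epics": epics, "stories": stories, "code": code, "tests": tests}
```

The point of the structure is that no stage ever sees more than the slice it needs, which keeps each prompt small and focused.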

Traditional AI coding: “Write me a login page.” BMAD coding: Analyst defines user needs → PM writes spec → Architect designs flows → Dev builds → QA tests. Less guesswork. More alignment.

BMAD = agile + AI + context.
• Agile for roles & rituals
• AI for speed
• Context engineering for accuracy
The 3 together fix what’s broken about “AI pair programming.”

BMAD isn’t magic. It still needs human validation at checkpoints. But instead of writing 1,000 lines of code, you’re validating artifacts generated in minutes. That’s leverage.

The BMAD method shines in small to medium projects. Enterprise-scale needs adaptation. But for SaaS MVPs, side projects, or rapid prototyping → BMAD is game-changing.

The real insight from BMAD: AI isn’t your intern. AI is your team. If you structure the roles, documents, and context right, the agents work together as one.

Technical decisions compound. A choice made early echoes through every line of code. Yet we often decide too quickly, choosing familiarity over analysis. My framework: Research, Plan, Implement.

New research says AGENTS.md files make coding agents perform WORSE (-3% with LLM-generated, only +4% with developer-written). But Vercel's evals show the opposite — 100% pass rate with an 8KB AGENTS.md docs index vs 53% baseline. Both are right.

AI agents keep building the wrong thing. Not because the models are bad — because the specs are. The bottleneck has shifted from "can we build it?" to "can we describe it precisely enough?" bnapier.dev/writing/the-sp…

Claude now has 1M tokens of context. Do we still need context engineering? Short answer: more than ever. Bigger windows don't solve the "what to include" problem — they make it worse. bnapier.dev/writing/opus-c…
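One way to picture the “what to include” problem: even with a 1M-token window, you still rank candidate chunks and pack a budget rather than dumping everything. This is a simplified sketch; greedy packing and word-count token estimates are assumptions, not how any specific product does it:

```python
# Sketch of context selection under a token budget: rank candidate chunks
# by a relevance score and pack the highest-scoring ones until the budget
# is spent. Token counts are approximated as word counts for illustration.

def pack_context(chunks: list[tuple[float, str]], budget_tokens: int) -> list[str]:
    """Greedily select (score, text) chunks, best-first, within the budget."""
    picked, used = [], 0
    for score, text in sorted(chunks, key=lambda c: c[0], reverse=True):
        cost = len(text.split())  # crude token estimate
        if used + cost <= budget_tokens:
            picked.append(text)
            used += cost
    return picked
```

A bigger window only raises `budget_tokens`; it does nothing about the scoring problem, which is the tweet's point.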

The hardest part of software engineering in 2026 isn't writing code. AI does that. It's writing requirements precise enough that AI builds the RIGHT thing. We've optimised for speed. We forgot about direction. bnapier.dev/writing/requir…

AI writes code. AI reviews code. AI fixes the review comments. So where does the human fit in? Turns out, the review loop is where AI needs you most. bnapier.dev/writing/ai-cod…

I ran Sebastian Scholda for 9 days — 77+ tasks, 15 automated daily checks. It worked. Then Anthropic shipped Claude Code Remote Control and the entire platform layer collapsed into things I already had. Claude Code + cron + bash scripts. Same capabilities. No framework.