Why AI-powered teams need lean methodology, not ceremonies
Let me ask you something.
How many hours did you spend in meetings this week? Sprint planning. Daily standups. Backlog grooming. Sprint review. Retro.
Now ask yourself: did any of those meetings help you ship faster?
- 🗓️ Are your sprints constantly disrupted by changing priorities?
- 📝 Do you spend more time writing Jira tickets than writing code?
- 🔄 Does "Agile" feel anything but agile?
In Part 1 and Part 2 of this series, I showed how Context Engineering—giving AI the right information at the right time—can cut project costs by 98%. But there's a problem: your process can't keep up.
When a Spring Boot migration takes 15 minutes instead of 2-3 days, when AI agents can execute complex workflows autonomously—the 2-week sprint starts to feel... quaint.
Here's the uncomfortable truth:
💡 Scrum has become the very thing the Agile Manifesto rebelled against.
In Parts 1 and 2, we covered the technical side of Context Engineering—AGENTS.md, MCPs, Custom Agents, and Prompts. But here's what I didn't address: what happens to your team's process when AI can complete a sprint's worth of work in an afternoon?
🎯 What the Agile Manifesto Actually Said
Let's go back to 2001. The Agile Manifesto was a rebellion against heavyweight, document-driven development. It valued:
- 👥 Individuals and interactions over processes and tools
- 💻 Working software over comprehensive documentation
- 🤝 Customer collaboration over contract negotiation
- 🔄 Responding to change over following a plan
Notice what's NOT in the Manifesto:
- ❌ Two-week sprints
- ❌ Story points
- ❌ Jira
- ❌ Daily standups
- ❌ Sprint ceremonies
- ❌ Velocity tracking
- ❌ Backlog grooming
The Manifesto was about principles, not processes. Scrum added the process layer—and over two decades, that layer calcified into the very bureaucracy Agile sought to escape.
📜 We replaced waterfall documentation with Jira stories. We replaced status meetings with daily standups. We swapped one bureaucracy for another.
⚡ Why AI Breaks Scrum
The fundamental assumption of Scrum is that software development happens at a predictable, human pace. Sprint planning assumes you can estimate work. Velocity tracking assumes consistency. Two-week iterations assume that's a reasonable feedback loop.
AI development breaks all of these assumptions.
The Speed Mismatch
In my Context Engineering series (Part 1 and Part 2), I documented real results:
| Task | Traditional | With AI |
|---|---|---|
| Spring Boot 4.0 migration | 2-3 days per app | 15 minutes per app |
| Business case document | 2-3 days | 4 hours |
| 10 app migration project | $20,000 | $350 (98% reduction) |
When tasks complete in minutes instead of days:
- 📅 Sprint planning becomes obsolete before the sprint ends
- 🎯 Estimates are meaningless when AI can complete tasks 10x faster than expected
- 📋 The backlog churns faster than you can groom it
Paul Dix calls this The Great Engineering Divergence. He applies Amdahl's Law—a principle from computer performance theory—to software delivery:
📐 "If coding is 20% of the end-to-end cycle, making it 10x faster only yields ~1.25x overall speedup. Once coding speed jumps, everything around it becomes the constraint."
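Dix's figure is easy to verify. A quick sketch of Amdahl's Law, assuming coding is 20% of the cycle (p = 0.20) and gets 10x faster (s = 10):

```bash
# Amdahl's Law: overall_speedup = 1 / ((1 - p) + p / s)
# p = fraction of the cycle that speeds up, s = its speedup factor.
awk 'BEGIN { p = 0.20; s = 10; printf "%.2f\n", 1 / ((1 - p) + p / s) }'
# → 1.22
```

That 1.22x rounds to the ~1.25x in the quote; even as s grows without bound, the hard ceiling is 1 / (1 - p) = 1.25x. No amount of coding speed fixes the other 80%.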
Developers report 20-50% productivity gains from AI, but Dix argues they could achieve 10x, 100x, or more—if their pipelines could handle it. The bottlenecks aren't in the code. They're in code review bandwidth, testing capacity, release confidence, security gates, and sprint ceremonies.
In other words: Scrum is the constraint.
🗣️ What the Industry Leaders Are Saying
I'm not alone in this observation. The developers building with AI every day are reaching the same conclusion.
Geoff Huntley (Creator of the Ralph Wiggum Technique)
"Agile and standups doesn't make sense any more."
"The days of being a Jira ticket monkey are over."
Huntley's Ralph Wiggum technique runs AI in continuous loops:
```bash
while :; do cat PROMPT.md | claude-code ; done
```
This deceptively simple script feeds a prompt file into Claude Code repeatedly—the AI attempts a task, evaluates its own errors, and retries until it succeeds. At ~$10/hour in compute costs, this technique has cloned Atlassian products and created entire programming languages. Humans stay "in the loop" but enter later and less frequently than traditional workflows demand.
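As written, the one-liner never exits. A self-contained sketch of the same retry pattern with a stop condition: the loop ends when a run creates a `DONE` marker file. Both the marker convention and the `run_agent` stub are my own illustration, not part of Huntley's original; in practice `run_agent` would be `cat PROMPT.md | claude-code`, with the prompt instructing the agent to create `DONE` once the task passes.

```bash
# Stub standing in for `cat PROMPT.md | claude-code`; here it "succeeds"
# on the third attempt by creating the DONE marker file.
attempts=0
run_agent() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ] && touch DONE
}

# The Ralph pattern with an exit condition: retry until DONE exists.
while [ ! -f DONE ]; do
  run_agent
done
echo "finished after $attempts attempts"
rm -f DONE
```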
Peter Steinberger (PSPDFKit Founder)
From his Pragmatic Engineer interview:
"Pull requests are dead, long live 'prompt requests.'"
"Code reviews are dead for this workflow—architecture discussions replace them."
"I ship code I don't read."
Steinberger runs 5-10 AI agents simultaneously, maintains flow state throughout development, and deliberately rejects remote CI in favour of local validation. Why wait 10 minutes for a pipeline when agents can validate in seconds?
🔄 The Lean Alternative
So what replaces Scrum? Return to the Manifesto's principles—but with AI-native practices. If you've been following this series, you'll recognise Agentic Context Engineering at work here.
From Ceremonies to Continuous Flow (Agentic Context Engineering in Action)
| Scrum Practice | AI-Native Alternative |
|---|---|
| Sprint planning | Just pull the next card — work completes too fast to batch-plan |
| Daily standups | Real-time observability — PRs, commits, and deploys speak for themselves |
| Jira stories | Specs define intent, Plan mode designs approach, AGENTS.md provides context |
| Story point estimation | Just run it and see (inference-speed feedback) |
| Code reviews | Focus shifts — less syntax, more architecture, security, and prompt review |
| Remote CI (10+ min) | Local validation in seconds — agents run CI via Skills or AGENTS.md |
| 2-week sprints | Continuous delivery |
📋 AGENTS.md: Project Context for AI
In my langchain-anthropic-pdf-support project, I use AGENTS.md to document everything an AI agent needs to work on the codebase.
What AGENTS.md replaces:
- 📚 Confluence documentation
- 🎓 Onboarding wikis
- 🧠 Tribal knowledge
- 📝 Acceptance criteria patterns (the reusable parts)
What AGENTS.md doesn't replace: You still need to capture the task itself—what you're working on right now. That might be a Kanban card, a GitHub issue, or a prompt to the agent. Tools like OpenSpec take this further with spec-driven development—you write specs before code, and AI agents read them to understand intent. The difference is that task definitions become lightweight because all the project context lives in AGENTS.md.
Example structure:
```markdown
## Development Environment
- Run agent: `uv run poe dev`
- FastAPI server: `uv run poe serve`

## Testing Strategy
- Unit tests: `uv run poe test`
- Full CI locally: `uv run poe ci`

## Key Architecture Decisions
- Default model and why
- Caching strategy
- API design patterns

## Code Patterns
- How to add new tools
- Testing conventions

## Important Gotchas
- API key requirements
- Coverage targets (80%)
```
| Scattered Documentation | AGENTS.md |
|---|---|
| Confluence pages with prose | Executable commands |
| Lives in external tools | Lives in the repo, versioned with code |
| Gets stale, disconnected | Updated with every PR |
| Requires human interpretation | Machine-readable |
| Tribal knowledge in people's heads | Codified patterns and gotchas |
💡 An AI agent can read AGENTS.md and immediately know how to run, test, and contribute to the project. Give it a task—via a Kanban card, GitHub issue, or prompt—and it has all the context it needs.
⚡ Local CI: Shift Left to Seconds
Why wait for remote CI when you can validate locally in seconds?
In my projects, I use pre-commit hooks with 9 automated checks, including:
- 🔍 ruff — Linting and formatting
- 🔒 detect-private-key — Security check
- 📝 mypy — Strict type checking
- ✅ check-yaml, check-merge-conflict — File hygiene
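A minimal `.pre-commit-config.yaml` sketch covering those checks (the repo URLs and hook ids are the standard published ones; the `rev` pins are placeholders, so pin to the versions you actually run):

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.8.0            # placeholder pin
    hooks:
      - id: ruff           # linting
      - id: ruff-format    # formatting
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.13.0           # placeholder pin
    hooks:
      - id: mypy
        args: [--strict]   # matches the strict type checking above
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0            # placeholder pin
    hooks:
      - id: detect-private-key
      - id: check-yaml
      - id: check-merge-conflict
```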
The workflow comparison:
Traditional:
Code → Commit → Push → Wait 10min for CI → Fix → Repeat
AI-native:
Code → Agent validates locally → Commit → Push → CI passes first time
The command `uv run poe ci` mirrors exactly what remote CI would do, but runs locally in seconds. Combined with pre-commit hooks, problematic code never leaves your machine.
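One way to wire this up is with poethepoet. A sketch of the `[tool.poe.tasks]` section of `pyproject.toml`, where the task names and tool choices are assumptions based on the commands mentioned in this post (lists define sequence tasks that run each named task in order):

```toml
[tool.poe.tasks]
lint      = "ruff check ."
format    = "ruff format --check ."
typecheck = "mypy --strict ."
test      = "pytest --cov --cov-fail-under=80"

# Sequence tasks: poe runs each listed task in order.
check = ["lint", "format", "typecheck"]
ci    = ["check", "test"]
```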
There are two ways to achieve this with Claude Code:
Approach 1: Hooks. Configure hooks to automatically run linting and formatting after every file change. The agent validates its own work in real-time, catching issues before you even see them.
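A sketch of what that can look like in `.claude/settings.json`, per the Claude Code hooks schema at the time of writing (verify against the current documentation; the `uv run poe check` command assumes a poe task like the ones described in this post exists):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "uv run poe check" }
        ]
      }
    ]
  }
}
```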
Approach 2: Document commands in AGENTS.md. Tell the agent what validation commands exist so it knows how to verify its changes:
```markdown
## Validation
- After editing files: `uv run poe check` (lint + format + typecheck)
- Before committing: `uv run poe ci` (full validation suite)
```
The agent reads this, runs the checks, and self-corrects.
🎬 Making the Transition
You don't have to abandon everything overnight. Start here:
Week 1: Add AGENTS.md
Document your project's development workflow, testing strategy, and architecture decisions. Make it executable—commands, not prose.
Week 2: Set Up Local Validation
Configure pre-commit hooks. Add a `poe ci` or equivalent command that mirrors your remote pipeline. Stop waiting for GitHub Actions.
Week 3: Question Each Ceremony
For every meeting on your calendar, ask: "Does this help us ship faster?" If the answer is "we've always done it this way," that's not a reason.
Week 4: Measure
Track time spent in ceremonies vs. time shipping. The numbers will speak for themselves.
🎯 Key Takeaways
- 1️⃣ The Agile Manifesto never prescribed Scrum — ceremonies were added later
- 2️⃣ AI development happens at inference speed — 2-week sprints can't keep up
- 3️⃣ AGENTS.md replaces scattered documentation — project context lives in the repo, not Confluence
- 4️⃣ Local CI replaces remote waiting — validate in seconds, not minutes
- 5️⃣ Architecture discussions replace code reviews — review prompts, not generated code
- 6️⃣ Context Engineering gave you the tools — now remove the constraints that slow them down
✨ The days of being a Jira ticket monkey are over. The question is: will you adapt—reviewing architecture instead of syntax, shipping continuously instead of in sprints—or will you keep attending standups while the industry moves on?
🔗 Resources
This Series:
- 📖 Context Engineering Part 1 — The four pillars
- 📖 Context Engineering Part 2 — AAIF and real examples
References:
- 📐 Paul Dix: The Great Engineering Divergence — Amdahl's Law applied to delivery
- 📰 Pragmatic Engineer: I Ship Code I Don't Read
- 📰 The Register: Ralph Wiggum Technique
- 📜 The Agile Manifesto — What it actually says
- 🐙 langchain-anthropic-pdf-support — AGENTS.md and pre-commit example