Is Agile, Agile Enough for AI?

A senior developer on my team came back from leave a few months ago, and I walked him through a project I wanted him to take on with AI-assisted development. I told him I'd like to try to get it done in about three months. His first response was that it was impossible.

I told him I thought it was worth a shot and suggested we think through how to break it down. He had the full spec, a strong understanding of what needed to be built, and most of the product context already in his head from years on the team.

Two weeks later, he came back with everything he'd told me couldn't be done in three months. And then some.

A year earlier, that same project would have been staffed with five developers for six months. Part of the gap is what AI does to raw coding speed — a developer with the right tools genuinely produces output much faster than they did a year ago, and the difference is real. But that doesn't account for all twenty-nine developer-months of difference. The rest is coordination — meetings, hand-offs, reviews, status updates, the layered communication structure of getting five people to converge on a single feature.

You collect the first half of that gap, the raw AI productivity gain, regardless of how you organize the work. The second half, the coordination tax, is exactly what your sprint cadence and your ticket-sizing discipline are built around. And in this new world, that machinery isn't just unhelpful; it's friction you're actively paying to preserve.

Which leaves a question that feels almost too cute to ask: is agile, agile enough for AI?

Brooks's Law Just Got Heavier

Fred Brooks wrote The Mythical Man-Month in 1975. The core insight — adding people to a late project makes it later — has held up remarkably well, but the underlying math is the part worth dusting off.

Communication overhead between people on a team grows quadratically. With n people, you have n(n-1)/2 possible channels. Five people, ten channels. Ten people, forty-five. Twenty people, one hundred ninety. That's the cost you pay before anyone has written a line of code, and it compounds across every decision the team has to converge on.
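The channel counts above are just n choose 2, easy to sanity-check in a few lines:

```python
# Pairwise communication channels in a team of n people: n(n-1)/2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 20):
    print(f"{n} people -> {channels(n)} channels")
# 5 people -> 10 channels
# 10 people -> 45 channels
# 20 people -> 190 channels
```

Note the growth rate: doubling the team from ten to twenty more than quadruples the channels.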

AI didn't help with this. If anything, it made it worse. When each developer is now running multiple agent sessions — generating code, drafting documentation, proposing architectural choices — the surface area that needs coordinating explodes. It's not five people aligning. It's five people and their dozens of concurrent workstreams.

The thing AI did help with is the part that scales the other way: an individual developer's throughput. One person with the right context, the right tools, and a broad mandate can now produce what used to require four or five. That gap — coordination cost growing while individual capacity grows even faster — is the whole story of the productivity wins my team has seen. It's also the story Kent Beck has been telling. Beck has argued recently that AI is "accelerating a return to the small-team, customer-proximate, cross-disciplinary practices that extreme programming first described." He's not wrong. The economics have flipped.
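You can make the flipped economics concrete with a back-of-the-envelope model: team output as per-developer velocity minus a per-channel coordination tax. The function and every constant below are illustrative assumptions of mine, not measurements from any team.

```python
# Toy model (my assumption, not a measured relationship): effective output
# is raw per-developer velocity minus a fixed cost per communication channel.
def effective_output(n: int, velocity: float, channel_cost: float) -> float:
    channels = n * (n - 1) // 2  # Brooks's Law pairwise channels
    return n * velocity - channels * channel_cost

# Pre-AI: modest individual velocity, so a five-person team nets out ahead.
team_of_five = effective_output(5, velocity=1.0, channel_cost=0.2)  # 5.0 - 2.0 = 3.0

# Post-AI: one developer's velocity multiplies, and with zero channels
# the solo developer out-produces the coordinated team.
solo_with_ai = effective_output(1, velocity=4.0, channel_cost=0.2)  # 4.0 - 0.0 = 4.0
```

The exact numbers don't matter; the shape does. The tax term grows quadratically with n while the capacity term grows linearly, so any multiplier on individual velocity shifts the optimal team size downward.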

What the Sprint Was Actually For

Two-week sprints didn't appear from nowhere. They were a load-bearing answer to a real problem.

Before agile, software was planned in months and delivered in years. Requirements drifted. Teams went dark for long stretches and came back with the wrong thing. The sprint cadence solved this by forcing forward motion in small, visible increments. The ticket discipline — atomic stories, clear acceptance criteria — solved a related problem: how do you keep five developers from stepping on each other when none of them can hold the whole system in their head?

Both of those answers were good. But both depend on assumptions that don't hold anymore.

The first assumption was that humans are slow at writing code. They are, sort of, until you give them an AI agent that drafts the boilerplate and most of the structure in minutes. The second assumption was that no single developer could hold enough context to deliver a meaningful slice of the product alone. That's also not true now. AI tools dramatically increase the working context a single person can effectively hold — not by making them smarter, but by acting as a tireless second brain that remembers the architecture, the patterns, and the next three steps.

When the assumptions change, the discipline that was downstream of those assumptions becomes the cost. Atomic tickets, sprint reviews, hand-offs between developers — these were answers to a coordination problem. Apply them to a project that doesn't have that coordination problem, and you've reintroduced the friction agile was designed to remove.

What's Replacing It

The interesting thing isn't that agile is dying. It isn't. The principles — short feedback loops, working software over documentation, responding to change — are arguably more important than they've ever been.

What's changing is the practice. The cycle length is getting longer because the unit of meaningful delivery has gotten bigger. Basecamp's Shape Up methodology — six-week fixed-time, flexible-scope cycles with explicit cool-down periods between them — has been getting renewed attention precisely because it fits this new reality better than two-week sprints do.

The pattern I'm watching is the consolidation of work. Where I used to break a project into ten tickets and hand them to four developers, I'm increasingly trying to give one developer a whole feature, a clear north star, the full spec, and the autonomy to deliver. The ceremony around them shrinks because there are fewer people whose work has to converge.

What This Means for Sizing

If you accept the argument, the operational consequences are uncomfortable.

A two-week sprint with everything broken down into half-day tickets is structurally incompatible with what the developer is actually doing now. The breakdown work is no longer pre-coordination — it's post-hoc reporting. You're imposing a granularity on the work that doesn't reflect how the work is being done. That overhead used to be cheap because it was paid by humans whose throughput was the bottleneck. With AI tools shifting the bottleneck, the overhead has gotten expensive in a way that doesn't show up on a dashboard.

The sizing rule I'm running with now is closer to this: hand a developer a meaningful chunk of product, give them weeks not days, and check in around outcomes rather than tickets. Track progress through demos and working software, not story points and burn-down charts. Resist the instinct to "help" by breaking the work down further. Most of the time, the breakdown itself is the friction you're paying.

It also changes what team size looks like. Five people on a team is now too many for most of what I'd hand them. Even two developers on the same single-page application step on each other on a regular basis — not because they aren't communicating, but because the per-developer velocity AI provides means changes are landing faster than the merge-and-align loop can keep up with. The Brooks's Law math hasn't changed; it's just become binding at much smaller numbers.

This doesn't scale to every project. Some work genuinely requires multiple specialists in close coordination. Some teams haven't built the trust or autonomy that makes broad mandates safe. Some codebases are too entangled for one developer to safely own a vertical slice without stepping on someone else's feature. None of that invalidates the argument; it just bounds it.

The Risks Worth Keeping in View

I want to be careful here. I'm not arguing for a return to lone-wolf development or for treating the sprint cadence as an obstacle to be swept aside.

The human-in-the-loop bottleneck is real. AI can produce ten thousand lines of code overnight; a thoughtful reviewer can responsibly evaluate maybe a thousand. If the consequence of broader mandates is that developers ship code they don't fully understand, you've solved one productivity problem by creating a much worse comprehension problem. The accountability discipline matters more here, not less.

There's also a real risk that not every developer is ready for a broad mandate. The senior developer who pulled this off had years of product context and the architectural instincts to make good decisions in flight. A junior developer with the same mandate would have produced a working artifact that nobody could maintain. The pipeline question — how do you develop developers who can hold this kind of mandate — connects directly to the talent feeder argument I made in the last post. Fewer roles and broader mandates is a much harder operating model to staff if you haven't been deliberately growing engineers who can handle it.

The Hard Part

The hardest part of all of this is letting go of practices that were genuinely good answers to a real problem. Two-week sprints worked. Atomic tickets worked. The reason they worked is that the coordination cost they imposed was less than the coordination cost they prevented. AI tools just changed that math.

A year from now, I think most engineering teams will have meaningfully larger work units, smaller staffing, and looser ceremony than they have today. Not because agile failed, but because the problem agile was solving has changed shape underneath us. The principles are still right. The mechanics need an overhaul.

And the next time a senior developer tells you something is impossible in three months — consider handing them the whole spec, pointing them at AI tools, and asking them to give it a couple of weeks before you believe them.