I've been using AI heavily in my own development work for the last six months, and I'm no longer sure whether I'm just really hooked on it or actually addicted to it.
I love to create things. That's the whole reason I'm in this industry — the experience of starting with an idea and ending with something real that other people can use. Until recently, that arc was measured in weeks, sometimes months. The thinking happened over coffee. The building happened in cycles. The rest happened naturally between them because the work itself imposed a pace.
AI compressed that arc to hours. I have an idea at lunch and I'm shipping working code by dinner. The dopamine hit of going from "I wonder if this would work" to "this works" used to take a sprint. Now it takes an evening. And it turns out that compression hits all the fun parts of the brain in a way that's hard to step away from.
I find myself up in the wee hours of the night trying to get something just right, when nothing about the project actually demands it. The natural exit ramps in a development cycle, the moments where the work itself made you stop and rest, have largely disappeared.
Six months in, I think we have a problem. And I don't think engineering leaders are talking about it.
I'm Not the Only One
The good news, if you can call it that, is that this is being studied. The bad news is what the research shows.
A BCG study released in early 2026 introduced the term "AI brain fry": mental fatigue from excessive use of, interaction with, or oversight of AI tools. Workers in high-AI-oversight roles expended 14% more mental effort at work and reported 12% more fatigue and 19% more information overload than their non-AI counterparts. They described a "buzzing" feeling, mental fog, and a physical need to step away from the screen. Out-of-hours commits among engineers using AI coding tools rose 19.6%. That number is the one that stopped me when I read it. The wee hours aren't just my problem. They're a measurable industry pattern.
What "Cognitive Load" Doesn't Capture
Most of the discussion about AI and developer wellbeing frames the problem as cognitive load. Reviewing AI-generated code is exhausting. Validating output you didn't write requires a different kind of attention than writing the output yourself. Tool proliferation creates fatigue. All of that is true, and it's worth taking seriously.
But there's a second thing happening that the cognitive-load frame doesn't capture, and it's the thing I've been struggling with personally. The development feedback loop with AI has gotten short enough to be habit-forming in a way previous tools were not. When the gap between "I want to try something" and "I have a working version of it" collapses to minutes, you've engineered a slot-machine reward schedule into the work. Pull the lever, see what comes out, pull the lever again. That's not exhaustion. That's something more like compulsion.
Cognitive overload and dopamine compulsion are different problems. They probably need different responses. And the second one is barely on the management radar.
You Can Be a 10x Developer. You Can't Stay One.
A couple of weeks ago I argued in this space that AI is changing the math on team sizing — that one developer with the right tools and a broad mandate can deliver what used to require five. That's true. I've seen it on my team. I've experienced it personally. The productivity gains are real.
What I didn't address in that post, and what I think I should have, is what running at that pace actually costs.
Sustainable pace has been a core principle of agile since the beginning. The Agile Alliance still defines it the same way it always did: a pace the team could maintain indefinitely. But the practice of sustainable pace was built around human-speed development. It assumed the bottleneck was a person typing code, and the rest periods were the natural valleys between sprints, between features, between the hard problems that took a few days of thought.
AI removes most of those valleys. There's always one more thing you could ship tonight. The work itself doesn't tell you to stop, because the work doesn't end at the boundaries it used to. If you're a developer running at 10x or 20x productivity, you're not running 10x as fast for an hour and then stopping. You're running 10x as fast for as long as you have AI credits and willpower.
That's the trap. The headline number gets celebrated. The cost of staying at the headline number doesn't.
What I'd Want Us to Be Talking About
I don't have a worked-out answer to any of this. I'm in the middle of figuring it out for myself, and the honest version of this post is that I'm writing it partly to make myself confront what I've been seeing in my own behavior and in my team.
Here's what I'm starting to think we should be talking about as engineering leaders, instead of the next AI tool or the next governance framework.
Protected recovery is structural, not motivational. If we accept that AI-driven development hits a reward circuit hard enough to keep developers working into the wee hours, we should accept that the response can't be telling people to "have better work-life balance." That's never worked in any era of software development. Building actual breaks into the cycle — slow weeks after intense ones, no-AI days, mandatory time off after a fast-shipping push — might.
Permission to step away matters more than ever. One of the strongest cultural signals a manager can send is publicly stepping away from the AI for a day, or a week, and saying so. It normalizes the act for everyone else. Without that signal, the implicit message is the opposite: if your boss is shipping at midnight, you should be too.
The 10x conversation needs an honest counterpart. When we celebrate a developer doing in two weeks what would have taken five engineers six months, we should be asking what that developer needs next, not what they can ship next. The pace isn't the win. The pace is a tool that has to be deployed deliberately and recovered from.
We need to use the word out loud. The dopamine loop, the compulsive pull, the difficulty stepping away — these are real, and they're not character flaws. They're the predictable result of building a tool that hits the brain's reward circuit harder than anything we've put in front of developers before. Pretending it's just exhaustion lets us avoid the more uncomfortable conversation.
The Conversation We're Not Having
Engineering leadership has been mostly silent on this, and I include myself in that. We've been busy talking about AI strategy, AI governance, AI productivity gains, where AI fits in the SDLC. We have not been talking about whether the people on our teams are okay, and whether the speed we're so excited about is sustainable for the humans actually delivering it.
I think we need to start. I'm not sure exactly what that conversation looks like yet, but I'm certain it doesn't start with another framework or another tool. It starts with someone — a manager, a senior engineer, a team lead — saying out loud that they're not sure if they're hooked on AI or addicted to it.
So here I am. Saying it.