It was 3:47 AM on a Tuesday and I was still building.
Not because I had a deadline. Not because someone was waiting. Because ServetIA — the citizen services tool I was prototyping for Spain's Office of the Prime Minister — had just almost worked, and the distance between almost and done felt like one more prompt. One more Accept All. One more iteration.
I had started at 10 PM, after my daughter went to sleep. Just a quick test. Five hours later, my terminal had 14 Codex and Claude sessions' worth of conversation, the app had a feature I hadn't planned, and I couldn't tell you which lines of code I'd written and which the model had. I wasn't coding. I was vibing. And it felt incredible.
Until the next morning, when it didn't.
The New DDD
Software engineering loves its three-letter acronyms. We've had TDD (Test-Driven Development), BDD (Behavior-Driven Development), and the venerable DDD — Domain-Driven Design, Eric Evans' 2003 masterpiece about aligning code architecture with business reality.
I want to introduce a new DDD. One that nobody designed. One that emerged, bottom-up, from the collision of large language models with human neurochemistry:
Dopamine-Driven Development.
It's the development methodology you didn't choose but are already practicing. Every time you prompt, get a working result, feel the rush, and prompt again. Every time the 20-minute task becomes a 20-hour session because the feedback loop is so tight, so rewarding, so frictionless that your prefrontal cortex — the part of your brain responsible for saying "enough, go to bed" — simply loses the argument against the nucleus accumbens, the part that says more.
If you've built anything with Codex, Cursor, Claude Code, Windsurf, or Copilot in the last year, you know exactly what I'm talking about. And if the phrase "I was up until 2 AM prompting" sounds familiar, you're not alone. The team at Pydantic — the people who literally build the tools we use to make LLM software more reliable — feel it too.
The Neuroscience of the Tight Loop
Here's what's happening in your brain, and it's not metaphorical.
Dopamine is often mischaracterized as the "pleasure chemical." It's more accurately the anticipation chemical — the neurotransmitter that fires not when you receive the reward, but when you expect one. Wolfram Schultz's foundational work on dopamine prediction errors, published in the late 1990s, established that dopamine neurons fire most intensely in response to unexpected rewards. When a reward arrives predictably, the response flatlines. When it arrives unpredictably — sometimes brilliant, sometimes garbage — the system goes into overdrive.
Sound familiar? It should. It's the exact architecture of an LLM-assisted coding session.
You prompt. Sometimes you get a working app. Sometimes you get hallucinated imports and invented APIs. You never know which. This is, neurologically, the same variable-ratio reinforcement schedule that drives slot machines, social media feeds, and — as B.F. Skinner demonstrated decades ago — the most addiction-prone behavioral patterns in mammals.
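The prediction-error dynamic fits in a few lines of code. Below is a toy Rescorla-Wagner learner — the update rule, learning rate, and reward probabilities are illustrative assumptions, not a reconstruction of Schultz's actual experiments. When reward arrives every time, the prediction error decays toward zero. When it arrives half the time, slot-machine style, the error never settles:

```python
import random

def mean_abs_prediction_error(reward_prob, trials=10_000, alpha=0.1, seed=0):
    """Toy Rescorla-Wagner learner: V estimates the expected reward,
    and delta = r - V plays the role of the dopamine prediction error."""
    rng = random.Random(seed)
    v, errors = 0.0, []
    for _ in range(trials):
        r = 1.0 if rng.random() < reward_prob else 0.0
        delta = r - v
        errors.append(abs(delta))
        v += alpha * delta  # nudge the expectation toward what happened
    # average after a burn-in period, once V has had time to converge
    tail = errors[1000:]
    return sum(tail) / len(tail)

# Predictable reward: the error flatlines.
# Variable-ratio reward: the error never settles.
print(f"always rewarded: {mean_abs_prediction_error(1.0):.3f}")
print(f"rewarded 50%:    {mean_abs_prediction_error(0.5):.3f}")
```

The predictable schedule drives the error to essentially zero; the 50% schedule keeps it hovering near 0.5 forever. That permanent gap between expectation and outcome is what the loop keeps firing on.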
The old coding loop was different. You wrestled with a bug for hours. You questioned your life choices at 2 AM. Then you fixed it, and the dopamine hit was singular, massive, earned. The struggle was the price; the solution was the reward. That loop had natural braking mechanisms built in: frustration, confusion, the physical fatigue of thinking hard. These weren't bugs in the process. They were features. They were the friction that told your brain: time to stop, rest, synthesize.
LLM-assisted development removed the friction without removing the drive.
What the Research Says
The anecdotal evidence is everywhere. But the rigorous evidence has arrived too.
In February 2026, UC Berkeley Haas researchers Aruna Ranganathan and Xingqi Maggie Ye published their findings in Harvard Business Review from an eight-month ethnographic study of 200 employees at a U.S. technology company. Their conclusion was counterintuitive and devastating: AI doesn't reduce work. It intensifies it.
They found three mechanisms of intensification. First, scope expansion: people began taking on tasks that would previously have belonged to someone else, because AI made starting them feel trivially easy. Second, temporal bleed: work seeped into moments that used to function as pauses — prompting during lunch, before meetings, in the evening when an idea came to mind. The natural stopping points in the workday dissolved. Third, thread proliferation: workers ran multiple AI processes simultaneously, keeping numerous tasks alive at once, context-switching continuously between their own thinking and AI outputs.
What surprised me most was the contrast between how people described their moment-to-moment engagement and how they described their overall experience. In micro-moments, people talked about momentum and expanded capability. But when they stepped back, a different tone emerged. They described feeling busier, more stretched, or less able to fully disconnect.
— Xingqi Maggie Ye, UC Berkeley Haas, HBR, February 2026
Meanwhile, ActivTrak's analysis of over 164,000 workers found that after AI adoption, time spent on email and messaging more than doubled and business software usage surged by 94%, while focused, uninterrupted work time fell by 9%. The capacity AI freed up didn't go to rest or reflection. It got immediately repurposed into more work.
This is the paradox of Dopamine-Driven Development. It feels like empowerment while it erodes sustainability.
The Human-in-the-Loop Is Tired
Laura Summers at Pydantic wrote what I consider the essential text on this moment. In her February 2026 essay "The Human-in-the-Loop Is Tired," she described a peculiar new form of fatigue: the fatigue of supervision.
Her colleague Douwe — who maintains the Pydantic AI framework — described waking up to thirty pull requests every morning, each generated overnight by someone's AI assistant, needing snap judgment calls on every one. The temptation to delegate the review itself to another AI was enormous. But as he put it: "at that point, what am I still doing here?"
Summers named the core problem: the human reward function problem. Writing code by hand was never easy, but it was full of small rewards — solving a problem in your head, understanding a gnarly bit of logic, watching the code compile, the feeling of control. LLM-assisted programming automated much of the work that generated those dopamine hits and replaced it with the cognitive load of review and supervision. The satisfying part shrank. The exhausting part grew.
And then she named the thing nobody was saying: it's lonely. Programming with an LLM is an intensely solitary activity. You and the machine, going back and forth. The natural moments where you'd turn to a colleague — to rubber-duck a problem, to share the small victory of something clicking — get quietly replaced by another prompt.
This describes my own experience building CiudadanIA, the citizen correspondence system that runs on locally fine-tuned Qwen3-4B for privacy-by-design. I spent entire weekends iterating on system prompts — prompting, testing, adjusting, prompting again — in a loop so tight that by Sunday evening I'd done genuinely impressive work but couldn't remember eating lunch. The number of things I could start had dramatically increased. The number of things I could thoughtfully finish hadn't changed at all.
The Cambrian Explosion of Co-Creation
But here's where I refuse to write the doomer essay.
Because the thing that kept me up until 3:47 AM? It worked. ServetIA works. CiudadanIA processes citizen correspondence in production. PromptAventura gamified AI literacy for government employees. GlobalAlert tracks multilingual misinformation in real time. In one year at PresidencIA, a team that would be considered microscopic by any government standard shipped applications that would have taken departments of fifty people and eighteen months of procurement cycles.
This is the other side of the DDD coin, and it's real. We're living through what can only be called a Cambrian explosion of co-creation between humans and AI systems. The biological Cambrian explosion — 541 million years ago — was triggered by a convergence of environmental conditions that suddenly made rapid diversification not just possible but inevitable. Eyes, limbs, and complex nervous systems emerged in parallel across multiple lineages in a geological eyeblink.
The same convergence is happening now. LLMs sophisticated enough to generate working code. Context windows large enough to hold an entire project. Agent frameworks that can execute multi-step plans. Voice interfaces that let you talk to your IDE. All arriving simultaneously.
Y Combinator reported that 25% of startups in their Winter 2025 batch had codebases 95% generated by AI. Karpathy coined "vibe coding" in February 2025, watched it become Collins Dictionary's Word of the Year by November, then declared it passé by February 2026 — replaced by "agentic engineering." In twelve months, the entire concept went from tweet to dictionary to obsolescence. That's not a hype cycle. That's a phase transition.
At Saturdays.AI, I've watched people with no formal programming background build functional AI applications in weekend workshops. Things that would have required a team of engineers five years ago. The creative democratization is genuine. The dopamine isn't lying to you about the magnitude of the moment. The magnitude is real.
The Dual Nature of DDD
DDD as superpower. You're in flow. The AI extends your cognitive reach. Ideas materialize faster than you can test them. You're building the thing you imagined, and the gap between vision and artifact is narrowing with every prompt. This is what Summers' colleague meant by "dipping your hands into the fabric of the universe." It's real. It produces real things.
DDD as Skinner box. You've lost awareness of time. You're prompting not because you have clear intent but because stopping feels harder than continuing. The "one more iteration" has lost its connection to any specific goal. You're not in flow — you're in a rut that feels like flow. Your judgment has degraded but you can't tell because the machine keeps producing confident output.
The terrifying thing is that the superpower and the Skinner box feel identical from the inside. The slot machine doesn't announce when fun becomes compulsion.
Toward an AI Practice
The Berkeley researchers proposed something I find compelling: not AI use, but an AI practice — a deliberate set of rhythms and boundaries around AI-enabled work. The word "practice" is chosen carefully. Like a meditation practice or a medical practice, it implies intentionality, discipline, ongoing calibration.
Here's what an AI practice looks like for me, refined through a year of building with AI at government scale:
After 90 minutes of AI-assisted building — roughly aligned with the human ultradian rhythm — I stop. Not pause. Stop. Walk away from the screen. The hardest part isn't setting the timer. It's obeying it when you're in superpower mode and the thing is almost working.
Before accepting a block of AI-generated code, I explain out loud what it does and why. If I can't, I don't accept it. Simon Willison's distinction still holds: if you reviewed it, tested it, and can explain it, that's software development. If you can't, that's vibe coding.
Before a complex session, I run a premortem: in a separate session, I ask a fresh model to assume the plan has catastrophically failed, then diagnose why. It catches specification gaps I miss after hours deep in the details.
At least once a day, I discuss what I'm building with a human. Not for approval. For contact. Because the most insidious effect of DDD is isolation: you and the machine in a feedback loop so tight you forget other humans exist.
Every Friday: what did I start vs. finish? Thread proliferation feels like productivity. It's not. Productivity is finished work that meets a standard.
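The Friday review is the one practice on this list that reduces to something you can count. Here's a hypothetical sketch of it — the `WeeklyLedger` name and API are mine for illustration, not any real tool:

```python
from dataclasses import dataclass, field

@dataclass
class WeeklyLedger:
    """Hypothetical started-vs-finished ledger for the Friday review."""
    started: set = field(default_factory=set)
    finished: set = field(default_factory=set)

    def start(self, task: str) -> None:
        self.started.add(task)

    def finish(self, task: str) -> None:
        # finishing something you never logged still counts as started
        self.started.add(task)
        self.finished.add(task)

    def open_threads(self) -> set:
        """Threads that felt like productivity but never closed."""
        return self.started - self.finished

week = WeeklyLedger()
for t in ("ServetIA auth", "prompt eval harness", "misinfo dashboard"):
    week.start(t)
week.finish("ServetIA auth")
print(sorted(week.open_threads()))  # → ['misinfo dashboard', 'prompt eval harness']
```

The point of the exercise isn't the tooling. It's that `open_threads` is the number thread proliferation hides from you: on a good week it shrinks; on a DDD week it only grows.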
The Builder's Dilemma in the AGI Era
There's a deeper philosophical tension I don't want to paper over.
If you believe — as I do, as the trajectories suggest — that we're in the early stages of something that may produce artificial general intelligence within years rather than decades, then the imperative to build is not just professional ambition. It's a civilizational necessity. Someone has to build the verification infrastructure, the safety frameworks, the institutional capacity to govern systems that are getting smarter every quarter.
That's the dilemma. The same urgency that makes this work important also makes it addictive. The feeling that you're running out of time to shape the future activates the same dopaminergic circuits as the coding loop itself. Momentum and anxiety become indistinguishable. And the AI, ever helpful, ever available, never tired, never judging, is right there with another suggestion, another iteration, another 2 AM session disguised as purpose.
I don't have a resolution for this tension. What I have is the conviction that naming it matters. That recognizing Dopamine-Driven Development as a pattern — not a personal failure, not a character flaw — is the first step toward managing it.
The problem is not that builders suddenly became undisciplined. The problem is that the loop became exquisitely optimized for momentum and almost indifferent to stopping. Once you name that pattern, you can build practices that protect judgment, sleep, and finishability.
The machine doesn't need sleep. You do.
The machine doesn't need meaning. You do.
The machine will always whisper one more prompt.
The question is whether you've built the practice to know when the answer is not tonight.
It's 11:42 PM as I finish this article. Codex is suggesting edits. The loop wants to continue. I'm closing the laptop.
If you recognized yourself in this piece, that's the point. The Cambrian explosion needs builders who last.
References
- Ranganathan, A. & Ye, X.M. (2026). "AI Doesn't Reduce Work — It Intensifies It." Harvard Business Review, February 2026. UC Berkeley Haas School of Business.
- Summers, L. (2026). "The Human-in-the-Loop Is Tired." Pydantic, February 2026.
- Karpathy, A. (2025). Original "vibe coding" post. X, February 2, 2025. Later named Collins Dictionary Word of the Year 2025.
- Karpathy, A. (2026). "Agentic Engineering" post. X, February 2026.
- Schultz, W., Dayan, P., & Montague, P.R. (1997). "A Neural Substrate of Prediction and Reward." Science, 275(5306), 1593-1599.
- Willison, S. (2025). "Not all AI-assisted programming is vibe coding." March 2025.
- ActivTrak (2026). Study of 164,000 workers on AI adoption and work patterns.
- Guaali, B. (2025). "The Dopamine Shift in Engineering." Medium.
- Catalini, C., Hui, X., Wu, J. (2026). "Some Simple Economics of AGI." MIT Sloan / SSRN.
- Evans, E. (2003). Domain-Driven Design. Addison-Wesley.