Here’s something that’s been bugging me. All those “enterprise architecture” patterns we spent years debating? The ones that felt like overkill for most projects?
They’re making a comeback.
Not because we finally have projects complex enough to need them. But because AI coding agents actually benefit from them.
The irony is almost too good. We designed these patterns to communicate with humans—domain experts, future developers, whoever ends up maintaining our code. Now computers are reading our code. And it turns out they need the same things humans do.
Ubiquitous Language Wasn’t Built for LLMs
Domain-Driven Design gave us ubiquitous language. The idea that everyone—developers, product people, domain experts—should use the same vocabulary when talking about the system. No more “users” in code and “customers” in meetings.
The promise was better communication. The reality was… mixed. Getting everyone to actually use the same terms? Harder than writing the code.
But here’s the twist: AI agents speak ubiquitous language fluently.
When your codebase uses terms like “PolicyCheck” and “RegulationClause” instead of generic “validate” and “rule” methods, something clicks. You can prompt an agent with business language—“run a policy check on this application”—and it maps directly to your code. No translation. No ambiguity about which method to call.
The vocabulary you built for domain experts now helps AI understand what you mean.
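Here's a minimal sketch of what that looks like in practice. The names "PolicyCheck" and "RegulationClause" come from the example above; the loan-application domain and everything else in this snippet are hypothetical, just to show how business vocabulary can live directly in the code an agent reads:

```typescript
// Hypothetical loan-underwriting domain, named in the business's own vocabulary.
// "PolicyCheck" and "RegulationClause" are first-class concepts, not generic helpers.

interface LoanApplication {
  applicationId: string;
  amount: number;
  applicantAge: number;
}

interface RegulationClause {
  clauseId: string;
  description: string;
  isViolatedBy(application: LoanApplication): boolean;
}

interface PolicyCheckResult {
  passed: boolean;
  violatedClauses: RegulationClause[];
}

// A prompt like "run a policy check on this application" maps straight to this function.
function runPolicyCheck(
  application: LoanApplication,
  clauses: RegulationClause[],
): PolicyCheckResult {
  const violatedClauses = clauses.filter((clause) => clause.isViolatedBy(application));
  return { passed: violatedClauses.length === 0, violatedClauses };
}
```

There's no clever trick here. The point is that the prompt, the conversation with the domain expert, and the function signature all use the same words.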
I recently watched a team whose AI agent navigated their codebase almost intuitively because they'd invested in good naming. Another team with the same technical chops but generic names? The agent kept stumbling, misinterpreting intent, calling the wrong methods.

Same AI. Different architecture. Night and day results.
Clean Architecture: Now AI-Readable
Remember hexagonal architecture? Ports and adapters? All those debates about where exactly to draw the line between domain and infrastructure?
I’ll be honest—I sometimes wondered if we were overthinking it. For most projects, simpler would work fine. The benefits of strict layer separation felt theoretical.
Not anymore.
AI agents thrive on predictable patterns. When your architecture has clear boundaries—domain logic here, infrastructure there, application orchestration in between—the AI navigates confidently. It knows where to look. It respects the separation. It generates code that fits.
When your architecture is a tangled mess? The AI reflects the mess right back. It mixes database calls into business logic. It generates adapters that leak domain concepts. It makes the spaghetti worse.
Clean architecture’s explicit dependencies and clear boundaries are almost perfect for AI-assisted development. The patterns we designed for humans now work for machines. Same benefit, different reader.
And there’s a practical bonus I didn’t expect: technological independence. When your LLM provider is isolated behind a port, switching from OpenAI to Anthropic to a local model becomes a config change. Your domain logic doesn’t care who’s generating responses.
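A minimal sketch of that port, with the provider details stubbed out. The interface name, the adapter classes, and the `summarizeClaim` function are all hypothetical; real implementations would call whatever vendor SDK you actually use:

```typescript
// Port: the domain and application layers only know about this interface, never a vendor SDK.
interface ResponseGenerator {
  generate(prompt: string): Promise<string>;
}

// Adapters: one concrete implementation per provider, living in the infrastructure layer.
// (SDK calls are stubbed; real signatures depend on the vendor's client library.)
class OpenAiResponseGenerator implements ResponseGenerator {
  async generate(prompt: string): Promise<string> {
    // call the OpenAI client here
    return `openai: ${prompt}`;
  }
}

class LocalModelResponseGenerator implements ResponseGenerator {
  async generate(prompt: string): Promise<string> {
    // call a locally hosted model here
    return `local: ${prompt}`;
  }
}

// Application code depends on the port; swapping providers is a wiring change, not a rewrite.
async function summarizeClaim(claimText: string, llm: ResponseGenerator): Promise<string> {
  return llm.generate(`Summarize this claim for an adjuster:\n${claimText}`);
}
```

The same boundary that keeps your domain clean is the one that keeps you from being welded to a single model vendor.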
CQRS Matters More Than Ever
CQRS—Command Query Responsibility Segregation—always felt like overkill. Separate your reads from writes? Maintain two models? For what, exactly?
For AI, it turns out.
Here’s the thing: commands represent intent. They’re the business decisions. The stuff that actually matters. “ApproveApplication.” “RejectClaim.” “TransferFunds.” These carry meaning. These need human eyes.
Handlers and orchestration? That’s the how. Wire up services, call APIs, update databases. Important, but not where the real risk lives.
This separation lets you divide AI involvement intelligently:
- Commands: Human-reviewed, carefully designed, representing actual business decisions
- Orchestration: AI can write this. It’s plumbing. Let the machine handle it.
You review what matters. You skim what doesn’t. The architecture tells you which is which.
CQRS gives you a structural answer to “what should I actually review carefully?” The commands. Always the commands.
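To make the split concrete, here's a small sketch. "ApproveApplication" comes from the example above; the repository and notification ports, and the handler wiring, are hypothetical plumbing of the kind an agent could draft:

```typescript
// Command: the business decision. Small, explicit, human-reviewed.
interface ApproveApplication {
  kind: "ApproveApplication";
  applicationId: string;
  approvedBy: string;
  approvedAt: Date;
}

// Ports the handler orchestrates against (infrastructure lives behind them).
interface ApplicationRepository {
  markApproved(applicationId: string, approvedBy: string): Promise<void>;
}
interface NotificationService {
  notifyApplicant(applicationId: string, message: string): Promise<void>;
}

// Handler: the plumbing. A good candidate for AI-generated code, spot-checked by a human.
async function handleApproveApplication(
  command: ApproveApplication,
  repo: ApplicationRepository,
  notifications: NotificationService,
): Promise<void> {
  await repo.markApproved(command.applicationId, command.approvedBy);
  await notifications.notifyApplicant(
    command.applicationId,
    "Your application has been approved.",
  );
}
```

The command type is the part worth arguing about in review. The handler is the part you can let the machine write and skim.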
Event Sourcing Is Back
Okay, this one surprised me.
Event sourcing—storing every state change as an immutable event instead of overwriting current state—has always had passionate fans and skeptics. The complexity was real. The benefits were situational.
Then AI started making changes to our codebases. Lots of changes. Fast changes. Changes that looked reasonable but turned out to be subtly wrong.
Suddenly, “what changed, when, and why” became a critical question. And event sourcing answers it by default.
Think of it like Git for your database. Every mutation recorded. You can replay history to understand how you got here. You can time-travel to debug what went wrong. And crucially—you can compensate. Write new events that reverse the effect of old ones.
When AI makes a mistake, you need to undo it. Not just the last change—sometimes an entire sequence of changes that seemed fine individually but broke something together. Event sourcing gives you that capability built in.
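Here's a tiny sketch of that idea, with an invented account domain. The event names, the balance logic, and the compensating event are all assumptions for illustration, not a prescription for how to model your system:

```typescript
// Hypothetical account events. State is never overwritten; we only append.
type AccountEvent =
  | { type: "FundsTransferred"; transferId: string; amount: number; at: Date }
  | { type: "TransferReversed"; reversesTransferId: string; amount: number; at: Date };

// Replaying the log reconstructs current state — and explains how we got there.
function currentBalance(events: AccountEvent[], openingBalance = 0): number {
  let balance = openingBalance;
  for (const event of events) {
    if (event.type === "FundsTransferred") balance -= event.amount; // money left the account
    else balance += event.amount; // a reversal puts it back
  }
  return balance;
}

// Undoing an AI mistake: append a compensating event instead of editing history.
function compensateTransfer(transferId: string, amount: number): AccountEvent {
  return {
    type: "TransferReversed",
    reversesTransferId: transferId,
    amount,
    at: new Date(),
  };
}
```

Nothing gets deleted. The mistake stays in the log, the correction stays in the log, and six months later you can still see both.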
The audit trail isn’t just nice to have anymore. When AI is making decisions, you need to know what it decided and why. For debugging. For compliance. For the inevitable “what happened here?” postmortem.
Modern event stores have gotten fast enough that performance isn’t the blocker it used to be. Millions of events per second, sub-millisecond latency. The complexity cost remains, but the benefit case has shifted.
Layers by Risk
Here’s a frame that’s been useful: organize your code by how dangerous it is.
- Domain layer: Business-critical logic. Must be correct. Bugs here cost money, customers, reputation. Human-reviewed, always.
- Application layer: Orchestration and coordination. Important but not existential. AI can help, human spot-checks.
- Infrastructure layer: Adapters, serialization, boilerplate. Low risk, high tedium. AI can own this.
This is basically clean architecture with a risk lens. But the risk lens is what matters for AI collaboration.
You can’t review everything the AI writes. You don’t have time. So you need architecture that tells you where to focus. Layering by risk gives you the answer: domain layer, every time. Application layer, mostly. Infrastructure layer, only if something breaks.
The architecture becomes a review guide. The structure embodies your priorities.
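One way to make that review guide explicit is to encode it. This is a hypothetical sketch, assuming a conventional `src/domain` / `src/application` / `src/infrastructure` layout; the policy names and the lookup function are invented, but something like it could drive a CI check that flags AI-authored changes in high-risk layers:

```typescript
// Hypothetical review policy: the directory layout encodes how carefully changes get reviewed.
type ReviewPolicy = "human-always" | "human-spot-check" | "ai-owned";

const reviewPolicyByLayer: Record<string, ReviewPolicy> = {
  "src/domain/": "human-always", // business-critical logic: every change gets human eyes
  "src/application/": "human-spot-check", // orchestration: AI drafts, humans sample
  "src/infrastructure/": "ai-owned", // adapters and boilerplate: review only when something breaks
};

// Given a changed file, decide how much human attention it deserves.
function policyFor(filePath: string): ReviewPolicy {
  const layer = Object.keys(reviewPolicyByLayer).find((prefix) =>
    filePath.startsWith(prefix),
  );
  return layer ? reviewPolicyByLayer[layer] : "human-spot-check";
}
```

The specifics matter less than the principle: the folder a file lives in should already tell you how nervous to be about an unreviewed change to it.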
The Ironic Comeback
Let me spell out what’s happened here.
Why we originally built these patterns:
- Ubiquitous language → Talk to business stakeholders
- Clean architecture → Make code maintainable for future devs
- Event sourcing → Audit trail for compliance
- CQRS → Scale reads and writes independently
Why they matter now:
- Ubiquitous language → AI understands domain intent
- Clean architecture → AI navigates and generates confidently
- Event sourcing → Undo AI mistakes, audit AI decisions
- CQRS → Separate what needs human review from what doesn’t
Same patterns. New reasons. And honestly? The new reasons might be more compelling.
We spent years debating whether these patterns were “worth it.” The complexity was real. The human benefits were sometimes marginal. But AI tips the scales. The patterns that help humans understand code help AI understand code—and AI makes the patterns easier to build in the first place.
Good architecture makes AI more effective. Effective AI makes good architecture cheaper to build. Virtuous cycle.
What This Means for You
If you’re starting something new, this is easy: invest in architecture. Use ubiquitous language. Separate your concerns. Consider event sourcing if “what happened” matters.
If you’re maintaining an existing codebase, it’s harder. You can’t rewrite everything. But you can establish patterns in new code. Refactor the most-touched modules toward cleaner boundaries. Start naming things meaningfully, even if you can’t rename everything.
The AI will meet you where you are. But it’ll work better the further you go.
And if you’re skeptical—if you’ve heard architects push these patterns before and rolled your eyes—I get it. I’ve rolled mine too. But the economics changed. The patterns that used to be “nice to have” are becoming load-bearing.
The code you write today will be read by humans and AI tomorrow. Architecture that serves both isn’t over-engineering anymore.
It’s just engineering.