AI in Practice

March 14, 2026

AI can generate code quickly. That part is no longer interesting. What matters is what happens when you try to build a real system with it. Not a prototype. Not an isolated feature. Something with clear boundaries, real constraints, and expectations of correctness.

The goal was not to prove AI could write code. The goal was to see how it behaves when treated like a development team.

AI is not a shortcut around engineering.
It is a mirror of how you approach it.


Speed Is Not the Constraint

The most immediate realization is that AI does not struggle with implementation. Given clear direction, it produces large amounts of working code quickly. Services, data models, interfaces, and integration points appear at a pace that would be unrealistic for a single engineer working alone.

Speed is not the limiting factor. Clarity is.

When direction is precise, output aligns. When it is not, output diverges. AI fills gaps with reasonable assumptions, but those assumptions are not coordinated across the system. Each piece is generated in isolation, against whatever context was present in that moment. The gaps between pieces are where the problems live.


Consistency Does Not Emerge on Its Own

Early in the process it becomes obvious that AI will not enforce consistency unless directed to do so. It generates similar solutions in slightly different ways. Naming drifts. Patterns vary. Boundaries blur. Individually, each piece works. Together, they do not form a coherent system.

This forces a shift in how development is approached. The primary work is no longer writing implementation. It is directing it. Defining structure, setting constraints, correcting deviations, and reinforcing consistency across every unit of output. The role of the engineer moves up a level, and that is where most of the value is created.

Without that direction, AI produces output that looks complete but behaves inconsistently under real conditions. With it, AI becomes an effective extension of the engineer rather than a source of compounding drift.
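Drift of this kind can be caught mechanically rather than by eye. A minimal sketch, assuming a hypothetical naming convention (the prefixes and function names below are illustrative, not from any real project), that scans generated code for functions that break it:

```python
import ast

# Hypothetical convention: every public service function starts with one
# of these verbs, so the same operation gets the same name everywhere.
ALLOWED_PREFIXES = ("get_", "create_", "update_", "delete_")

def find_naming_drift(source: str) -> list[str]:
    """Return names of top-level public functions that break the convention."""
    tree = ast.parse(source)
    drifted = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and not node.name.startswith("_"):
            if not node.name.startswith(ALLOWED_PREFIXES):
                drifted.append(node.name)
    return drifted

# Two generated pieces that each work alone, but name the same kind of
# operation differently: one follows the convention, one drifts.
generated = """
def get_user(user_id): ...
def fetch_order(order_id): ...
"""
print(find_naming_drift(generated))  # ['fetch_order']
```

The check itself is trivial. The point is that the convention lives in an enforced artifact, not in anyone's memory, including the model's.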


Context Does Not Persist

There is a practical limit that becomes clear quickly. AI does not retain system-wide context reliably over time. It does not remember every decision. It does not enforce previously established patterns unless they are explicitly restated or encoded into the workflow.

This means consistency must be externalized. Architecture must be documented. Patterns must be defined explicitly. Constraints must be repeated or enforced at the boundary of every new unit of work. This is not a flaw in the tool. It is a characteristic of how it operates. Treating AI as though it understands the system leads to subtle errors that compound quietly over time. Treating it as a capable but stateless contributor produces better results.
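One way to externalize that context is to restate it mechanically at the boundary of every task. A hypothetical sketch (the constraint text and function name are invented for illustration): the architecture rules live in a file, and every new unit of work gets them prepended, because the model will not carry them forward on its own.

```python
# Hypothetical externalized constraints: these live in version control,
# not in the model's memory.
ARCHITECTURE_CONSTRAINTS = """\
- All persistence goes through the repository layer; no raw SQL in services.
- Errors cross service boundaries as DomainError, never as bare exceptions.
- DTOs are immutable dataclasses.
"""

def build_task_prompt(task: str, constraints: str = ARCHITECTURE_CONSTRAINTS) -> str:
    """Restate the constraints at the boundary of every unit of work,
    treating the model as a capable but stateless contributor."""
    return f"Constraints (non-negotiable):\n{constraints}\nTask:\n{task}"

prompt = build_task_prompt("Add an endpoint to archive a user.")
print(prompt.splitlines()[0])  # Constraints (non-negotiable):
```

Whether the constraints are injected into a prompt, a system message, or a repository-level instructions file is an implementation detail; what matters is that they are repeated every time rather than assumed to persist.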


Plausible Is Not the Same as Correct

AI will confidently generate incorrect solutions when constraints are unclear. Not obviously broken solutions. Plausible ones. Solutions that pass review, compile cleanly, and appear to work until they meet a condition the prompt did not anticipate.

This is where experience matters in a way that cannot be delegated. You have to know when something is wrong even when it looks right. AI will not tell you when a boundary is misplaced. It will not challenge an assumption unless explicitly asked. It will not protect the system from a poor decision. It will execute one.
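A toy illustration of the failure mode, hypothetical rather than taken from the project described here. The function below is the kind of plausible output that compiles cleanly, passes the obvious test, and fails only at a condition the prompt never mentioned:

```python
def batch(items, size):
    """Plausible generated code: split items into fixed-size batches."""
    # The stride is right, but the range bound quietly excludes any
    # final partial batch.
    return [items[i:i + size] for i in range(0, len(items) - size + 1, size)]

print(batch([1, 2, 3, 4], 2))     # [[1, 2], [3, 4]] -- looks correct
print(batch([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]] -- silently drops 5

def batch_fixed(items, size):
    """The correct version keeps the trailing partial batch."""
    return [items[i:i + size] for i in range(0, len(items), size)]

print(batch_fixed([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

Nothing about the broken version looks wrong in review. The error only surfaces when someone asks what happens to the remainder, which is exactly the kind of question the model will not ask on its own.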


What This Means in Practice

Over time, a stable pattern emerges. AI works best when architecture is defined before implementation begins, constraints are explicit and repeated, patterns are enforced consistently across the codebase, and output is reviewed with intent rather than assumed correct. Under those conditions, development becomes faster without losing coherence. Without them, it becomes faster and less predictable. That is a worse outcome than moving slowly with clarity.

The experiment makes one thing clear. AI is not replacing engineering. It is raising the bar for it. When implementation becomes cheap, the cost of poor decisions increases. Systems can be built faster, but they can also become unmaintainable faster. That is the trade.

AI changes how software is built. It does not change what makes software good. Clarity still matters. Structure still matters. Judgment still matters. If anything, they matter more now than they did before, because the consequences of their absence arrive sooner.


This reflects how I approach building systems at XRiley. If you are solving real problems and need clarity in how to design, scale, or stabilize your software, that is the work I do.