Why architecture review cannot wait for later cleanup
Teams often accept AI-generated structure on the assumption that they can clean it up later. That sounds practical, but generated architecture spreads quickly. Once the same boundary mistake exists in six files, cleanup becomes a refactor instead of a review comment.
The cost is not only extra code. It is the silent normalization of weak ownership, fuzzy service boundaries, and helper layers that exist only because the generator needed somewhere to put things.
The review lens that matters most
Start with ownership. Ask which layer owns data fetching, which owns mutation, and which owns presentational composition. If the answer is blurry, the architecture is already drifting.
- Check whether server and client responsibilities are still explicit.
- Check whether business rules live in reusable service boundaries instead of UI patch code.
- Check whether the component tree exposes one source of truth or several mirrored copies.
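One way to make the second check concrete is to keep the business rule behind a service function and let the UI call it, instead of re-deriving the rule inline. The names below (`quoteTotal`, `renderCartSummary`, the discount threshold) are hypothetical illustrations, not taken from any particular codebase:

```typescript
// Service layer (hypothetical): owns the business rule.
// Any surface that needs a total calls this one function.
function quoteTotal(unitPrice: number, qty: number): number {
  // Business rule: 10% discount at 5+ units. One source of truth.
  const subtotal = unitPrice * qty;
  return qty >= 5 ? subtotal * 0.9 : subtotal;
}

// UI layer (hypothetical): owns presentation only.
// It never re-implements the discount as patch code.
function renderCartSummary(unitPrice: number, qty: number): string {
  return `Total: $${quoteTotal(unitPrice, qty).toFixed(2)}`;
}
```

The point is the direction of the dependency: when the discount rule changes, one service function changes, and every rendering surface picks it up for free.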
Signals the draft is getting expensive
Be wary of wrapper-heavy trees, utility files with vague names, and helper functions that only one component will ever call. Those patterns often indicate generated scaffolding rather than intentional architecture.
Another signal is when the team needs comments to explain basic ownership. Good boundaries reduce explanation overhead instead of increasing it.
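A minimal sketch of the single-caller-helper smell, with hypothetical names. The "before" shape is what generated scaffolding often produces; the "after" shape is the intentional version:

```typescript
// Smell (hypothetical): a vaguely named helper in a utils file,
// called by exactly one component. The indirection adds nothing.
function prepareData(items: { name: string }[]): string[] {
  return items.map((i) => i.name.trim());
}

function renderList(items: { name: string }[]): string {
  return prepareData(items).join(", ");
}

// Intentional version: the logic is small and local, so it lives
// with its only caller and needs no comment to explain ownership.
function renderListInline(items: { name: string }[]): string {
  return items.map((i) => i.name.trim()).join(", ");
}
```

Neither version is wrong in isolation; the signal is the pattern repeated across a tree, where every component routes trivial logic through a `utils` file nobody owns.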
What a passable AI-generated structure looks like
A strong generated draft usually has small interactive islands, clear data flow, and service helpers that correspond to real reuse or policy boundaries. It does not need three indirection layers to feel professional.
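The shape described above can be sketched framework-free. This is an assumed structure, not a prescribed one: a server-side function that is the only place that knows where data comes from, and a small island that receives data as input and owns only its interactive state. All names (`fetchProducts`, `cartIsland`) are illustrative:

```typescript
type Product = { id: string; price: number };

// Server/service layer (hypothetical): the single owner of fetching.
// Stubbed here; in a real draft this would hit an API or database.
async function fetchProducts(): Promise<Product[]> {
  return [
    { id: "a", price: 10 },
    { id: "b", price: 25 },
  ];
}

// Interactive island (hypothetical): takes data one way as input,
// owns only its local selection state, exposes a tiny surface.
function cartIsland(products: Product[]) {
  const selected = new Set<string>();
  return {
    toggle(id: string) {
      if (selected.has(id)) selected.delete(id);
      else selected.add(id);
    },
    total(): number {
      return products
        .filter((p) => selected.has(p.id))
        .reduce((sum, p) => sum + p.price, 0);
    },
  };
}
```

Note what is absent: no wrapper layers between fetch and island, no mirrored copy of the product list inside the island, and nothing that needs a comment to explain who owns what.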
If the draft cannot explain its boundaries in plain language, it is not ready to spread through the codebase yet.