Beyond Usability: Designing UX for Trust in the Age of AI
We are currently living through the “honeymoon phase” of Generative AI design. You’ve seen the demos: a developer scribbles a rough wireframe on a napkin, uploads a photo to an AI tool, and poof—a fully coded, high-fidelity interface appears in seconds. It feels like magic. It feels like the future.
But any seasoned product veteran knows that magic usually comes with a price.
While we celebrate the insane velocity of “Vibe Coding”—where creators prompt apps into existence based on “vibes” rather than specs—we are silently accruing a massive, compounding liability. It’s not just technical debt; it’s Design Debt. And unlike the messy code in your backend that no one sees, this debt lives right on the “glass,” directly impacting your users.

Based on an extensive literature review of the current state of AI-generated interfaces, here is why your “AI-first” product might actually be a UX disaster in disguise—and how to stop the bleeding before it’s too late.
The first symptom of AI Design Debt is the spread of what researchers call the “Page-Shaped Object” [1]: an interface that looks complete but has no connection to the underlying logic of the product. When you ask an AI to “generate a settings dashboard,” it doesn’t understand permission models, data lifecycles, or business rules. It simply predicts the surface pattern of what dashboards usually look like.
The result is UI without ontology. It carries the visual grammar of a product but none of the conceptual or architectural grounding that makes it real. And the debt shows up in deeper ways than missing functionality alone.
AI tends to invent conceptual structures that don’t exist in your system—like linear flows for processes that are actually branching, or clean hierarchies for data models that are graph-shaped. It also introduces fictional relationships between objects, implying connections, dependencies, or aggregation logic that your backend cannot support. Even when the visuals look plausible, the model often proposes over-optimistic interactions—global filters, inline edits, real-time updates—that are trivial to draw but enormously complex or legally constrained to implement.
These designs also embed an incompatible information architecture, often forcing your product into neat categories and page transitions that contradict how data actually moves across services. What emerges is not a rough draft of your system, but a parallel universe version of it: clean, coherent, and fundamentally impossible.

This creates the most dangerous form of Design Debt: the integration mirage. High-fidelity AI screens give stakeholders a false sense of progress—“It looks done, so we must be close”—while leaving designers and engineers with a hollow shell that must be dismantled and rebuilt around the actual logic of the product. The “Page-Shaped Object” becomes a demo wearing the costume of a feature, and the time you thought you saved is repaid later with interest.
We are seeing the rise of “Vibe Coding”, a term coined by Andrej Karpathy in February 2025 to describe a workflow where you “fully give in to the vibes” and let the AI write the code, often without reading the diffs. While this democratizes creation, allowing non-engineers to build software, it often leads to Shadow Design—design work created outside the governance of your design system [3].
The result is Design System Drift. AI models are probabilistic, not deterministic. An AI might generate a hex code that is visually similar to your brand color but mathematically distinct, or apply a 4px border radius instead of your standard 6px. Over time, these micro-deviations accumulate, turning your codebase into a “vibe-coded mess” that is difficult to maintain or refactor.
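To make that drift concrete, here is a minimal sketch of the kind of check a CI step could run over AI-generated style values. The token names, values, and "near-miss" thresholds below are hypothetical illustrations, not any real design system's.

```python
# Minimal sketch of a design-token drift check for AI-generated styles.
# Token names, values, and thresholds are hypothetical examples.

APPROVED_TOKENS = {
    "color.brand.primary": "#1a73e8",
    "radius.default": "6px",
}

def hex_to_rgb(hex_code: str) -> tuple[int, int, int]:
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def find_drift(generated: dict[str, str]) -> list[str]:
    """Flag values that are close to an approved token but not equal to it."""
    problems = []
    for prop, value in generated.items():
        for name, token in APPROVED_TOKENS.items():
            if value == token:
                break  # exact match: compliant, nothing to report
            if value.startswith("#") and token.startswith("#"):
                # "Visually similar but mathematically distinct" hex codes:
                # a small summed per-channel distance suggests a near-miss.
                dist = sum(abs(a - b) for a, b in zip(hex_to_rgb(value), hex_to_rgb(token)))
                if dist <= 30:
                    problems.append(f"{prop}: {value} is a near-miss of {name} ({token})")
            elif value.endswith("px") and token.endswith("px"):
                # Micro-deviations like a 4px radius against a 6px standard.
                if abs(int(value[:-2]) - int(token[:-2])) <= 2:
                    problems.append(f"{prop}: {value} drifts from {name} ({token})")
    return problems

# An AI-generated snippet with two micro-deviations, both flagged:
issues = find_drift({"background-color": "#1b74e9", "border-radius": "4px"})
```

Run as a pre-merge gate, a check like this turns "visually similar but mathematically distinct" from an invisible accumulation into a failing build.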
Perhaps the most ethically troubling form of debt we are accruing is Accessibility Debt [5].
LLMs are engines of statistical probability trained on the open web. Unfortunately, the open web is a toxic dataset: the WebAIM Million study consistently finds that nearly 96% of top home pages fail basic accessibility standards [14]. When AI learns to code from this data, it inherits these failures by default [Rabelo et al., 2025].
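The contrast failures that dominate the WebAIM data are mechanically checkable: WCAG 2.x publishes a relative-luminance formula and a contrast-ratio formula, and Level AA requires at least 4.5:1 for normal-size text. A direct Python implementation of that published formula:

```python
# Contrast checker implementing the WCAG 2.x relative-luminance and
# contrast-ratio formulas (AA requires >= 4.5:1 for normal-size text).

def channel(c: int) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG definition."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_code: str) -> float:
    h = hex_code.lstrip("#")
    r, g, b = (channel(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: str, bg: str) -> float:
    """(L_lighter + 0.05) / (L_darker + 0.05), order-independent."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black on white is the maximum possible ratio:
print(round(contrast_ratio("#000000", "#ffffff"), 1))  # 21.0
# A light-grey-on-white pairing of the kind AI tools often emit fails AA:
print(contrast_ratio("#aaaaaa", "#ffffff") < 4.5)  # True
```

Because the formula is deterministic, this is exactly the kind of inherited failure that automated gates can catch even when the model that produced the markup cannot.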

The result is a “garbage in, garbage out” cycle: models trained on inaccessible markup reproduce its most common failures, such as low-contrast text, missing alternative text, unlabeled form controls, and non-semantic “div soup,” at machine speed.
In traditional design, the rationale is as important as the result. Why is this button red? Because it’s a destructive action. Why is the padding 16px? To align with our touch-target standards.
With AI, the answer to “Why?” is often just: “Because the model predicted it.”
This leads to Epistemic Debt—a loss of knowledge about your own system. As AI generates more of our UI, we lose the institutional memory of why decisions were made. The design rationale becomes buried in the “black box” of the model [8]. Six months from now, when your team needs to refactor that dashboard, they won’t know if that specific layout was chosen for legal compliance, user preference, or just a random hallucination. This leads to a paralysis of decision-making, where teams are afraid to touch code they don’t understand.
The narrative of AI efficiency often ignores the transfer of effort. While AI automates the mechanical act of creation, it increases the burden of verification, shifting the role of the human from creator to auditor.
So, should we stop using AI? Absolutely not. But to survive the AI design revolution, we must move from passively accepting “magic” to actively managing the output. We need to shift from Vibe Coding to Intentional Design.
Here is how to stop the bleeding and turn AI into a tool for leverage rather than liability.
1. Become the Architect, Not the Decorator
To defeat the “Page-Shaped Object,” designers must move their focus “up the stack.” AI is excellent at the execution of pixels and syntax (the “Decorator”), but it is terrible at the logic of how a system works (the “Architect”).
2. Adopt the “Sandwich” Workflow
“Vibe Coding” fails because it lacks human governance. The most effective way to prevent accessibility debt and “Div Soup” is to layer human intent around the AI’s speed: a human-authored spec before generation, and automated checks plus human review after it.
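As a sketch of what the automated "after" layer of such a sandwich might look like, here is a tiny gate that scans generated markup for two "Div Soup" signals. The two rules are illustrative assumptions, not a complete accessibility audit.

```python
# Sketch of an automated post-generation gate: scan AI-generated HTML for
# two common "div soup" signals before a human reviewer sees it.
# The rules are illustrative, not a full accessibility audit.
from html.parser import HTMLParser

class DivSoupGate(HTMLParser):
    def __init__(self):
        super().__init__()
        self.violations: list[str] = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # A clickable <div> with no role is invisible to assistive tech.
        if tag == "div" and "onclick" in a and "role" not in a:
            self.violations.append("clickable <div> without a role (use <button>)")
        # Images with no alt text are the web's most inherited failure.
        if tag == "img" and not a.get("alt"):
            self.violations.append("<img> without alt text")

def audit(html: str) -> list[str]:
    gate = DivSoupGate()
    gate.feed(html)
    return gate.violations

# Typical AI output that "looks done" but fails both checks:
report = audit('<div onclick="save()">Save</div><img src="hero.png">')
```

The point is not that two regex-sized rules solve accessibility; it is that cheap, deterministic checks can run on every generation, reserving scarce human review for what machines cannot judge.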

3. Treat Your Design System as Law, Not a Suggestion
AI models are probabilistic; they guess what looks good. Your Design System is deterministic; it dictates what is correct.
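One way to make that law enforceable in practice is to snap every AI-suggested value to the nearest approved token rather than trusting the model's guess. A sketch, assuming a hypothetical 4px-based spacing scale:

```python
# Sketch: deterministically snap AI-suggested spacing values onto an
# approved token scale. The scale is a hypothetical 4px-based system,
# not any real product's tokens.

SPACING_SCALE = [0, 4, 8, 16, 24, 32, 48, 64]  # approved px values

def snap_to_token(px: float) -> int:
    """Return the closest value on the approved scale (ties round down)."""
    return min(SPACING_SCALE, key=lambda t: (abs(t - px), t))

# The model "vibes" a 17px gap and a 30px margin; the system corrects both:
print(snap_to_token(17))  # 16
print(snap_to_token(30))  # 32
```

The probabilistic model still proposes, but a deterministic layer disposes, so micro-deviations never reach the codebase in the first place.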
AI is an incredible engine for velocity, but velocity without steering is just a crash waiting to happen. The goal isn’t to slow down—it’s to ensure that the time we save on creation isn’t lost to fixing the debt we blindly accepted. Don’t let the magic of the demo distract you from the reality of the debt.
[1] Iouguina, A. (n.d.). Design Isn’t Disappearing, It’s Moving Up the Stack. Medium. https://medium.com/@alenaiouguina/design-isnt-disappearing-it-s-moving-up-the-stack-165a2f49069b
[2] Drinkwater, A. (2025, June 10). The UX debt of AI-first products. Medium. https://medium.com/design-bootcamp/the-ux-debt-of-ai-first-products-e056578331e3
[3] SketchDeck. (n.d.). Shadow Design and Its Business Impact. https://sketchdeck.com/blog/shadow-design-and-its-business-impact/
[4] DesignRush. (n.d.). UX/UI AI Tools and Trends. https://www.designrush.com/agency/ui-ux-design/trends/ux-ui-ai-tools
[5] MDPI. (2025). Accessibility Debt in AI Competition Platforms. https://www.mdpi.com/2076-3417/15/13/7165
[6] Bootcamp. (n.d.). AI-Generated UX and the Growing Accessibility Debt: How to Fix It. Medium. https://medium.com/design-bootcamp/ai-generated-ux-and-the-growing-accessibility-debt-how-to-fix-it-8109fda7d9d5
[7] TestParty. (n.d.). AI-Written Code Accessibility Risks: Copilot. https://testparty.ai/blog/ai-written-code-accessibility-risks-copilot
[8] Wang et al. (n.d.). Design Rationale Loss in AI. PMC. https://pmc.ncbi.nlm.nih.gov/articles/PMC12470571/
[9] API4AI. (n.d.). AI Code Review APIs: Reducing Developer Burnout. Medium. https://medium.com/@API4AI/ai-code-review-apis-reducing-developer-burnout-46c3b71c64c1
[10] Hivel.ai. (n.d.). Hidden Costs of Manual Code Reviews. https://www.hivel.ai/blog/hidden-costs-of-manual-code-reviews
[11] Sharma, A. (2025, August 21). 20 AI-Assisted Coding Risks And How To Defend Against Them. Forbes Tech Council. https://www.forbes.com/councils/forbestechcouncil/2025/08/21/20-ai-assisted-coding-risks-and-how-to-defend-against-them/
[12] American Foundation for the Blind. (2025, October 16). Beyond alt text: Rethinking visual description in the age of AI. https://afb.org/blog/entry/alt-text-age-ai
[13] Das, M., Fiannaca, A., Morris, M. R., Kane, S., & Bennett, C. L. (2024). “I look at it as the king of knowledge”: How blind people use and understand generative AI tools. Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’24). Association for Computing Machinery. https://doi.org/10.1145/3663548.3675620
[14] WebAIM. (2024, February 29). The WebAIM million: The 2024 report on the accessibility of the top 1,000,000 home pages. https://webaim.org/