Beyond Usability: Designing UX for Trust in the Age of AI
User interviews, usability tests, journey maps, accessibility audits, AI evaluations — most product teams today are doing research continuously. Many organisations even have ResearchOps teams, shared insight repositories, and well-defined discovery processes.
On paper, this looks like progress.
In reality, users are still struggling with products in ways teams already understand.
Known usability issues keep shipping.
Accessibility gaps remain “on the backlog.”
AI systems repeat the same trust and comprehension failures release after release.
So the uncomfortable question is no longer whether teams do research.
It’s this:
If UX research is so common, why does so little of it actually change what gets built?
By 2025, UX research methods are not the problem.
Most research teams today know how to recruit properly, avoid obvious bias, triangulate qualitative and quantitative data, and synthesise insights responsibly. Industry reviews consistently show that methodological quality is not where things break down [1].
The problem starts after the research is done.

Insights don’t land in neutral territory. They land in organisations shaped by committed roadmaps, fixed deadlines, and established lines of authority.
In those environments, research is welcome as long as it supports direction — but becomes uncomfortable the moment it challenges it.
Organisational research has shown this pattern for decades: companies are very good at absorbing new information while keeping existing plans intact, especially when that information threatens certainty or authority [2].
So research gets heard, discussed, even praised — without being allowed to change the outcome.
One of the biggest myths in UX is that insight automatically leads to impact.
It doesn’t.

Recent decision science research makes a clear distinction between understanding and action. Information only matters when it changes what decisions are allowed, delayed, or blocked [3].
Most UX research increases understanding.
Very little of it changes constraints.
An insight only creates impact when it forces a trade-off: a release delayed, a feature blocked, a decision reopened.
Without that leverage, research remains intellectually acknowledged — and operationally ignored.
That’s why so many teams experience the same pattern:
“Yes, this makes sense.”
“This is important.”
“Let’s keep it in mind.”
And then… nothing changes.
Between 2024 and 2025, many enterprise AI platforms ran extensive UX research on AI-assisted decision tools. Across industries, researchers found the same issues: users over-trusted AI recommendations, could not tell when to question them, and struggled to understand why the system suggested what it did.
These findings closely match recent HCI research on automation bias and trust calibration in human–AI interaction [4].
And yet, many of these products shipped without meaningful interaction changes.
What changed instead were surface details, not the underlying interaction.
Post-launch reviews showed why research didn’t stop the release: the findings arrived as input to be considered, not as a gate the product had to pass.
The research wasn’t ignored.
It just didn’t have the power to say “no.”
Research activity is easy to celebrate.
Interviews conducted.
Studies completed.
Insights shared.
Research impact is harder. It often means slowing down delivery, revisiting assumptions, or challenging leadership decisions.
Organisational scholars call this ceremonial learning — when organisations perform learning activities to signal competence, without letting those activities change outcomes [5].

UX research fits this pattern perfectly.
By 2025, studies on digital transformation show that many organisations maintain strong research rituals while quietly insulating strategic decisions from their consequences [6].
The result is a paradox: research has never been more abundant, and rarely less able to change what ships.
Healthcare software offers a stark example.
Recent longitudinal studies of clinical systems show consistent evidence of usability failures, excess cognitive load, and risks to patient safety [7].
The research is solid. The risks are documented.
Yet interface changes often take years.
Why?
Over time, these issues stop being treated as problems to solve and start being treated as constraints to accept.
The organisation doesn’t deny the research.
It simply learns how to live with it.
Modern product teams are under constant pressure to move fast.
In that environment, research is welcomed when it confirms direction — and quietly sidelined when it introduces ambiguity.
High-quality UX research rarely provides certainty. It reveals nuance, trade-offs, and uncomfortable truths.
Decision-making research shows that under time pressure, organisations systematically favour evidence that supports existing plans and discount evidence that requires reframing or slowing down [8].
This is how “data-driven” cultures quietly drift into selective listening.
Research is consulted — but only when it’s convenient.

Across recent studies, one factor consistently determines whether research creates impact:
Who is responsible for acting on it?
In many organisations, researchers own the findings, product managers own the roadmap, and no one owns the response.
When responsibility is spread this thin, accountability disappears.
A 2025 study on decision ownership shows that evidence without a clearly assigned decision owner steadily loses influence over time, no matter how good it is [9].
Research becomes “important input.”
Not an obligation.
In organisations where research does change outcomes, it’s treated differently.
Research is not just input — it’s infrastructure.
That means findings can delay or block a release, every significant insight has a named decision owner, and responding to evidence is part of the process rather than a favour.
Recent product and HCI research confirms this: continuous discovery only works when learning is structurally tied to decision rights, not just ideation cycles [10][11].
In these environments, research doesn’t persuade.
It constrains.
Most UX research isn’t wasted because teams lack skill or care.
It’s wasted because organisations are not designed to be changed by learning.
They’re designed to ship on schedule, protect commitments, and reduce uncertainty.
Until that changes, research will keep piling up — thoughtful, rigorous, and quietly ignored.
The real question isn’t:
“Are we doing enough UX research?”
It’s:
“Which decisions are we willing to let UX research change?”
Until organisations answer that honestly, UX research will remain abundant — and largely wasted.
[1] Rosenfeld Media (2024). The State of UX Research in Industry.
[2] Argyris, C., & Schön, D. (2023). Organizational Learning in Practice. Oxford University Press.
[3] Sibony, O., Sunstein, C. R., & Kahneman, D. (2024). Decision Quality. Harvard Business Review Press.
[4] Bansal, G., et al. (2024). “Beyond Transparency: Trust Calibration in Human–AI Interaction.” CHI 2024 Proceedings.
[5] Bromley, P., & Powell, W. (2023). “Ceremony, Consequence, and Organizational Learning.” Academy of Management Annals.
[6] Maiden, N., et al. (2025). “Design Governance and Accountability in Digital Transformation.” IEEE Software.
[7] Ratwani, R. M., et al. (2025). “Usability, Cognitive Load, and Patient Safety in Health IT.” JAMIA.
[8] Kahneman, D., & Klein, G. (2023). “Conditions for Intuitive Expertise.” American Psychologist.
[9] Edmondson, A. C., & Mortensen, M. (2025). “Accountability Without Authority.” MIT Sloan Management Review.
[10] Torres, T., & Meyer, M. (2024). “Continuous Discovery at Scale.” Product Management Journal.
[11] Shneiderman, B. (2025). “Human-Centered AI and Evidence-Based Governance.” Communications of the ACM.