We are moving from designing screens to designing for anticipation. Here is the HCI framework for the age of Intent-Driven Design.
Think about the digital tools you love most. Usually, they are the ones you barely notice.
Great design dissolves into the background. When a system truly understands your intent, the mechanics of clicking, scrolling, and tapping become secondary to the magic of getting things done.
For decades, User Experience design has operated on a reactive paradigm: The user has a goal, they translate that goal into commands, and the system responds. We have spent years refining this interaction — making the buttons clearer, the flows smoother, and the feedback faster.
But there is an inherent limit to this model. It assumes that the user should be doing the heavy lifting of translating their complex human needs into the rigid language of the system.
In an age of fragmented attention and AI, this assumption is evolving. We are shifting from the era of Direct Manipulation to the era of Predictive Intent.
The designer’s job is no longer just to build better controls — it is to design systems that understand the goal behind the control.
Here is how we bridge that gap, backed by Human-Computer Interaction (HCI) research.
In his 1988 book The Design of Everyday Things, Donald Norman (the grandfather of UX) popularized the concept of the “Gulf of Execution.”

This concept describes the psychological gap between what a user wants to happen and the awkward steps the system forces them to take to make it happen.
Imagine a user, let’s call him Mark, who takes a funny photo of his dog.
Mark’s Goal (Intent): “Show this to my wife.” (Simple, immediate).
The System’s Requirement (The Gulf):
1. Unlock the phone.
2. Open the photo gallery and find the picture.
3. Tap “Share.”
4. Select a messaging app.
5. Search contacts for his wife.
6. Tap “Send.”
The Insight: Every step in that list is a “translation” Mark has to perform. He has to translate the concept of “Wife” into the mechanical steps of “App Selection > Contact Search.” That is cognitive friction.
Today, this friction is more than just an annoyance; it is a barrier. Modern users are often multitasking or distracted. As researchers in Anticipatory Computing have noted, the objective of next-generation UX is to bridge this gulf by predicting the destination based on context [2].
The Shift: We are moving from Reaction (waiting for Mark to click six times) to Anticipation (Moving the “Send to Wife” button to the top of the screen because the system knows Mark always shares dog photos with her).
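That anticipation can be sketched with a simple frequency model. This is a hypothetical illustration, not any particular product’s algorithm; `rank_share_targets` and the `share_history` log of (photo tag, recipient) pairs are invented names for the sake of the sketch.

```python
from collections import Counter

def rank_share_targets(share_history, photo_tag):
    """Rank recipients by how often the user shared photos with this tag to them."""
    counts = Counter(
        recipient for tag, recipient in share_history if tag == photo_tag
    )
    return [recipient for recipient, _ in counts.most_common()]

# Mark's (hypothetical) sharing history
history = [
    ("dog", "wife"), ("dog", "wife"), ("dog", "brother"),
    ("receipt", "accountant"),
]
print(rank_share_targets(history, "dog"))  # ['wife', 'brother']
```

A real system would weight recency and context as well, but even this count-based ranking is enough to move “Send to Wife” to the top of the share sheet.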
“Intent” isn’t just about predicting the next click. In HCI, specifically within Activity Theory frameworks [3], intent is a hierarchy. Most designers stop at the bottom layer. Great designers work at the top.

Let’s break this down using a scenario everyone knows: listening to music.
Activity (the motive): “I want to relax after work.”
Action (the goal): “Play my chill playlist.”
Operation (the mechanics): Unlock the phone, open the app, search, scroll, tap play.
The Insight: Predictive UX works because it skips the Operations to serve the Activity. It saves the user from the drudgery of the mechanics.
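To make the hierarchy concrete, here is a toy sketch (all names hypothetical) in which the user expresses only the Activity and the system executes the Operations on their behalf:

```python
# Hypothetical mapping from a high-level Activity to the low-level
# Operations the system absorbs so the user never performs them.
OPERATIONS = {
    "relax after work": [
        "unlock device",
        "open music app",
        "load the 'chill evening' playlist",
        "tap play",
    ],
}

def fulfill_intent(activity):
    """Run every low-level operation behind a high-level activity."""
    steps = OPERATIONS.get(activity, [])
    for step in steps:
        print(f"[system] {step}")  # stand-in for real automation
    return steps

fulfill_intent("relax after work")
```

The design point is the shape of the interface, not the dictionary: the user’s input lives at the Activity level, and everything below it is the system’s responsibility.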
We are currently living through a massive transition in how humans relate to machines.

This aligns with what Microsoft Research describes as Mixed-Initiative Interaction [4]. It is the difference between a waiter who waits for you to wave them down (Reactive) and a waiter who refills your water because they see your glass is empty (Predictive).
Predictive UX is a balance. If we get it right, it feels helpful. If we get it wrong, it can feel intrusive.
Drawing from Microsoft’s Guidelines for Human-AI Interaction [5], here are three core patterns for designing safe, helpful predictions:
The system proposes the next best action but leaves the final decision to the user.
The Scenario: You are writing an email to a colleague about a file.
Reactive Design: You type “I have attached the file,” hit send, and then realize you forgot the attachment.
Intent-Driven Design (Gmail): The system scans your text. It sees the word “attached” but detects no file. It gently interrupts the send action to ask: “Did you mean to attach a file?”
Why it works: It anticipated the error based on the linguistic signal of intent, saving the user from a mistake.
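Gmail’s actual implementation is not public; what follows is a minimal sketch of the same pattern, detecting the linguistic signal of intent and checking it against the message state. The `should_warn` function and the phrase list are assumptions for illustration.

```python
import re

# Phrases that signal the writer intends to include a file (illustrative list).
ATTACHMENT_SIGNALS = re.compile(
    r"\b(attached|attachment|enclosed|see the file)\b", re.IGNORECASE
)

def should_warn(body, attachments):
    """Return True if the text mentions an attachment but none is present."""
    return bool(ATTACHMENT_SIGNALS.search(body)) and not attachments

print(should_warn("I have attached the file.", attachments=[]))        # True
print(should_warn("I have attached the file.", attachments=["q.pdf"])) # False
```

Note that the system only asks; the final decision to send stays with the user, which is exactly the “suggest, don’t impose” contract.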
Humans don’t always know exactly what they want when they start. Lucy Suchman’s research on Situated Action [6] tells us that intent is often constructed during the interaction.
The Scenario: A user wants a vacation but doesn’t know where.
The Old Way: The travel app asks “Destination?” If the user types “Warm place,” the system errors out: “Invalid City.”
The Intent Way: The search bar accepts “Warm place in December.” The UI morphs. It stops showing a list of flights and starts showing a map of tropical regions with weather widgets.
Insight: The UI adapts its structure to match the ambiguity of the user’s intent.
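A sketch of that routing decision, with an invented `route_search` function and a toy city list: exact matches go to the flight list, while anything the system cannot resolve flips the UI into exploration mode instead of erroring out.

```python
# Toy set of resolvable destinations (a real system would query a geo service).
KNOWN_CITIES = {"lisbon", "tokyo", "cancun"}

def route_search(query):
    """Route an exact city to the flight list; route vague intent to exploration."""
    if query.strip().lower() in KNOWN_CITIES:
        return ("flight_list", query)
    # Ambiguous intent: morph the UI into an exploratory map view
    return ("explore_map", query)

print(route_search("Tokyo"))                  # ('flight_list', 'Tokyo')
print(route_search("Warm place in December")) # ('explore_map', 'Warm place in December')
```

The key design choice is that there is no failure branch: ambiguity is treated as a valid state with its own UI, not as invalid input.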
If your system guesses wrong, the cost to fix it must be zero.
The Rule: If you auto-correct a word, one tap must revert it. If you auto-filter a feed, the “Clear All” button must be the most visible element. If the user feels trapped by a prediction, trust is broken instantly.
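The escape hatch can be enforced structurally: never discard the user’s original input when applying a prediction. A minimal sketch, where the `AutoCorrection` class is hypothetical:

```python
class AutoCorrection:
    """Keep the user's original input so one action reverts the system's guess."""

    def __init__(self, original, suggestion):
        self.original = original
        self.current = suggestion  # apply the prediction optimistically

    def revert(self):
        """The escape hatch: one call restores what the user actually typed."""
        self.current = self.original
        return self.current

fix = AutoCorrection(original="teh", suggestion="the")
print(fix.current)   # 'the'  (system's guess applied)
print(fix.revert())  # 'teh'  (one tap undoes it)
```

If reverting required re-typing, the cost of a wrong guess would fall on the user; keeping the original makes the cost of correction zero, as the rule demands.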
There is a critical responsibility here. As we design systems that think for us, we must be wary of Automation Bias [7].
The Psychological Trap: Humans have a tendency to trust automated suggestions, even when they are wrong. We’ve all heard stories of drivers following GPS directions into a lake. That is Automation Bias — prioritizing the system’s “Intent” over their own sensory input.
The Shneiderman Compromise
Ben Shneiderman (University of Maryland) argues for Human-Centered AI [8]. We shouldn’t choose between Computer Control and Human Control. We need High Automation AND High Human Control.
Your Ethical Checklist:
Transparency: Can the user see why the system made this suggestion?
Control: Can the user override or dismiss the prediction in a single step?
Reversibility: If the system guesses wrong, is the cost of undoing it zero?
Ready to move beyond static screens? Use the ID-UX (Intent-Driven) Framework. Let’s apply this to a Smart Calendar App:

1. Gather Context Signals:
Time: It is 8:55 AM.
History: User has a ‘Standup’ meeting every day at 9:00 AM.
Environment: User is on mobile, moving at 30 mph (commuting).
2. Model Probabilities:
Prediction: The user is likely late and needs to join via phone, not video.
3. Design “Soft” Predictions:
Do not just dial the number (too intrusive).
Solution:
Send a push notification: “Running late? Tap to join Standup via Audio Only.”
4. Feedback Loops:
If the user swipes it away, the system learns: “Don’t ask this on Tuesdays.”
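The four steps above can be sketched end to end. Everything here is illustrative: the thresholds, the `IntentModel` class, and the notification copy are assumptions, but the shape (signals in, soft prediction out, dismissals fed back) is the framework.

```python
from dataclasses import dataclass, field

def minutes(hhmm):
    """Convert 'H:MM' to minutes past midnight."""
    h, m = hhmm.split(":")
    return int(h) * 60 + int(m)

@dataclass
class Context:
    time: str         # e.g. "8:55"
    device: str       # e.g. "mobile"
    speed_mph: float  # 30.0 suggests the user is commuting

@dataclass
class IntentModel:
    suppressed_days: set = field(default_factory=set)

    def predict(self, ctx, meeting="9:00"):
        """Soft prediction: user is moving on mobile just before a recurring meeting."""
        commuting = ctx.device == "mobile" and ctx.speed_mph > 10
        imminent = 0 <= minutes(meeting) - minutes(ctx.time) <= 15
        if commuting and imminent:
            return "Running late? Tap to join Standup via Audio Only."
        return None  # no confident prediction: stay quiet rather than guess

    def record_dismissal(self, day):
        """Feedback loop: a swipe-away teaches the model to stay quiet."""
        self.suppressed_days.add(day)

    def should_notify(self, ctx, day):
        return day not in self.suppressed_days and self.predict(ctx) is not None

model = IntentModel()
ctx = Context(time="8:55", device="mobile", speed_mph=30.0)
print(model.should_notify(ctx, "Monday"))   # True
model.record_dismissal("Tuesday")
print(model.should_notify(ctx, "Tuesday"))  # False
```

Note the “soft” delivery: the model returns a suggestion string for a notification rather than dialing the call itself, and every dismissal narrows when it speaks up again.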
Designing for intent is not about removing humans from the loop. It is about removing the friction from the loop.
The future of UX isn’t just smarter interfaces; it’s interfaces that possess social intelligence — knowing when to act, when to wait, and when to ask.