
Designing Useful Illusions: When Deception Becomes Human-Centered


Why some “little lies” in AI UX help users — and how to keep them ethical.

In UX and product design we often assume that deception is the enemy of trust: we want systems to be transparent, honest, and accessible. Yet as recent research in human-centred AI (HCAI) shows, the landscape is more subtle. Some forms of deception are inevitable — and even useful — when designed consciously.

The trick is distinguishing banal deception from strong deception, and designing for the former while rigorously avoiding the latter.

The hidden ingredient of “trustworthy” AI

When users interact with AI systems today (chatbots, voice assistants, recommendation agents), they bring with them human-to-human interaction models. They expect social cues: a name, a human-like voice, a response that “pauses” as if thinking. In many cases these cues are designed illusions: they help bridge the gap between human and machine.

Researchers argue that these design choices embed a layer of deception: the system appears more human-social than it really is [1]. That’s not necessarily a flaw — provided users remain aware of the system’s nature and retain control.

The authors distinguish between two forms:

  • Strong deception: AI presents itself as fully human, hides its machine nature or purpose, manipulates users without awareness.
  • Banal deception: AI uses human-like cues, anthropomorphism, simplified metaphors — but the system’s nature remains visible or disclosed.
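The distinction can be made operational in a design review: a cue only counts as banal while the system’s machine nature stays disclosed and the user retains control. A minimal sketch of that rule (the class and field names below are illustrative assumptions, not from the cited research):

```python
from dataclasses import dataclass

@dataclass
class DesignCue:
    """A human-like cue in an AI interface, e.g. a name, voice, or thinking pause."""
    description: str
    machine_nature_disclosed: bool  # does the user know they're talking to an AI?
    user_retains_control: bool      # can the user pause, undo, or escalate?

def classify(cue: DesignCue) -> str:
    """Banal deception keeps the system's nature visible; strong deception hides it."""
    if cue.machine_nature_disclosed and cue.user_retains_control:
        return "banal"
    return "strong"

voice = DesignCue("human-like voice with thinking pauses", True, True)
impersonation = DesignCue("claims to be a human agent", False, True)
print(classify(voice))          # banal
print(classify(impersonation))  # strong
```

The point of the sketch is that the classification hinges on two observable properties of the design, not on the designer’s intent.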

When designers ignore or suppress deception entirely, interaction can feel cold or alien. But when they lean into banal deception consciously, they can make AI feel humane, understandable, and psychologically safe — while preserving control and clarity.

Why this distinction matters for UX & enterprise design

For organisations designing next-gen digital services — especially in mobility, healthcare, and luxury sectors — this distinction becomes operational.

Users navigate complex systems and hybrid human–AI workflows. “Banal deception” can help bridge this gap, making AI approachable. But if it drifts into strong deception, trust, accountability, and compliance collapse.

Recent research adds nuance and defines deception in AI as “the systematic inducement of false beliefs” [2]. It shows that even well-intentioned systems — such as language models — already exhibit deceptive behaviour.

A complementary study also found that minor deceptions (such as confident but incomplete answers) paradoxically increase trust when users find them useful [3].

Deception is a continuum. The goal is to stay on the left: high value, low risk.

Deception, in short, is a spectrum. The challenge for design teams is to shape perception, not eliminate illusion.

From deception to design principle

Given this reality, how should designers work with deception intentionally in HCAI?

Banal deception + high control = Human-Centered Zone

Here are five practical principles aligned with LINC’s ethos of “making hard things easy.”

  1. Design for human control
    Deceptive cues may ease interaction but must not erode autonomy. Always include “pause,” “undo,” and “escalate to human” options.
  2. Expose the frame
    Use clear disclosures: “I’m a virtual assistant by Company X.” Visibility of system nature keeps deception banal, not manipulative [4].
  3. Calibrate metaphors consciously
    The metaphors “assistant” or “tool” set healthier expectations than “friend” or “partner.”
  4. Inject meaningful constraints
    Show seams — confidence indicators, reasoning summaries, uncertainty statements — to remind users that they’re interacting with software.
  5. Govern deception in enterprise contexts
    Especially in mobility, healthcare, and luxury ecosystems, review deceptive cues during design QA. Treat “illusion risk” as part of compliance [5].
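In a design QA pass, the five principles above can be run as a checklist over every human-like cue. The encoding below is one hypothetical way to do that (the dictionary keys and function name are invented for illustration):

```python
# Hypothetical design-QA checklist derived from the five principles above.
PRINCIPLE_CHECKS = {
    "human_control": "Are pause / undo / escalate-to-human options present?",
    "frame_exposed": "Is the system disclosed as a virtual assistant, not a human?",
    "metaphor_calibrated": "Is the persona framed as 'assistant'/'tool', not 'friend'?",
    "seams_visible": "Are confidence indicators or uncertainty statements shown?",
    "governance": "Has 'illusion risk' been reviewed as part of compliance?",
}

def review(answers: dict[str, bool]) -> list[str]:
    """Return the open questions for every principle the design still fails."""
    return [PRINCIPLE_CHECKS[key] for key, passed in answers.items() if not passed]

failing = review({
    "human_control": True,
    "frame_exposed": False,   # no AI disclosure yet -> gets flagged
    "metaphor_calibrated": True,
    "seams_visible": True,
    "governance": True,
})
print(failing)
```

A design would leave QA only when `review` returns an empty list; anything else is a concrete remediation item.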

Case Studies: Designing Banal Deception in Practice

To see what ethical deception looks like in the wild, let’s explore three sectors LINC often works in: mobility, healthcare, and luxury — where “useful illusion” shapes user trust.

1. Mobility: BMW “Hey BMW” Voice Assistant

BMW’s Intelligent Personal Assistant aimed to humanise complex car controls. The wake-phrase “Hey BMW” and conversational rhythm created a sense of warmth and attentiveness [2][3].

Banal Deception in an In-Car Assistant

When BMW introduced its in-car voice assistant, the goal wasn’t to impersonate a human but to make complex controls approachable.

The system uses a friendly tone, a conversational pause, and the wake phrase “Hey BMW” — enough to create warmth, not confusion [6][7].

The design tension:
A human-like voice improves usability but risks over-trust if drivers believe the system has full situational awareness.

How designers managed it:

  • Persistent machine branding through the wake-word “BMW.”
  • Limited intent domains (climate, navigation) prevent false capability assumptions.
  • Predictable refusal behaviour: if unsure, it defers to manual control.
  • Persona tone is “helpful,” not “friend.”

Lesson:
Use brand warmth to build rapport, but make identity cues unmistakable.
Continuum position: Banal deception, high human control. [11]

2. Healthcare: PARO, the Therapeutic Seal Robot

In dementia care, PARO comforts patients with soft movements and cooing. Its illusion of aliveness calms anxiety — a textbook case of “beneficial deception” [5].

Therapeutic Robot as Beneficial Illusion

The ethical dilemma:
For patients with cognitive decline, belief in PARO’s “life” may be both therapeutic and deceptive.

How clinicians balanced it:

  • Introduced transparently as a therapeutic robot.
  • Supervised sessions and documented consent [6].
  • Visual cues — charging base, maintenance tags — subtly reveal its artefactual nature.
  • Used as bridge to human contact, not replacement.

Lesson:
In healthcare, deception can heal when it’s proportional, documented, and supervised.
Continuum position: Moderate automation, high human oversight. [6]

3. Luxury: Sephora Virtual Artist & Beauty Bots

To reduce purchase hesitation, Sephora (LVMH) introduced AR “Virtual Artist” and Messenger bots offering live try-ons [7][8][13].
The interface gives a realistic preview — yet remains clearly branded and disclosed.

Simulation, Not Promise: AR Try-On UI

The tension:
Perfect virtual lighting risks overselling reality.

How the team handled it:

  • UI labels (“Virtual Artist,” “Color Match”) and brand framing keep simulation visible [8][13].
  • Flow leads to human consultation (“Book a Makeover”).
  • On-screen copy clarifies variability by device and lighting [9].
  • Conversation tone remains purposeful, not chatty.

Lesson:
Simulations can drive delight — if they’re clearly framed as assistive previews, not promises.
Continuum position: Low deception, explicit transparency [7][8][12][13].

Across all three cases, the pattern holds:

  • Mobility teaches that friendliness must never mask safety boundaries.
  • Healthcare shows deception can nurture when wrapped in consent.
  • Luxury demonstrates that illusion can delight — when visibly contained.

Why now matters more than ever

Three converging realities make this topic urgent for UX leaders:

  1. AI is increasingly human-facing. Without social cues, systems feel alien. With too many, they feel manipulative.
  2. Trust is fragile. Over-trust can harm; under-trust can kill adoption. The right balance sustains both usability and ethics.
  3. Regulation is rising. Deception-capable systems are flagged as high-risk [5]. Transparent design is no longer optional — it’s compliance.

For LINC, whose work sits at the intersection of UX, AI, and enterprise transformation, this means adopting deception as a governed design variable — mapped, measured, and made explicit.

A framework for practice

Phase 1: Map illusion points
Identify every human-like cue (voice, naming, timing, animation). Ask: “What belief does this create?” Classify whether it’s banal or strong deception.

Phase 2: Calibrate user awareness
Add visible disclosures and test comprehension: do users know it’s AI? Do they feel in control?

Phase 3: Monitor operational reality
Measure post-launch trust, misuse, and misunderstanding. Track where “illusion leakage” occurs.
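Phase 3 implies an ongoing metric. One hedged way to operationalise “illusion leakage” is as the share of sessions in which the comprehension probe from Phase 2 fails — the user did not realise they were interacting with an AI. The metric definition and names below are assumptions for illustration, not taken from the cited studies:

```python
from dataclasses import dataclass

@dataclass
class Session:
    user_knew_it_was_ai: bool     # comprehension probe from Phase 2
    overrode_or_escalated: bool   # did the user exercise control options?

def illusion_leakage(sessions: list[Session]) -> float:
    """Fraction of sessions where the user did not realise the system was an AI."""
    if not sessions:
        return 0.0
    leaked = sum(1 for s in sessions if not s.user_knew_it_was_ai)
    return leaked / len(sessions)

logs = [
    Session(True, False),
    Session(True, True),
    Session(False, False),  # this user believed the system was human
    Session(True, False),
]
print(f"illusion leakage: {illusion_leakage(logs):.0%}")  # illusion leakage: 25%
```

A rising leakage rate after launch would signal that a cue intended as banal deception is drifting toward strong deception and needs a design revision.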

Final thoughts: reframing deception as design strategy

In an era where AI feels social, the goal isn’t to eliminate deception but to design its dosage.
Too sterile, and systems alienate. Too human, and they deceive.

Ethical design lives in the calibrated middle — the useful illusion zone.

At its best, deception becomes a bridge: a deliberate, human-centred illusion that makes complexity feel comprehensible and control intuitive.
At its worst — unchecked — it erodes trust and agency.

The challenge is to weave this awareness into every design decision: identify where illusions appear, make them transparent, and keep users firmly in charge.
Because in human-AI collaboration, the best illusion is the one users see clearly — and still choose to believe.

References

[1] Umbrello, S., & Natale, S. (2024). Reframing deception for human-centered AI. International Journal of Social Robotics, 16(11), 2223–2241.

[2] Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5).

[3] Zhan, X., Xu, Y., Abdi, N., Collenette, J., & Sarkadi, S. (2025). Banal Deception and Human-AI Ecosystems: A Study of People’s Perceptions of LLM-generated Deceptive Behaviour. Journal of Artificial Intelligence Research, 84.

[4] Umbrello, S., & Natale, S. (2024). Reframing deception for human-centered AI. International Journal of Social Robotics, 16(11), 2223–2241.

[5] Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5).

[6] Wangmo, T., Duong, V., Felber, N. A., Tian, Y. J., & Mihailov, E. (2024). No playing around with robots? Ambivalent attitudes toward the use of Paro in elder care. Nursing Inquiry, 31(3), e12645.

[7] Retail Dive (2016). “Sephora brings more beauty bot tools to Facebook Messenger.”

[8] TechRepublic (2018). “How Sephora is leveraging AR and AI to transform retail.”

[9] Santulli, M. (2019). The influence of augmented reality on consumers’ online purchase intention: The Sephora Virtual Artist case. Master’s thesis.

[11] BMW (Official site). “BMW Intelligent Personal Assistant.”

[12] CB Insights (2018). Sephora Teardown.

[13] PR Newswire (2016). “Sephora debuts two new bot-powered beauty tools for Messenger.”
