Why some “little lies” in AI UX help users — and how to keep them ethical.
In UX and product design we often assume that deception is the enemy of trust: we want systems to be transparent, honest, accessible. Yet as recent research in human-centred AI (HCAI) shows, the landscape is more subtle. Some forms of deception are inevitable, and even useful, if designed consciously.
The trick is distinguishing banal deception from strong deception, and designing for the former while rigorously avoiding the latter.
When users interact with AI systems today (chatbots, voice assistants, recommendation agents), they bring with them human-to-human interaction models. They expect social cues: a name, a human-like voice, a response that “pauses” as if thinking. In many cases these cues are designed illusions: they help bridge the gap between human and machine.
Researchers argue that these design choices embed a layer of deception: the system appears more human-social than it really is [1]. That’s not necessarily a flaw — provided users remain aware of the system’s nature and retain control.
The authors distinguish between two forms:
Banal deception: the mundane, everyday cues (a name, a friendly voice, a thoughtful pause) that make AI feel social without misrepresenting what it can actually do.
Strong deception: illusions that mislead users about the system’s nature or capabilities, such as passing an AI off as a human.
When designers ignore or suppress deception entirely, interaction can feel cold or alien. But when they lean into banal deception consciously, they can make AI feel humane, understandable, and psychologically safe — while preserving control and clarity.
For organisations designing next-gen digital services — especially in mobility, healthcare, and luxury sectors — this distinction becomes operational.
Users navigate complex systems and hybrid human–AI workflows. “Banal deception” can help bridge this gap, making AI approachable. But if it drifts into strong deception, trust, accountability, and compliance collapse.

Recent research adds nuance and defines deception in AI as “the systematic inducement of false beliefs” [2]. It shows that even well-intentioned systems — such as language models — already exhibit deceptive behaviour.
A complementary study also found that minor deceptions, like confident but incomplete answers, paradoxically increase trust when users find them useful [3].

Deception, in short, is a spectrum. The challenge for design teams is to shape perception, not eliminate illusion.
Given this reality, how should designers work with deception intentionally in HCAI?

Here are practical principles aligned with LINC’s ethos of “making hard things easy.”
To see what ethical deception looks like in the wild, let’s explore three sectors LINC often works in: mobility, healthcare, and luxury. In each, “useful illusion” shapes user trust.
When BMW introduced its Intelligent Personal Assistant, the goal wasn’t to impersonate a human but to humanise complex car controls. The system uses a friendly tone, conversational pauses, and the wake phrase “Hey BMW”: enough to create warmth and attentiveness, not confusion [11].
The design tension:
A human-like voice improves usability but risks over-trust if drivers believe the system has full situational awareness.
How designers managed it:
The assistant stays unmistakably a branded system: it presents itself as BMW’s assistant, not a person, and its warmth is never allowed to imply awareness the system doesn’t have.
Lesson:
Use brand warmth to build rapport, but make identity cues unmistakable.
Continuum position: Banal deception, high human control. [11]
In dementia care, PARO comforts patients with soft movements and cooing. Its illusion of aliveness calms anxiety: a textbook case of “beneficial deception” [6].

The ethical dilemma:
For patients with cognitive decline, belief in PARO’s “life” may be both therapeutic and deceptive.
How clinicians balanced it:
PARO is treated as a supervised therapeutic tool: its use is proportional to each patient’s needs, documented in the care plan, and overseen by staff who can step in when the illusion stops helping.
Lesson:
In healthcare, deception can heal when it’s proportional, documented, and supervised.
Continuum position: Moderate automation, high human oversight. [6]
To reduce purchase hesitation, Sephora (LVMH) introduced AR “Virtual Artist” and Messenger bots offering live try-ons [7][8][13].
The interface gives a realistic preview — yet remains clearly branded and disclosed.

The tension:
Perfect virtual lighting risks overselling reality.
How the team handled it:
The try-on is framed as a clearly branded, disclosed preview: an assistive visualisation of the product, not a literal promise of how it will look in real-world lighting.
Lesson:
Simulations can drive delight — if they’re clearly framed as assistive previews, not promises.
Continuum position: Low deception, explicit transparency. [7][8][12][13]
Across all three cases, the pattern holds: illusion helps when it is disclosed, proportionate, and paired with human control, and it harms the moment it conceals the system’s true nature.
Three converging realities make this topic urgent for UX leaders: AI interfaces increasingly borrow human social cues; deployed systems, including language models, already exhibit deceptive behaviour [2]; and users reward useful illusions with trust even when answers are incomplete [3].
For LINC, whose work sits at the intersection of UX, AI, and enterprise transformation, this means adopting deception as a governed design variable — mapped, measured, and made explicit.
Phase 1: Map illusion points
Identify every human-like cue (voice, naming, timing, animation). Ask: “What belief does this create?” Classify whether it’s banal or strong deception.
Phase 2: Calibrate user awareness
Add visible disclosures and test comprehension: do users know it’s AI? Do they feel in control?
Phase 3: Monitor operational reality
Measure post-launch trust, misuse, and misunderstanding. Track where “illusion leakage” occurs.
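To make the playbook concrete, here is a minimal sketch of how a team might track illusion points as structured data, written in TypeScript. Everything in it, from the type and field names to the 0.9 comprehension threshold, is an illustrative assumption rather than an established framework or API.

```typescript
// Hypothetical illusion-audit registry. All names and thresholds are
// illustrative assumptions, not an established framework.

type CueKind = "voice" | "name" | "timing" | "animation" | "avatar";
type DeceptionClass = "banal" | "strong";

interface IllusionPoint {
  id: string;
  cue: CueKind;
  inducedBelief: string;      // Phase 1: "What belief does this create?"
  classification: DeceptionClass;
  disclosed: boolean;         // Phase 2: is there a visible disclosure?
  comprehensionRate?: number; // Phase 2: share of users who know it's AI (0 to 1)
}

// Review gate for Phases 2 and 3: strong deception always escalates;
// banal deception must be disclosed and actually understood by users.
function needsEscalation(p: IllusionPoint, minComprehension = 0.9): boolean {
  if (p.classification === "strong") return true; // never acceptable
  if (!p.disclosed) return true;                  // missing disclosure
  // Phase 3: unmeasured or low comprehension counts as "illusion leakage".
  return p.comprehensionRate === undefined || p.comprehensionRate < minComprehension;
}

// Example: the wake-phrase cue from the BMW case, mapped in Phase 1.
const wakePhrase: IllusionPoint = {
  id: "assistant-wake-phrase",
  cue: "voice",
  inducedBelief: "The car listens like an attentive companion",
  classification: "banal",
  disclosed: true,
  comprehensionRate: 0.96,
};

console.log(needsEscalation(wakePhrase)); // false: banal, disclosed, understood
```

The code matters less than the discipline it encodes: every human-like cue becomes a record that can be classified, disclosed, and monitored over time.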
In an era where AI feels social, the goal isn’t to eliminate deception but to design its dosage.
Too sterile, and systems alienate. Too human, and they deceive.
Ethical design lives in the calibrated middle — the useful illusion zone.
At its best, deception becomes a bridge: a deliberate, human-centred illusion that makes complexity feel comprehensible and control intuitive.
At its worst — unchecked — it erodes trust and agency.
The challenge is to weave this awareness into every design decision: identify where illusions appear, make them transparent, and keep users firmly in charge.
Because in human-AI collaboration, the best illusion is the one users see clearly — and still choose to believe.
[1] Umbrello, S., & Natale, S. (2024). Reframing deception for human-centered AI. International Journal of Social Robotics, 16(11), 2223–2241.
[2] Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2024). AI deception: A survey of examples, risks, and potential solutions. Patterns, 5(5).
[3] Zhan, X., Xu, Y., Abdi, N., Collenette, J., & Sarkadi, S. (2025). Banal Deception and Human-AI Ecosystems: A Study of People’s Perceptions of LLM-generated Deceptive Behaviour. Journal of Artificial Intelligence Research, 84.
[6] Wangmo, T., Duong, V., Felber, N. A., Tian, Y. J., & Mihailov, E. (2024). No playing around with robots? Ambivalent attitudes toward the use of Paro in elder care. Nursing Inquiry, 31(3), e12645.
[7] Retail Dive (2016). “Sephora brings more beauty bot tools to Facebook Messenger.”
[8] TechRepublic (2018). “How Sephora is leveraging AR and AI to transform retail.”
[9] Santulli, M. (2019). The influence of augmented reality on consumers’ online purchase intention: The Sephora Virtual Artist case. Master’s thesis.
[11] BMW (Official site). “BMW Intelligent Personal Assistant.”
[12] CB Insights (2018). Sephora Teardown.
[13] PR Newswire (2016). “Sephora debuts two new bot-powered beauty tools for Messenger.”