AI in UX research: Part 2
Barbara Accioly
Published on 19 January 2026

AI tools can appear rigorous while missing the nuance that makes UX research valuable. Discover the limits of AI and the areas of UX research that still need a human touch.
In the first part of this series, we explored the strengths and potential benefits of using AI in UX research. Equally important, however, are the areas where AI underperforms or, in some cases, actively misleads.

1. Behavioural and qualitative analysis

We tested Gemini and Notebook LM on clustered notes, interview summaries, and quantitative survey data, comparing their outputs to human-generated insights.
What went wrong:
  • AI identified a few obvious, surface-level patterns but missed key insights
  • It failed to connect related data points across questions
  • Recommendations often diverged entirely from those of experienced UX researchers, with no clear rationale
  • Notebook LM, in particular, offered effectively zero value for UX analysis, scoring 0/5 in our evaluation
The underlying problem: AI reads literal text. UX research interprets human behaviour.
Real interviews contain nuance—tone, hesitation, body language, and what participants don't say are often more revealing than explicit answers. AI cannot reliably detect that "It's fine", said with a flat tone and a pause, may actually mean "This is a problem I don't feel safe complaining about."
Practical takeaway: AI should never replace human qualitative analysis or generate final UX recommendations based solely on raw data. It may have small, tightly scoped uses—like turning themes into bullet points—but never as the primary analyst.
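By way of illustration, here is a minimal Python sketch of how AI-surfaced themes might be scored against a human-coded baseline. The theme lists, the exact-match rule, and the recall metric are illustrative assumptions on our part, not the actual pipeline behind the 0/5 score above.

```python
# Hypothetical sketch: scoring AI-generated themes against a human-coded
# baseline by simple recall. Theme lists and the exact-match rule are
# illustrative assumptions, not our real evaluation method.

def theme_recall(ai_themes: set[str], human_themes: set[str]) -> float:
    """Fraction of human-identified themes the AI also surfaced."""
    if not human_themes:
        return 0.0
    return len(ai_themes & human_themes) / len(human_themes)

human = {"pricing anxiety", "trust in support", "onboarding friction"}
ai = {"onboarding friction", "feature requests"}  # surface-level overlap only

print(f"Theme recall: {theme_recall(ai, human):.0%}")  # -> 33%
```

Even this toy example exposes the deeper issue: exact string matching cannot recognise that "pricing anxiety" and "worries about cost" are the same theme, whereas a human researcher can. Matching paraphrased or implicit themes is exactly the interpretive work that still needs a person.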

2. Planning efficiency vs. planning quality

Does AI speed up planning? Not really. Tools like Claude require iterative back-and-forth to refine context, challenge assumptions, and correct mistakes. In practice:
  • Planning with AI takes roughly the same amount of time as planning without it
  • The difference is quality, not speed: AI helps produce more robust plans with better-articulated rationales, but not faster ones
Reality check for teams: There is no "push-button, perfect research plan" AI shortcut. The value is in thought partnership, not time savings.

3. Business realities: budgets, timelines, and capacity

Across tools, AI consistently proposed idealised, textbook scenarios:
  • Budgets up to 6x higher than reality
  • Timelines 2–3x longer than real sprint cycles
  • Sample sizes far beyond constraints (e.g., 30–35 participants vs. a realistic 5)
Even with explicit context—"One researcher, two weeks, X budget"—AI would drift back to idealised designs.
For customer-facing teams, this is a critical risk: AI produces plans that sound impressive but are impossible to execute within your organisation's constraints.
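One cheap mitigation is a guardrail that checks any AI-proposed plan against your real constraints before anyone invests in it. The sketch below is a hypothetical illustration, assuming the plan and constraints can be expressed as simple records; the field names and limits are ours, not taken from any real planning tool.

```python
# Hypothetical sketch: flag when an AI-proposed research plan drifts past
# real-world constraints. Field names and limits are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Constraints:
    max_budget: float      # in your currency
    max_weeks: int
    max_participants: int

@dataclass
class Plan:
    budget: float
    weeks: int
    participants: int

def flag_drift(plan: Plan, limits: Constraints) -> list[str]:
    """Return human-readable warnings wherever the plan exceeds a constraint."""
    warnings = []
    if plan.budget > limits.max_budget:
        warnings.append(f"Budget {plan.budget / limits.max_budget:.1f}x over limit")
    if plan.weeks > limits.max_weeks:
        warnings.append(f"Timeline {plan.weeks} weeks vs {limits.max_weeks} available")
    if plan.participants > limits.max_participants:
        warnings.append(f"{plan.participants} participants vs a realistic {limits.max_participants}")
    return warnings

# Mirrors the drift we observed: e.g. 30 participants proposed against a realistic 5.
print(flag_drift(Plan(budget=60_000, weeks=6, participants=30),
                 Constraints(max_budget=10_000, max_weeks=2, max_participants=5)))
```

The check is trivial on purpose: the hard part is not the arithmetic but remembering to run it, because AI plans tend to sound finished and credible long before they are feasible.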

Key takeaway: AI can support UX research, but human judgment is non-negotiable. Misplaced trust can lead to wasted effort, missed insights, or unworkable recommendations.

What's clear is that AI's limitations aren't primarily technical. They're contextual. UX research is not a data-processing problem; it's a sense-making discipline rooted in human behaviour, organisational constraints, and judgment under uncertainty. Those are precisely the areas where AI is most likely to produce outputs that sound credible while being subtly wrong.
This doesn't make AI irrelevant. It makes discernment essential. Teams that understand where AI breaks down are better positioned to use it safely—supporting structure, articulation, and synthesis without surrendering interpretation or decision-making.
For product teams, this distinction matters. The risk isn't that AI replaces researchers, but that it reshapes how decisions are justified, scoped, and defended. When confident-sounding recommendations are detached from behavioural nuance or delivery reality, the cost shows up downstream—in misaligned roadmaps, wasted cycles, and avoidable rework.
The final part of this series explores what all of this means for product: how teams should think about AI-assisted research, where it can add genuine value, and how to avoid letting tool capability quietly redefine research quality.

Let our research support you

If you'd like to learn more about UX at Adaptavist and how we can support your team, fill out our contact form to schedule a call.