AI in UX research: Part 1
Barbara Accioly
Published on 14 January 2026


AI boosts UX research operations like transcription and planning support, but humans remain essential for judgment, context, and interpretation.
One question now comes up in nearly every conversation: Can't we just use AI for this? You are probably asking it too, and across UX research and Product teams it has become a constant. What began as an experimental tool is rapidly becoming an assumed solution, positioned as a shortcut to faster synthesis, cleaner insights, and more targeted recommendations.
The appeal is undeniable. If AI could reliably analyse interviews, surface themes, and generate actionable findings, research cycles could shrink from weeks to days without compromising rigour. But when we move beyond theory and apply AI to real-world studies, the picture becomes more nuanced.
Some tasks AI handled exceptionally well. Others, it disrupted or diminished entirely.
Here I share what we've learned: not from product demos or hypothetical scenarios, but from direct, hands-on application across real projects. Our goal is to clarify where AI truly strengthens UX research today, and where it still introduces risk.

Why we put AI to the test

At Adaptavist, the UX team had reached an inflexion point. Some researchers were using AI. Others didn't trust it at all. Debates were shaped more by opinions and blog posts than by evidence. And with no shared standards, especially around risk and GDPR, progress stalled.
We created an AI Task Force and spent three months running controlled experiments across tools we were already licensed to use (a constraint that kept us GDPR-compliant): Claude Sonnet 4, ChatGPT-5, Google AI Studio, Gemini, NotebookLM, and familiar transcription tools such as Zoom AI. More importantly, we created a space for safe experimentation, where people could try, fail, compare results, and learn without judgment. Openness was essential: without it, the loudest opinions win and innovation stalls.
We tested AI against live work—customer churn analysis, B2B/B2C ecosystem mapping, and brand and messaging validation—with one clear objective:
To understand when AI genuinely strengthens UX research, and when it introduces risk.

The short version

If you take away only one point from this article, let it be this: AI shines at operational work but struggles with interpretive work.
Across tools, patterns, and project types, one consistent insight was that AI serves as a productivity multiplier for repetitive and mechanical tasks. But the moment a task requires judgment, context, or human interpretation, its limitations become clear.
You could stop reading here—and that would be the core insight. However, if you stick with me, I'll guide you through our experiments and share practical lessons on how to utilise AI to enhance UX research, rather than compromise it.

Where AI actually works today

1. Transcripts

We tested manual transcription against AI transcription on eight one-hour interviews. Manual transcription took roughly two hours per interview. With AI tools like Zoom AI, that time dropped to nearly zero—16 hours saved across eight interviews, and more than 100 hours per quarter for a moderately active research team.
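As a sanity check on those figures, here is a back-of-the-envelope sketch in Python. The two-hours-per-interview rate comes from our own study; the fifty-interviews-per-quarter volume is an assumption we are making here to illustrate the "moderately active team" claim, not a figure from the study.

```python
# Back-of-the-envelope check of the transcription savings quoted above.
# HOURS_PER_TRANSCRIPT is the rate we measured; INTERVIEWS_PER_QUARTER is
# an illustrative assumption for a "moderately active" team, not study data.

HOURS_PER_TRANSCRIPT = 2      # ~2 hours to manually transcribe a 1-hour interview
STUDY_INTERVIEWS = 8          # interviews in our manual-vs-AI comparison
INTERVIEWS_PER_QUARTER = 50   # assumption: roughly four interviews a week

print(f"Saved in the study:  {STUDY_INTERVIEWS * HOURS_PER_TRANSCRIPT} hours")        # 16 hours
print(f"Saved per quarter:   {INTERVIEWS_PER_QUARTER * HOURS_PER_TRANSCRIPT} hours")  # 100 hours
```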
Where we see real value:
  • Fast, searchable records of conversations
  • Instant recall of stakeholder discussions
  • The ability to focus on listening during interviews instead of note-taking
Where we draw the line:
  • AI transcripts are raw material, not analysis
  • We always review the recordings alongside transcripts to capture tone, pauses, and non-verbal cues that AI misses
Key takeaway: AI captures words. It does not capture meaning. Even in this obvious use case, human oversight is essential to preserve the richness of qualitative research.

2. Catching gaps in planning: AI as a strategic sparring partner

Using AI to challenge research plans delivered tangible benefits. Claude, in particular, excelled at prompting critical questions like:
  • "What are your success criteria?"
  • "How does this tie back to business objectives?"
Even senior researchers can overlook these under tight deadlines, making AI a valuable safety net.
We tested Claude Sonnet 4 and ChatGPT‑5 as simulated "experienced UX researchers" (a rough sketch of this setup follows the list below) to:
  • Question our methods
  • Identify logical gaps
  • Flag missing success criteria
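For the curious, here is a minimal sketch of what that framing can look like in practice, using the Anthropic Python SDK. The system prompt and model identifier below are illustrative stand-ins, not our exact setup.

```python
# Minimal sketch: casting the model as a critical reviewer of a research plan.
# Requires the `anthropic` package and ANTHROPIC_API_KEY set in the environment.
# Prompt wording and model id are illustrative, not our production setup.
import anthropic

REVIEWER_FRAMING = (
    "You are an experienced UX researcher reviewing a colleague's study plan. "
    "Do not agree by default. Question the chosen methods, identify logical "
    "gaps, and flag any missing success criteria. Give a rationale for every "
    "critique."
)

research_plan = "..."  # paste the draft research plan here

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model id
    max_tokens=1024,
    system=REVIEWER_FRAMING,
    messages=[{"role": "user", "content": research_plan}],
)
print(response.content[0].text)
```

The same framing works just as well pasted into a chat interface; scripting it simply makes the review repeatable across plans.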
Crucial point: AI was used as a supplement, not a replacement. Insights were valuable only when critically reviewed by human researchers who understood the context and nuance.

3. Challenging methodology choices

What we asked AI to do
We positioned the model not as a passive assistant, but as a critical thinking partner. Its role was to:
  • Analyse our plans and assumptions with rigour, not agree by default
  • Identify gaps, risks, and flawed reasoning we may have overlooked
  • Offer alternative perspectives and correct us clearly—with evidence-backed rationale
  • Help us reach sharper clarity, stronger logic, and higher-quality decision-making aligned to product outcomes
This framing was designed to support three key business goals:
  • Reduce churn and increase customer retention, driving revenue growth
  • Resolve high-impact experience friction, improving product usability and adoption
  • Strengthen brand positioning within the monday.com ecosystem, reinforcing value and differentiation
Keeping business context front and centre:
AI often reminded us of the information gaps we might be missing and tied its suggested methodology back to those gaps.
Why Claude stood out:
  • Structured, scannable outputs with clear step-by-step reasoning
  • Easy to extract actionable insights quickly
  • Less verbose than ChatGPT‑5, which tended to produce long, generalised responses that were harder to parse
The critical catch: In testing, roughly 55% of Claude's recommendations required correction. It forgot context, invented plausible-sounding KPIs, or suggested unrealistic timelines. This is not a rounding error: experienced researchers must stay in the loop at all times.
In practice, we now treat these tools as:
  • A thinking partner for experienced researchers
  • A way to improve quality, not to save time (planning still takes roughly as long as doing it yourself due to iterative back-and-forth).

4. Methodology reminders and theory refreshers

Another genuinely valuable use case: using AI as an on-demand research textbook.
Tools like Google AI Studio, Gemini, and ChatGPT‑5 helped us:
  • Remind ourselves of less commonly used methods, helping to achieve triangulation
  • Refresh knowledge on when particular methods are appropriate
  • Compare approaches at a high level
This is especially useful because researchers tend to fall into methodological ruts, defaulting to familiar approaches (e.g., in-depth interviews) even when alternatives might be better. AI prompted us to reconsider methods we already knew but weren't actively utilising.
Rule of thumb:
  • Use AI for "what exists?" and "what's this method for?"
  • Ignore AI's suggestions on timelines, sample sizes, and budgets—they were consistently disconnected from reality (e.g., recommending 30–35 participants over three months when we had two weeks and a budget for five participants).
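To make the rule of thumb concrete, here is an illustrative pair (our wording here, not prompts from the study):
  • Ask: "What methods besides in-depth interviews could help us triangulate churn drivers?"
  • Ignore: the model's answer to "How many participants do I need, and over what timeline?", because logistics depend on your real budget and deadlines, not on the model's defaults.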
AI is not a shortcut to doing research faster. It's a tool for doing the right parts of research faster—so we can spend more time on the work only humans can do.
In the next part of this series, we'll explore where AI fails in UX research, why that matters, and how to spot the risks before they impact your work.

Let our research support you

If you’d like to learn more about UX at Adaptavist and how we can support your team, fill in our contact form to schedule a call with us.