What's clear is that AI's limitations aren't primarily technical. They're contextual. UX research is not a data-processing problem; it's a sense-making discipline rooted in human behaviour, organisational constraints, and judgment under uncertainty. Those are precisely the areas where AI is most likely to produce outputs that sound credible while being subtly wrong.
This doesn't make AI irrelevant; it makes discernment essential. Teams that understand where AI breaks down are better positioned to use it safely, letting it support structure, articulation, and synthesis without surrendering interpretation or decision-making.
For product teams, the distinction between assistance and judgment matters. The risk isn't that AI replaces researchers; it's that AI reshapes how decisions are justified, scoped, and defended. When confident-sounding recommendations are detached from behavioural nuance or delivery reality, the cost shows up downstream: misaligned roadmaps, wasted cycles, and avoidable rework.
The final part of this series explores what all of this means for product: how teams should think about AI-assisted research, where it can add genuine value, and how to avoid letting tool capability quietly redefine research quality.