The real question for product and CX teams isn't whether to use AI in UX research, but how to do so without lowering the standard of thinking that good research requires.
AI is most effective when it supports clearly defined, low-risk tasks, such as capturing information, accelerating early thinking, or transforming outputs after human analysis is complete. Its value drops quickly when it's asked to interpret behaviour, resolve ambiguity, or make strategic judgements without a deep understanding of context.
Teams that succeed with AI will be the ones that treat it as infrastructure, not authority. They will standardise its use, train people to challenge its outputs, and put guardrails around where it can and cannot operate. Crucially, they will recognise that AI amplifies experience; it does not substitute for it.
Used this way, AI doesn't replace UX research. It protects the time and energy researchers need to do the work that still can't be automated: making sense of human behaviour, navigating constraints, and helping organisations make better product decisions under real-world conditions.