AI in Qualitative Research: 5 essential practices for quality at scale

Gina Henderson

Head of Qualitative

Learn how AI transforms qualitative research with best practices for moderation, sample quality, and analysis that preserve human insight.

We stand at a pivotal moment in qualitative research: artificial intelligence is no longer a guest in our discipline. It has become so embedded in our workflows that it’s now on the payroll, expected to contribute to every stage of the research process. Why? Because AI-powered qualitative research enables us to conduct qualitative-at-scale studies, reach more people efficiently, elevate our analysis, and deliver insights with unprecedented speed and agility. Our clients are embracing AI in their own organizations and expect us, as their research and growth partners, to do the same.

What is Qual at Scale?

This new era combines AI in qualitative research with traditional human insight, enabling researchers to conduct in-depth studies with 50-500+ participants while maintaining the contextual understanding that defines qualitative methodology.

The challenge now is to harness AI’s strengths while preserving what makes qualitative research unique: our human ability to interpret the spoken and unspoken, to understand nuance, and to tell compelling stories. This new era of analytical ability ushers in something we like to call “Qual at Scale”: combining the rich contextual understanding typical of qualitative research with the scale, precision, and speed of AI analysis. At Kantar Qualitative, we’re not just exploring this collaboration – we’re leading it. We’re seeing real shifts in how research is designed and executed when AI becomes a teammate. Here are five key ways Qual at Scale is changing research methodologies, and how we at Kantar have evolved our practices to deliver meaningfully different, impactful insights while maintaining our clients’ ability to make confident, strategic decisions.

1. Sample Source: Breadth Without Sacrificing Depth in AI-Moderated Research

One of AI’s greatest advantages is enabling the higher volume and reach achieved in Qual at Scale, where we can engage 50, 100, or even hundreds of people simultaneously. To accelerate recruitment, we often tap into nontraditional panels: quantitative panels, social media sources, and beyond. But here’s the catch: these people may not be accustomed to qualitative questioning. They’re used to quick surveys made up mostly of closed-ended questions, not the kind of reflective, open-ended exploration that qualitative research depends on. The result? Sometimes the quality of responses suffers. The old adage holds true: “garbage in, garbage out.”

While Qual at Scale offers unprecedented breadth, the output is only as strong as the human voice behind the data. To truly keep people at the center of this scaled approach, sample sources need to evolve. Even when we’re moving fast, we need people who are willing and able to share their perspectives, experiences, and feelings in a meaningful way. That is what brings depth to qualitative work, and it cannot be compromised.

The concern as scale grows is maintaining quality. Scaling up doesn’t change the foundation of qualitative research. Even at higher volume, we must ensure we recruit people who can offer rich, reflective input. Speed should never come at the expense of insight quality.

To maintain accuracy and directionality of results, qualitative work at Kantar includes a discerning human quality “check” layer. While the numbers AI can deliver look impressive, without this added layer, responses could end up lacking the depth needed for truly actionable insights. For us, scaling up doesn’t mean letting go of rigor. It means doubling down on it and building new processes that allow Qual at Scale to operate with efficiency, accuracy, and human sensibility.

2. Data Capture Methods for AI Moderation: Matching Method to Objective

When using AI moderation during fieldwork, it’s essential to consider both how people respond and the length of the interview. The mode of response (text, voice-to-text, or video) should be chosen based on the research objectives, as each works best in different circumstances. Text input is ideal for early concept evaluation or message testing, where quick reactions suffice. Voice-to-text is better suited for discovery and emotional exploration, as people tend to express themselves more naturally and with richer language. Video is invaluable for product and UX testing, where observing usage matters, and both audio and video become essential when hearing or seeing emotional reactions is critical to the learning.

Selecting the wrong method for AI moderation can impact the quality of results. AI-moderated experiences are typically shorter than live sessions because people get fatigued more quickly, especially with video. Text-based responses can lead to typing fatigue, with engagement dropping off as early as 10 minutes. Video brings its own challenges, with camera fatigue setting in rapidly. Voice-only responses, however, tend to sustain engagement for longer – often between 20 and 40 minutes – offering a more natural and less taxing experience.

For instance, when we ran an AI-moderated study using video capture, we shortened the discussion. As a result, engagement remained consistently high throughout the interview, leading to richer data from start to finish. So it’s critical to think through the respondent experience and how the method you are using shifts expectations, energy, and input.

3. AI Prompting and Question Design: Intentionality Is Key

Modern fieldwork platforms often feature AI-driven prompting, whether in partnership with a live researcher or through full-service AI moderation. In both scenarios, there may be limited opportunities for a human researcher to course-correct if the conversation veers off track. Kantar’s AI models deliver 89% accuracy vs. survey benchmarks, validating the reliability of AI-enabled interpretation. But that remaining 11% is where humans play a huge role. Researchers and clients must be intentional and thoughtful about how questions are posed. We need a clear sense of what we hope to learn, crafting initial questions and follow-up probes that set the stage for meaningful insights.

The respondent experience matters, too. Confusing or repetitive AI prompts can frustrate people, and that frustration inevitably shows up in the data. We want to avoid hearing “I’m not sure what you mean by that” or “I just answered that” – and we have seen both happen as we learn more about AI prompting in qualitative research. The skill of framing effective AI questions is now essential, especially when human intervention might be minimal.

This is where that discerning human editor becomes invaluable yet again. By reviewing and refining prompts generated by AI, our researchers ensure each question adds unique value to the conversation, creating a better experience for respondents and clearer results for our clients, all while operating at faster speeds with expanded data sets.

4. Projective Techniques in AI-Powered Research: The Human Touch Remains Essential

The “secret sauce” of qualitative research often lies in projective techniques. Tools leveraging these methods uncover people’s deeper, sometimes unconscious, motivations and beliefs. When leveraging AI in fieldwork, it’s important to know which projective techniques pair well with technology, and how to educate AI on their use. Success here depends on softer skills that AI may lack. Uncovering how something makes a person feel is tricky even in live research; it becomes even harder when relying on AI-led moderation.

Qualitative researchers must discern which techniques to use (or invent new ones) and train their AI partners to ensure we’re still uncovering the depth these methods are designed to reveal. As smart as AI is, human expertise remains indispensable.

Cultural context also matters. A projective technique that works beautifully in one culture may fall flat in another. Again, a researcher’s discernment is essential.

This also raises important questions about how much agency we give AI-moderation platforms that can autogenerate discussion guides. While these systems can take direction and construct solid questions using strategic inputs and contextual information, they are not yet capable of thoughtfully incorporating projective techniques with the nuance and precision of a skilled qualitative researcher. Human judgment still matters, greatly.

Without the human editing layer, classic projective exercises run via an AI platform can result in responses that are literal and miss the emotional nuance we typically see in live groups. The best AI techniques still need a human touch to unlock their full potential and ensure contextual accuracy.

5. AI-Enhanced Analysis and Deliverables: Collaboration Elevates Insights

With AI and human researchers working side by side, analysis has reached new heights. Our platforms feature embedded AI analysis, and proprietary Kantar tools leverage established frameworks to elevate deliverables. Yet, human review and editing remain crucial to capture nuance and context. As Tara Prabhakar notes: “Humans bring the ability to read between the lines, uncover unspoken meaning, and empathize with others…” – and importantly, connect insights to business realities.

AI can accelerate pattern recognition, but only humans understand the larger business context – brand ambition, competitive dynamics, category nuance – which is essential for delivering insights with real impact.

Report generation is also evolving. In Qual at Scale, we’re not just reporting with words, we’re incorporating numbers. Once considered taboo in qualitative research, quantitative-style displays now help clarify themes at scale. Qualitative researchers must learn to present these findings effectively, often collaborating with quantitative colleagues to ensure we communicate insights in a way that resonates with clients.

It’s important to note that while Qual at Scale gives us larger sample sizes, it does not confer statistical rigor. A group of 75 or 100 people isn’t meant to represent a whole population, but it is enough to identify meaningful patterns – best conveyed through qualitative commentary supported by quant-style charts and graphs as visual reinforcement. The numbers support the story; they are not the story.

As we move beyond the initial shock and uncertainty of AI’s impact on qualitative research, it’s vital to keep these considerations in mind for optimal outcomes when implementing AI-powered qualitative research at scale. At Kantar Qualitative, we believe these are essential steps, but not the final word. The blend of AI and human insight will continue to evolve rapidly, and there’s always more to learn.

Discover how Kantar’s quintessentially human qualitative research helps you grow your brand with meaning, difference, and authenticity. Learn more here.
