
Primary Research for Consulting Insights Teams: A Playbook

A playbook for Insights Managers at strategy consultancies — how to scale primary research for case teams without growing headcount.


InsightAgent Team

April 14, 2026

Most articles about "AI for consulting" read like a McKinsey infographic. This isn't one of them.

If you run or work inside an Insights Center at a strategy consultancy, your problem isn't that AI sounds interesting — it's that you have six case teams asking for expert calls this week and three associates to staff against them. Quality isn't the bottleneck. Capacity is. And throwing headcount at the problem takes nine months to pay off and blows up your utilization math.

This playbook is for Insights Managers, Research Managers, and Engagement Managers at firms like LEK, BCG, McKinsey, Oliver Wyman, Bain, and the long tail of vertical-specialist boutiques. It covers what AI-moderated expert interviews actually change about the five workflows your function runs every day — and what to leave alone.

Where consulting primary research actually breaks

Every Insights leader I've talked to describes the same failure mode. It's not that the team can't produce good primary research. It's that they can't produce it fast enough on enough cases simultaneously to keep all the case teams unblocked.

The queue is the bottleneck.

A typical Insights function supports 8–15 active cases at any moment. Each case has 6–30 expert interviews queued across its timeline. The case teams all want their interviews this week. Meanwhile, the Insights Center runs on 4–8 associates, each of whom can realistically run and synthesize 3–5 calls per week before quality drops.

The math doesn't work. Something gives:

  1. Cases wait. The case team's working hypothesis gets tested with three interviews instead of twelve. Partners notice. Clients notice.
  2. Expert networks eat the budget. When Insights can't staff internally, case teams escalate to GLG, AlphaSights, Guidepoint, or Third Bridge. The per-hour credits compound across the engagement.
  3. Associates burn out. The Insights team sprints through every case, and attrition becomes a structural tax on the function.

None of these are AI problems in the normal sense. They're capacity problems. And the thing that changes capacity is the ability to run interviews in parallel, on your scripts, without adding human hours — which is exactly what AI-moderated expert calls enable.
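To make the capacity gap concrete, here's a back-of-envelope sketch using the midpoints of the ranges above. Every input is an illustrative assumption; plug in your own function's numbers.

    # Back-of-envelope capacity math. All inputs are illustrative
    # assumptions drawn from the ranges above, not real function data.
    active_cases = 11              # midpoint of 8-15 active cases
    interviews_per_case = 18       # midpoint of 6-30 queued interviews
    case_window_weeks = 4          # assumed average engagement window

    associates = 6                 # midpoint of 4-8 Insights associates
    calls_per_associate_week = 4   # midpoint of 3-5 calls before quality drops

    weekly_demand = active_cases * interviews_per_case / case_window_weeks
    weekly_capacity = associates * calls_per_associate_week

    print(f"Demand:    {weekly_demand:.0f} interviews/week")               # ~50
    print(f"Capacity:  {weekly_capacity} interviews/week")                 # 24
    print(f"Shortfall: {weekly_demand - weekly_capacity:.0f} per week")    # ~26

On those assumptions, roughly half the queued interviews slip every week, which is exactly the serial-queue failure mode described above.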

(For context on how expert network spend has shifted over the last two years, see our piece on the end of the expert network broker.)

The five workflows that unblock the Insights function

Here's what actually changes when you drop AI-moderated expert interviews into a strategy consulting workflow. Not every workflow benefits equally — the ones below are where we see the sharpest return.

1. Commercial Due Diligence Interviews

What it is: Customer, supplier, former-employee, and channel interviews run in support of a CDD engagement for a PE or corporate buyer.

Why it matters: Commercial DD is the single highest-volume primary research workflow at most strategy consultancies. A typical engagement calls for 15–30 expert interviews inside a 2–3 week diligence window. That's more than an Insights associate can run solo while also synthesizing.

What changes: You drop in your question list and your expert contact list. The AI agent runs the interviews in parallel — three or four concurrent calls across the week — and returns full transcripts, key quotes, and structured data extraction formatted for the case team's deck. Your associate's day shifts from running calls to reviewing transcripts and synthesizing the so-what.
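For the associate doing the review, it helps to picture the shape of what comes back per interview. The sketch below is a hypothetical schema for illustration only, not InsightAgent's actual data format.

    # Hypothetical per-interview output an associate reviews (illustrative
    # schema only; field names are assumptions, not a real API contract).
    from dataclasses import dataclass, field

    @dataclass
    class InterviewOutput:
        expert_role: str                  # e.g. "former VP Ops at a key supplier"
        transcript: str                   # full verbatim transcript
        key_quotes: list[str]             # deck-ready pull quotes
        answers: dict[str, str] = field(default_factory=dict)
        # `answers` maps each scripted question to the expert's response, so
        # findings across 15-30 interviews can be compared side by side.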

Honest assessment: This is the highest-leverage workflow and the easiest sell internally. Case teams already understand the CDD motion; swapping one step of it (dialing + running) doesn't require a process rewrite.

2. Market-Sizing Channel Checks

What it is: Customer and distributor interviews in support of TAM/SAM work, market entry studies, or growth strategy engagements.

Why it matters: Market-sizing work is expert-interview-intensive but often deprioritized in the Insights queue because it's slower-building and less deadline-critical than live-deal diligence. Market-entry cases suffer the most: teams ship a sized opportunity based on 4 interviews when they wanted 20.

What changes: You can now run 20. Same script, same quality bar, parallel execution. The sized opportunity gets real confidence intervals around it because the underlying interview sample is large enough to support them.
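The statistical intuition is simple: the uncertainty around an interview-based estimate shrinks roughly with the square root of the sample size. A minimal sketch, with an assumed spread of per-expert estimates:

    import math

    def ci_halfwidth(sample_sd: float, n: int, z: float = 1.96) -> float:
        """Approximate 95% confidence half-width for a sample mean."""
        return z * sample_sd / math.sqrt(n)

    sd = 30.0  # assumed spread ($M) of per-expert market-size estimates
    for n in (4, 20):
        print(f"n={n:>2}: point estimate +/- ${ci_halfwidth(sd, n):.0f}M")
    # n= 4: +/- $29M
    # n=20: +/- $13M  -> more than twice as tight with 20 interviews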

Honest assessment: This is the workflow where the volume argument is strongest. The unit economics flip completely once you're not paying for human time per interview.

3. Policy and Regulatory Expert Scans

What it is: Interviews with former regulators, policy advisors, compliance officers, and industry associations in support of healthcare, energy, financial services, and fintech engagements.

Why it matters: These experts are famously hard to schedule. A conversation with a former Medicare policy advisor can take three weeks to arrange through an expert network. When the case team needs it in five days, something breaks.

What changes: You bring your own expert (former colleague, board contact, or direct sourcing). The AI runs the interview on your schedule — often same-day. The script is your script, not a repurposed one from another client.

Honest assessment: The constraint here isn't AI capability — it's expert sourcing. If you don't have a way to find the right regulator, AI-moderated interviews don't help. If you do, they collapse the scheduling loop from weeks to days.

4. Proposal-Phase Scoping Interviews

What it is: 2–4 quick interviews during proposal writing to scope the case, price it, and de-risk key assumptions before the client signs.

Why it matters: This is the workflow everyone wants to run and nobody can staff. You can't justify booking an Insights associate against work that isn't won yet. But not running the interviews means the proposal is written blind, which hurts win rates and pricing accuracy.

What changes: Proposal teams run 2–3 scoping interviews without queueing against live case work. Same-day turnaround on the transcripts. The proposal reflects real market reads instead of desk research and pattern-matching from past engagements.

Honest assessment: This is the workflow most likely to change your firm's win rate, not just its delivery speed. That makes it the easiest one to get leadership attention on.

5. Deliverable QA and Fact-Check Interviews

What it is: Short, high-stakes follow-up interviews to verify specific claims, data points, or expert quotes before the deliverable ships to the client.

Why it matters: QA calls are small but brutal on timing. You find out you need to verify a claim 48 hours before the steering committee meeting. There's no room in the Insights queue.

What changes: You get the verification interview scheduled and transcribed inside a 24-hour window. The deck gets shipped on time with defensible primary-source backing on the spots the Partner wanted to double-check.

Honest assessment: Low volume, high leverage. This is the workflow that protects the firm's reputation on every deliverable, which makes it disproportionately valuable even though the call count is small.

What NOT to automate

The workflows above are where AI-moderated interviews change the math. The workflows below are where you should leave the motion alone.

Client-sensitive interviews. If the CEO of the portfolio company needs to speak with the Partner directly — for relationship, commercial, or trust reasons — AI doesn't belong in the room. That's a Partner's call, not a research call.

Synthesis and "so what" interpretation. The value a case team extracts from an expert interview is in the interpretation layer — connecting what the expert said to the client's strategic question. That's where your associates earn their keep. AI runs the interview; humans own the analysis.

Live negotiations and sensitive disclosure. Some calls involve negotiating what the expert can and can't say, reading body-language cues, or reacting to new information in ways that shape the next question. Those remain human.

Anything that builds a relationship the firm wants to own. If the expert could become a future source, a steering committee member, or an introduction to a client — a human should run the call. AI-moderated interviews are for throughput, not relationship-building.

The distinction is simple: automate the capacity-bound work, protect the judgment-bound work.

How Insights leadership should pilot this

Based on how the strategy consultancies we talk to have started down this path, here's the pattern that works:

Pick one live case with a heavy interview load. Commercial due diligence is almost always the right choice — it has the highest volume, the clearest ROI, and the tightest timeline. Pick a case that has 10+ interviews queued and a 2-week diligence window.

Don't swap out your expert network spend yet. Run InsightAgent alongside your GLG or AlphaSights motion, not instead of it. Your goal in the first case is to measure whether the AI-run interviews deliver case-team-usable output, not to prove a cost-savings narrative.

Involve one Insights Manager, one Engagement Manager, and one case team associate. Keep the pilot scope small enough that you can iterate on scripts and output formats within the 2-week window. This is not a firm-wide rollout; it's a working session.

Measure three things (a minimal tracking sketch follows the list):

  1. Time from question list to transcript — should be under 4 hours for a standard commercial DD interview.
  2. Case-team usability of the output — did the associate paste quotes directly into the deck, or did they have to re-run the interviews?
  3. Parallel capacity — how many interviews ran concurrently, and how did that compare to your usual serial pace?
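One way to keep the pilot honest is to log a record per interview and compute the three metrics from it. The sketch below is illustrative; the field names and the re-run proxy for usability are assumptions, not a prescribed format.

    from dataclasses import dataclass

    @dataclass
    class PilotInterview:
        hours_to_transcript: float  # question list submitted -> transcript back
        needed_rerun: bool          # did the associate have to re-run it?
        ran_concurrently: bool      # part of a parallel batch, or serial?

    def scorecard(interviews: list[PilotInterview]) -> dict[str, float]:
        n = len(interviews)
        return {
            "pct_under_4h": sum(i.hours_to_transcript <= 4 for i in interviews) / n,
            "pct_usable":   sum(not i.needed_rerun for i in interviews) / n,
            "pct_parallel": sum(i.ran_concurrently for i in interviews) / n,
        }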

If the first case produces usable output and the Insights Manager walks out the other side wanting to run the next one the same way, you have an anchor. That's when you expand to a second case team, and then to the full function.

(For a broader look at the shift happening across consulting services right now, see our analysis of AI voice agents for PE commercial due diligence.)

The unlock: parallel capacity

The reason this matters for the Insights function specifically — more than for a hedge fund research desk or a PE deal team — is that consulting primary research has a structural parallelism problem.

Every engagement runs against a fixed clock. The case team has six weeks. Interviews land on a best-effort basis inside that window. Insights allocates associates in a serial queue: case A this week, case B next week, case C the week after. Case teams live with that.

But the business doesn't have to. An AI agent running four parallel commercial DD interviews on a Tuesday afternoon is a completely different structural constraint than an associate running one interview at a time. The math on how many cases your Insights Center can actually support shifts. Headcount planning shifts. Expert-network budget shifts.
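A rough sketch of that shift, again on illustrative assumptions: under a serial queue, cases supported is bounded by the calls an associate can personally run; under parallel execution, it's bounded by how many transcripts an associate can review and synthesize.

    # Serial vs. parallel cases supportable per week. Illustrative assumptions.
    interviews_per_case_week = 5   # assumed steady-state need per live case
    associates = 6

    calls_run_each = 4             # serial: calls an associate runs personally
    reviews_each = 12              # parallel: transcripts an associate reviews

    serial_cases = associates * calls_run_each / interviews_per_case_week
    parallel_cases = associates * reviews_each / interviews_per_case_week

    print(f"Serial queue:       ~{serial_cases:.0f} cases")    # ~5
    print(f"Parallel execution: ~{parallel_cases:.0f} cases")  # ~14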

The Insights Centers that are winning on this aren't the ones asking "how do we replace our associates." They're the ones asking "how do we give our associates 3x the interview output without adding headcount." That's the real lever.

What's overhyped

Not everything in the AI-for-consulting conversation is real. Three things to watch out for:

"AI will replace your Insights function." It won't. The interpretation layer, the client relationship, and the case-team synthesis are all human work that compounds in value the more cases your function runs. What changes is the ratio of capacity-bound to judgment-bound work.

"AI generates deliverable-ready output directly." No it doesn't. AI produces transcripts, summaries, and structured data. A human turns those into the client deck. Anyone selling "AI generates the slide" is lying or has never shipped a CDD deck.

"One platform does everything." Primary research, synthesis, visualization, deck production, knowledge management, proposal writing — these are separate workflows and separate software categories. A single vendor covering all of them is almost certainly weaker at each.

Getting started

If you run an Insights function and you want to try this without betting the farm, here's the practical path:

  1. Audit one past commercial DD engagement. Count the interviews, the time they took, and the expert-network spend. This is your baseline (a minimal sketch of the math follows this list).
  2. Pick the next CDD case as your pilot. Not a current one — give yourself a week of prep.
  3. Script the interviews the way you normally would. Do not over-engineer the scripts for AI. The AI should adapt to your existing question patterns.
  4. Run the first case in parallel with your normal Insights motion. AI-moderated calls on half the interviews, human-run on the other half. Compare outputs directly.
  5. Debrief with the case team, not just with Insights. The consumer of the research matters more than the producer when you're deciding whether to scale.
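For step 1, the baseline math is deliberately crude. A minimal sketch, with every number an assumption you should replace with your own engagement's actuals:

    interviews = 22                  # assumed interview count from the audit
    hours_per_interview = 3.0        # assumed prep + call + synthesis hours
    loaded_hourly_cost = 150         # assumed loaded associate cost, $/hour
    network_spend = 40_000           # assumed expert-network credits, $

    baseline = interviews * hours_per_interview * loaded_hourly_cost + network_spend
    print(f"Baseline primary-research cost: ${baseline:,.0f}")  # $49,900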

The firms that get this right aren't chasing a transformation story. They're adopting one workflow change that adds 2–3x parallel interview capacity to a function that used to run serial. That's it. That's the playbook.


InsightAgent runs AI-moderated expert interviews for strategy consultancies, research firms, and direct investors. See how the workflow for consultancies fits together.
