Harvard Business Review: Generative AI Is Reshaping Early-Stage Market Research

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 07/24/2025
In News

Generative AI is opening a new chapter in early-stage market research, offering companies a faster and more cost-effective way to test product ideas by simulating customer feedback, according to a new Harvard Business Review article by James Brand, Ayelet Israeli, and Donald Ngwe.


The research explores how large language models (LLMs) like ChatGPT and Gemini can act as “synthetic customers,” responding to product comparisons and estimating willingness-to-pay (WTP) — a process typically handled by time-consuming and expensive human studies. In trials across categories including toothpaste and tech devices, LLM-generated responses often mirrored those from actual consumers.


“Used responsibly, LLMs can flag weak ideas early and highlight promising directions before a single human survey is conducted,” the authors wrote.


By structuring queries in the style of a conjoint survey, the researchers tested LLMs on tradeoffs among price, features, and configurations. The models performed well at estimating average consumer preferences, such as how much buyers value additional RAM in a laptop, but struggled with segmentation: preference differences across political or income groups were often exaggerated or inconsistent.
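
The article does not publish the authors’ prompts, but a conjoint-style choice task posed to a “synthetic customer” might look something like the minimal sketch below, written against OpenAI’s Python SDK. The model name, attribute levels, prices, and panel size are illustrative assumptions, not details from the study.

```python
# Illustrative sketch only: the study's exact prompts and models are not
# published. The model name, attribute levels, and prices are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A conjoint-style choice task: the "synthetic customer" trades off
# extra RAM against a higher price.
PROMPT = """You are a customer shopping for a laptop.

Option A: 8 GB RAM, 256 GB SSD, $699
Option B: 16 GB RAM, 256 GB SSD, $899

Which option do you choose? Answer with exactly "A" or "B"."""

def ask_synthetic_customers(n_respondents: int = 50) -> dict:
    """Query the model repeatedly and tally choices, mimicking a survey panel."""
    tally = {"A": 0, "B": 0}
    for _ in range(n_respondents):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; substitute your own
            temperature=1.0,      # keep sampling noise, like varied respondents
            messages=[{"role": "user", "content": PROMPT}],
        )
        answer = resp.choices[0].message.content.strip().upper()
        if answer in tally:
            tally[answer] += 1
    return tally

if __name__ == "__main__":
    print(ask_synthetic_customers())  # e.g. {'A': 18, 'B': 32}
```

Repeating the same task while sweeping Option B’s price would trace out a rough willingness-to-pay curve, much as a conjoint survey does with human respondents.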


One of the study’s key findings: fine-tuning LLMs with proprietary customer data significantly improves performance. When trained on past survey results, the models produced more accurate insights and even corrected their unrealistic enthusiasm for unusual product ideas like “pancake toothpaste.”
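
The article does not specify how the fine-tuning was done. As one plausible sketch, past survey responses could be converted into chat-format training examples and submitted to OpenAI’s fine-tuning endpoint; the field names, file path, and base model below are hypothetical stand-ins for a firm’s proprietary data pipeline.

```python
# Hypothetical sketch: the authors' actual fine-tuning setup is not described.
# Field names, the file path, and the base model are assumptions.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Each record pairs a choice task shown in a past survey with the answer a
# real respondent gave; in practice this comes from proprietary data.
past_responses = [
    {
        "task": "Option A: 8 GB RAM, $699\nOption B: 16 GB RAM, $899\nChoose A or B.",
        "answer": "B",
    },
    # ... many more historical responses ...
]

# Write chat-format JSONL, the structure the fine-tuning API expects.
with open("survey_finetune.jsonl", "w") as f:
    for row in past_responses:
        example = {
            "messages": [
                {"role": "system", "content": "Answer as a typical customer."},
                {"role": "user", "content": row["task"]},
                {"role": "assistant", "content": row["answer"]},
            ]
        }
        f.write(json.dumps(example) + "\n")

# Upload the training file and launch the fine-tuning job.
training_file = client.files.create(
    file=open("survey_finetune.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # assumed base model
)
print(job.id)  # poll the job, then query the resulting tuned model instead
```

The intuition is straightforward: anchoring the model in how real customers actually answered past choice tasks tempers its enthusiasm for novelties like “pancake toothpaste.”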


Still, the researchers caution against replacing human research entirely. “Synthetic customers” can accelerate early-stage exploration, but they lack the nuance and variability of real human responses, particularly around emotional, demographic, or emerging market factors.


The biggest advantage is speed and scale. Traditional studies can take weeks and cost tens of thousands of dollars, while LLM-based simulations can be run in hours, letting companies test dozens of product variations and reserve human validation for the most promising concepts.


As the authors conclude: “The firms that learn to blend synthetic and human insights will lead the next wave of customer-centric innovation.”


Need Help?


If you have questions or concerns about how to navigate the global AI regulatory landscape, reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

