Wharton Blueprint Offers Research-Backed Guide to Designing High-Impact AI Chatbots

Written by Jeremy Werner

Jeremy is an experienced journalist, skilled communicator, and constant learner with a passion for storytelling and a track record of crafting compelling narratives. He has a diverse background in broadcast journalism, AI, public relations, data science, and social media management.
Posted on 05/15/2025
In News

The Wharton School of the University of Pennsylvania, in partnership with research platform Science Says, has released a groundbreaking report titled “The Wharton Blueprint for Effective AI Chatbots.” This 60-page blueprint provides data-driven, actionable insights to help organizations create AI-powered chatbots that improve customer satisfaction, build trust, and drive business outcomes.


Drawing on cutting-edge behavioral science and marketing research, the blueprint identifies the most effective strategies for chatbot deployment across industries. From retail and healthcare to finance and government, the report outlines practical guidance for when to use AI chatbots, when to blend them with human oversight, and when to rely entirely on human agents.


A central finding of the report is that how a chatbot presents itself—whether human-like or machine-like—significantly impacts customer perceptions. For example, users prefer human-like chatbots for delivering good news or offering personalized services, such as flight upgrades or spa recommendations. In contrast, machine-like bots are more effective in situations involving sensitive or embarrassing topics, such as insurance claims or healthcare disclosures, where users value objectivity and non-judgment.


Another key insight is the importance of trust-building. Users are more likely to accept recommendations and engage with AI tools that are labeled as “learning” or “constantly improving.” Speed also matters: fast responses are perceived as more accurate, competent, and trustworthy. In high-pressure scenarios, such as travel rebookings or emergency customer service, machine-like bots outperformed empathetic chatbots in user satisfaction by up to 15.7%.


The blueprint also warns against common pitfalls. Human-like chatbots can unintentionally amplify user dissatisfaction, particularly when customers are angry or frustrated. Overly “cute” chatbot avatars reduce trust in critical contexts like legal or medical advice, while humor can reinforce stereotypes and alienate marginalized users. Companies are urged to be intentional about AI tone, transparency, and use-case appropriateness.


Professor Stefano Puntoni, co-director of Wharton Human-AI Research and co-author of the report, emphasized that organizations should prioritize ethical design and avoid one-size-fits-all implementations. “AI’s value isn’t just in automation—it’s in building systems that understand when to step in and when to step back,” he said.


As more companies integrate generative AI into customer-facing roles, Wharton’s blueprint aims to offer clarity in a fast-moving landscape—highlighting that effective chatbot design is not just about capability, but about context, psychology, and human-centered thinking.


Need Help?


If you’re concerned or have questions about how to navigate the U.S. or global AI regulatory landscape, don’t hesitate to reach out to BABL AI. Their Audit Experts can offer valuable insight and ensure you’re informed and compliant.

Subscribe to our Newsletter

Keep up with the latest on BABL AI, AI Auditing, and AI Governance news by subscribing to our newsletter.