News Release

Bad news? Send an AI. Good news? Send a human

News from the Journal of Marketing

Peer-Reviewed Publication

American Marketing Association

Researchers from the University of Kentucky, the University of Technology Sydney, and the University of Illinois Chicago published a new paper in the Journal of Marketing that examines the customer response and satisfaction implications of using AI agents versus human agents.

The study, forthcoming in the Journal of Marketing, is titled “Bad News? Send an AI. Good News? Send a Human” and is authored by Aaron Garvey, TaeWoo Kim, and Adam Duhachek.

Are we more forgiving of an artificial intelligence (AI) agent than a human when we are let down? Less appreciative of an AI bot than a human when we are helped? New research examines these questions and discovers that consumers respond differently to favorable and unfavorable treatment at the hands of an AI agent versus another human.
 
Consumers and marketing managers are currently in a period of technological transition in which AI agents are increasingly replacing human representatives. AI agents have been adopted across a broad range of consumer domains to handle customer transactions, including traditional retail, travel, ride and residence sharing, and even legal and medical services. Given AI agents’ advanced information-processing capabilities and labor cost advantages, the transition away from human representatives for administering products and services is expected to continue. However, what are the implications for customer response and satisfaction?
 
The researchers find that when a product or service offer is worse than expected, consumers respond better when dealing with an AI agent. In contrast, for an offer that is better than expected, consumers respond more favorably to a human agent. Garvey explains, “This happens because AI agents, compared to human agents, are perceived to have weaker personal intentions when making decisions. That is, since an AI agent is a non-human machine, consumers typically do not believe that an AI agent’s behavior is driven by underlying selfishness or kindness.” As a result, consumers believe that AI agents lack selfish intentions (which would typically be punished) in the case of an unfavorable offer and lack benevolent intentions (which would typically be rewarded) in the case of a favorable offer.
 
Designing an AI agent to appear more humanlike can change consumer response. For example, a service robot that appears more humanlike (e.g., with human body structure and facial features) elicits more favorable responses to a better-than-expected offer than a more machinelike AI agent without human features. This occurs because AI agents that are more humanlike are perceived to have stronger intentions when making the offer. 

What does this mean for marketing managers? Kim says, “For a marketer who is about to deliver bad news to a customer, an AI representative will improve that customer’s response. This would be the best approach for negative situations such as unexpectedly high price offers, cancellations, delays, negative evaluations, status changes, product defects, rejections, service failures, and stockouts. However, good news is best delivered by a human. Unexpectedly positive outcomes could include expedited deliveries, rebates, upgrades, service bundles, exclusive offers, loyalty rewards, and customer promotions.”

Managers can apply these findings to prioritize (vs. postpone) human-to-AI role transitions in situations where negative (vs. positive) interactions are more frequent. Moreover, even when a role is not handed over entirely to an AI agent, selectively recruiting an AI agent to disclose certain negative information can still be advantageous. Firms that have already transitioned to consumer-facing AI agents, including the multitude of online and mobile applications that use AI-based algorithms to create and administer offers, also stand to benefit from the findings. The research reveals that AI agents should be selectively made to appear more or less humanlike depending upon the situation.
 
For consumers, these findings reveal a “blind spot” when dealing with AI agents, particularly when considering offers that fall short of expectations. Indeed, the research raises an ethical dilemma around the use of AI agents: is it appropriate to use AI to bypass consumer resistance to poor offers? “We hope that making consumers aware of this phenomenon will improve their decision quality when dealing with AI agents, while also providing marketing managers with techniques, such as making AI more humanlike in certain contexts, for managing this dilemma,” says Duhachek.

Full article and author contact information available at: https://doi.org/10.1177/00222429211066972

About the Journal of Marketing 

The Journal of Marketing develops and disseminates knowledge about real-world marketing questions useful to scholars, educators, managers, policy makers, consumers, and other societal stakeholders around the world. Published by the American Marketing Association since its founding in 1936, JM has played a significant role in shaping the content and boundaries of the marketing discipline. Christine Moorman (T. Austin Finch, Sr. Professor of Business Administration at the Fuqua School of Business, Duke University) serves as the current Editor in Chief.
https://www.ama.org/jm

About the American Marketing Association (AMA) 

As the largest chapter-based marketing association in the world, the AMA is trusted by marketing and sales professionals to help them discover what is coming next in the industry. The AMA has a community of local chapters in more than 70 cities and on 350 college campuses throughout North America. The AMA is home to award-winning content, PCM® professional certification, premier academic journals, and industry-leading training events and conferences.
https://www.ama.org

