When algorithms go bad: How consumers respond
Researchers from the University of Texas at Austin and Copenhagen Business School have a new paper forthcoming in the Journal of Marketing that offers actionable guidance to managers on deploying algorithms in marketing contexts.
The study, titled "When Algorithms Fail: Consumers' Responses to Brand Harm Crises Caused by Algorithm Errors," is authored by Raji Srinivasan and Gulen Sarial-Abi.
Marketers increasingly rely on algorithms to make important decisions. The Facebook News Feed is a perfect example: you do not know why some of your posts appear in certain people's News Feeds and others do not, but Facebook does. Amazon's book and product recommendations are likewise driven by algorithms. Algorithms are software, and like any software they are far from perfect; some fail spectacularly. Add the glare of social media, and a small glitch can quickly escalate into a brand harm crisis and a massive PR nightmare. Yet we know little about how consumers respond to brands following such crises.
First, the research team finds that consumers penalize brands less when an algorithm, rather than a human, makes the error that triggers a brand harm crisis. Consumers perceive the algorithm as having less agency over the error, and therefore less responsibility for the harm caused, and these perceptions mediate their less negative responses to the brand following such a crisis.
Second, when the algorithm is more humanized, consumers' responses to the brand following a brand harm crisis caused by an algorithm error are more negative. This occurs when the algorithm is anthropomorphized (e.g., Alexa, Siri) rather than not, uses machine learning rather than not, or is used in a subjective (vs. objective) or an interactive (vs. non-interactive) task. Srinivasan says, "Marketers must be aware that in contexts where the algorithm appears more human, it is wise to exercise heightened vigilance in deploying and monitoring algorithms and to provide resources for managing the aftermath of brand harm crises caused by algorithm errors."
This study also generates insights about how to manage the aftermath of brand harm crises caused by algorithm errors. Managers can highlight the role of the algorithm and its lack of agency over the error, which may reduce consumers' negative responses to the brand. However, highlighting the role of the algorithm will increase consumers' negative responses to the brand when the algorithm is anthropomorphized, uses machine learning, or when the error occurs in a subjective or an interactive task, all of which tend to humanize the algorithm.
Finally, the findings indicate that marketers should not publicize human supervision of algorithms (which may in fact be effective in fixing them) in communications with customers following brand harm crises caused by algorithm errors. They should, however, publicize technological supervision of the algorithm when they use it. The reason? Consumers respond less negatively to a brand harm crisis when the algorithm is under technological supervision.
"Overall, our findings suggest that people are more forgiving of algorithms used in marketing when they fail than they are of humans. We see this as a silver lining to the growing usage of algorithms in marketing and their inevitable failures in practice," says Sarial-Abi.