ChatGPT statements can influence users' moral judgments
Human responses to moral dilemmas can be influenced by statements written by the artificial intelligence chatbot ChatGPT, according to a study published in Scientific Reports. The findings indicate that users may underestimate the extent to which their own moral judgments can be influenced by the chatbot.
Sebastian Krügel and colleagues asked ChatGPT (powered by the artificial intelligence language processing model Generative Pretrained Transformer 3) multiple times whether it is right to sacrifice the life of one person in order to save the lives of five others. They found that ChatGPT wrote statements arguing both for and against sacrificing one life, indicating that it is not biased towards a certain moral stance.
The authors then presented 767 U.S. participants, who were on average 39 years old, with one of two moral dilemmas that required them to choose whether to sacrifice one person's life to save five others. Before answering, participants read a statement provided by ChatGPT arguing either for or against sacrificing one life to save five. Statements were attributed either to a moral advisor or to ChatGPT. After answering, participants were asked whether the statement they had read influenced their answers.
The authors found that participants were more likely to judge sacrificing one life to save five as acceptable or unacceptable depending on whether the statement they read argued for or against the sacrifice. This was true even when the statement was attributed to ChatGPT. These findings suggest that participants may have been influenced by the statements they read, even when they knew the statements came from a chatbot.
Eighty percent of participants reported that their answers were not influenced by the statements they read. However, the answers participants believed they would have given without reading the statements were still more likely to agree with the moral stance of the statement they did read than with the opposite stance. This indicates that participants may have underestimated the influence of ChatGPT's statements on their own moral judgments.
The authors suggest that the potential for chatbots to influence human moral judgments highlights the need for education to help humans better understand artificial intelligence. They propose that future research could design chatbots that either decline to answer questions requiring a moral judgment or answer these questions by providing multiple arguments and caveats.
More information: Sebastian Krügel et al., ChatGPT's inconsistent moral advice influences users' judgment, Scientific Reports (2023). DOI: 10.1038/s41598-023-31341-0. www.nature.com/articles/s41598-023-31341-0
Journal information: Scientific Reports
Provided by Nature Publishing Group