
ChatGPT statements can influence users' moral judgments

Credit: Pixabay/CC0 Public Domain

Human responses to moral dilemmas can be influenced by statements written by the artificial intelligence chatbot ChatGPT, according to a study published in Scientific Reports. The findings indicate that users may underestimate the extent to which their own moral judgments can be influenced by the chatbot.

Sebastian Krügel and colleagues asked ChatGPT (powered by the artificial intelligence language processing model Generative Pretrained Transformer 3) multiple times whether it is right to sacrifice the life of one person in order to save the lives of five others. They found that ChatGPT wrote statements arguing both for and against sacrificing one life, indicating that it is not biased towards a certain moral stance.
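The paper's exact querying procedure is not reproduced here, but it is straightforward to approximate. Below is a minimal sketch, assuming the openai Python client and a GPT-3.5-era chat model as a stand-in for the GPT-3-powered ChatGPT the study used; the prompt wording, model name, and number of repetitions are illustrative assumptions, not the authors' materials.

```python
# Sketch: pose the same moral dilemma to a chat model several times and
# collect the answers, to see whether its stance varies between runs.
# Model name, prompt, and repetition count are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

PROMPT = (
    "Is it right to sacrifice the life of one person "
    "in order to save the lives of five others?"
)

responses = []
for _ in range(10):  # ask the identical question repeatedly
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed stand-in, not the study's exact model
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # sampling noise is what lets the stance differ across runs
    )
    responses.append(completion.choices[0].message.content)

# Reading the collected answers side by side shows whether the model argued
# for the sacrifice in some runs and against it in others.
for i, text in enumerate(responses, 1):
    print(f"--- run {i} ---\n{text}\n")
```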

The authors then presented 767 U.S. participants, who were on average 39 years old, with one of two moral dilemmas that required them to choose whether to sacrifice one person's life to save five others. Before answering, participants read a statement provided by ChatGPT arguing either for or against sacrificing one life to save five. Statements were attributed to either a moral advisor or to ChatGPT. After answering, participants were asked whether the statement they read influenced their answers.

The authors found that participants were more likely to find sacrificing one life to save five acceptable or unacceptable, depending on whether the statement they read argued for or against the sacrifice. This was true even when the statement was attributed to ChatGPT. These findings suggest that participants may have been influenced by the statements they read, even when they were attributed to a chatbot.

Eighty percent of participants reported that their answers were not influenced by the statements they read. However, the authors found that the answers participants believed they would have provided without reading the statements were still more likely to agree with the moral stance of the statement they did read than with the opposite stance. This indicates that participants may have underestimated the influence of ChatGPT's statements on their own moral judgments.

The authors suggest that the potential for chatbots to influence human moral judgments highlights the need for education to help humans better understand artificial intelligence. They propose that future research could design chatbots that either decline to answer questions requiring a moral judgment or answer these questions by providing multiple arguments and caveats.
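As a rough illustration of the second proposal, the hypothetical wrapper below answers ordinary questions normally but responds to detected moral-judgment questions with multiple arguments and caveats rather than a single stance. The keyword heuristic, function names, and canned reply are assumptions made for this sketch, not anything described in the study.

```python
# Sketch: a chatbot wrapper that declines to take sides on moral-judgment
# questions, instead returning balanced arguments and caveats.
# The marker list and all names here are hypothetical.
MORAL_MARKERS = ("is it right", "is it wrong", "should i", "morally", "ethical")

def generate_reply(question: str) -> str:
    # Placeholder for a real model call (e.g., the client sketched above).
    return "An ordinary, non-moral answer."

def answer(question: str) -> str:
    """Route moral-judgment questions to a balanced, non-committal response."""
    if any(marker in question.lower() for marker in MORAL_MARKERS):
        return (
            "This is a question of moral judgment, which I will not decide for you. "
            "Arguments for: ... Arguments against: ... Caveats: ... "
            "The judgment is yours to make."
        )
    return generate_reply(question)

print(answer("Is it right to sacrifice one life to save five?"))
```

A deployed version would need a far more robust classifier than keyword matching, but the routing structure is the point: the moral branch never commits to a stance.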

More information: Sebastian Krügel et al, ChatGPT's inconsistent moral advice influences users' judgment, Scientific Reports (2023). DOI: 10.1038/s41598-023-31341-0. www.nature.com/articles/s41598-023-31341-0

Journal information: Scientific Reports

Citation: ChatGPT statements can influence users' moral judgments (2023, April 6) retrieved 23 September 2023 from https://phys.org/news/2023-04-chatgpt-statements-users-moral-judgments.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
