
Study: More complaints, worse performance when AI monitors employees

Surveillance tools. Credit: Pixabay/CC0 Public Domain

Organizations using AI to monitor employees' behavior and productivity can expect them to complain more, be less productive and want to quit more—unless the technology can be framed as supporting their development, Cornell research finds.

Surveillance tools, which are increasingly used to track and analyze behaviors ranging from vocal tone to verbal and written communication, cause people to feel a greater loss of autonomy than oversight by humans, according to the research.

Businesses and other organizations using the fast-changing technologies to evaluate whether people are slacking off, treating customers well or potentially engaging in cheating or other wrongdoing should consider the technologies' unintended consequences, which may prompt resistance and hurt performance, the researchers say. They also suggest that organizations can win buy-in if the subjects of surveillance feel the tools are there to assist rather than judge their performance, since workers fear automated assessments will lack context and accuracy.

"When and other advanced technologies are implemented for developmental purposes, people like that they can learn from it and improve their performance," said Emily Zitek, associate professor of organizational behavior in the ILR School. "The problem occurs when they feel like an evaluation is happening automatically, straight from the data, and they're not able to contextualize it in any way."

Zitek is the co-author of "Algorithmic Versus Human Surveillance Leads to Lower Perceptions of Autonomy and Increased Resistance," published June 6 in Communications Psychology. Rachel Schlund, Ph.D. '24, is the first author.

Algorithmic surveillance has already induced backlash. In 2020, one company swiftly dropped a trial of productivity software that monitored employee activity, including alerting workers if they took too many breaks. Schools' monitoring of virtual tests during the pandemic sparked protests and lawsuits, with students saying they feared any movement would be misinterpreted as cheating.

On the other hand, people may see algorithms as more efficient and objective. And research has found that people are more accepting of behavior-tracking systems such as smart badges or smartwatches when they provide feedback directly, instead of through someone who might form negative judgments about the data.

In four experiments involving nearly 1,200 total participants, Schlund and Zitek investigated whether it matters if people or AI and related technologies conduct surveillance, and if the context in which it is used—to evaluate performance or support development—influences perceptions.

In the first study, when asked to recall and write about times when they were monitored and evaluated by either surveillance type, participants reported feeling less autonomy under AI and were more likely to engage in "resistance behaviors."

Next, simulating real-world surveillance, a pair of studies asked participants to work as a group to brainstorm ideas for a theme park, then to individually generate ideas about one segment of the park. They were told their work would be monitored by a research assistant or AI, the latter represented in Zoom videoconferences as "AI Technology Feed."

After several minutes, either the human assistant or "AI" relayed messages that the participants weren't coming up with enough ideas and should try harder. In surveys following one study, more than 30% of participants criticized the AI surveillance compared to about 7% who were critical of the human monitoring.

"The reinforcement from the AI made the situation just more stressful and less creative," one participant wrote.

Beyond complaints and criticism, the researchers found that those who thought they were being monitored by AI generated fewer ideas—indicating worse performance.

"Even though the participants got the same message in both cases that they needed to generate more ideas, they perceived it differently when it came from AI rather than the research assistant," Zitek said. "The AI surveillance caused them to perform worse in multiple studies."

In a fourth study, participants imagining they worked in a call center were told that humans or AI would analyze a sample of their calls. For some, the analysis would be used to evaluate their performance; for others, to provide developmental feedback. In the developmental scenario, participants no longer perceived algorithmic surveillance as infringing more on their autonomy and did not report a greater intention to quit.

The results point to an opportunity for organizations to implement algorithmic surveillance in ways that could earn subjects' trust instead of inspiring resistance.

"Organizations trying to implement this kind of need to recognize the pros and cons," Zitek said. "They should do what they can to make it either more developmental or ensure that people can add contextualization. If people feel like they don't have autonomy, they're not going to be happy."

More information: Rachel Schlund et al, Algorithmic versus human surveillance leads to lower perceptions of autonomy and increased resistance, Communications Psychology (2024). DOI: 10.1038/s44271-024-00102-8

Provided by Cornell University

Citation: Study: More complaints, worse performance when AI monitors employees (2024, July 2) retrieved 2 July 2024 from https://phys.org/news/2024-07-complaints-worse-ai-employees.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
