Study: AI Systems That Are Perceived to Be Helpful Lower User Frustration and Improve Trust
A new study has found that when users perceive an artificial intelligence (AI) system as a helper, their frustration decreases and their trust in the system increases. The findings by Won-Ki Moon, assistant professor of Advertising at the University of Florida College of Journalism and Communications, and a team of researchers are featured in “When the Machine Learns from Users, is it Helping or Snooping?” to be published in Computers in Human Behavior, Volume 138, January 2023.
According to the authors, “The current study reveals that an interface indication that the algorithm is learning from the user is a powerful cue: it not only increases trust and intentions for future use by triggering the helper heuristic (reducing feelings of frustration) but also avoids privacy concerns (by being transparent). However, the cue did not trigger the benefit heuristic, which is negatively associated with feelings of frustration.”
They add, “[The result] paves the way for research on people’s experiences of media systems that constantly learn from users. Furthermore, it promotes the design of more transparent AI systems that convey their benevolence and earn users’ trust in a socially responsible manner.”