Study: How Non-Experts Share Information About Technology Risks
A new study has found that cultivating trust in artificial intelligence (AI) requires understanding not only the perspectives of experts and policymakers, but also how the general public perceives and shares news about AI.
The findings by Jay Hmielowski, assistant professor of public relations at the University of Florida College of Journalism and Communications, and colleagues appear in “The Indirect Effects of Episodic-Thematic Framing on Information Sharing About the Economic Threat of Artificial Intelligence,” published in Communication Studies on Sept. 13.
The authors tested a model in which framing strategies predict online sharing of AI information indirectly, through psychological proximity to AI’s impacts and perceived AI risk.
The researchers write: “Similar to other media-effects studies, we found that people are activated to share information about technological risks with others online when they perceive the risk as having potentially severe consequences, and themselves as being susceptible. Through the lens of the empathy prospect model, perceiving such heightened risk possibly signals serious need, motivating individuals to engage in helpful social behavior like sharing information to warn others of potential hazards.”