Jasmine McNealy Pens Article on the Possible Impacts of Algorithmic Systems
Jasmine McNealy, associate professor of Media Production, Management, and Technology and associate director of the Marion B. Brechner First Amendment Project at the University of Florida College of Journalism and Communications, is the author of “Before the Algorithm, What’s in the Imagination?” published in Interactions, a publication of the Association for Computing Machinery, Volume 29, Issue 3.
In the article, McNealy focuses on algorithmic systems, which she defines as “ways of organizing, clustering, arranging, and classifying concepts and of establishing complex relationships between them.” She shares insights on mitigating harm at the beginning of system creation, arguing that the difficulty of investigating threats should not deter efforts to mitigate possible harms.
She writes that several questions about a technology’s possible impacts, including the effects of encoding ideals, should be considered before it is created and deployed. They include:
- What is this supposed to solve?
- What and who is the product, service, or process?
- Who is or is not supposed to be included?
- Who is responsible for inclusion or exclusion?
“Instead of the ‘supposed to’s,’ system creators and deployers should consider the ‘must be’s,’ central factors related to algorithmic systems,” writes McNealy. “The first ‘must be’ is an identification of the ideal, and then a reorientation from a system that assesses only proximity to the ideal/deviance to one focused on possible impacts to the most vulnerable. But this must happen before the system is built, at the ideation stage, and continue throughout the iterative creative process.”
She adds, “Of course, there must be continuous evaluation and auditing of algorithmic or decision systems. But auditing and transparency are reactive; we need proactive policy requiring system creators to meet safety and impact standards set with the input of community and advocacy organizations. At the same time, we should not be attempting to assess acceptable levels and kinds of harm, acceptable loss of life, or loss of opportunities. No harm is the ideal; as little harm as possible is the goal.”
According to McNealy, “It is not enough to recognize harm after the threat has been realized. Instead, because algorithmic systems have the potential for such consequential and long-term impacts, creators must be responsible for predicting the possible outcomes, then imagining and creating something different. There is still a lot to be done.”