AI at CJC

New Research Will Explore How Marginalized Communities Are Addressing AI

Jasmine McNealy

With a 2020 Google Award for Inclusion Research in hand, Jasmine McNealy is in the preliminary stages of a new research project studying how marginalized and vulnerable communities are proactively educating themselves about the opportunities and threats of artificial intelligence (AI).

McNealy, associate director of the Marion B. Brechner First Amendment Project and associate professor in the Department of Telecommunication, has spent years researching privacy issues related to AI and emerging technologies, and has worked on a project examining emerging surveillance tools. “AI is a kind of surveillance,” she said.

“With now almost normalized use of AI and machine learning and decision systems for various kinds of major life and societal things, it’s a perfect progression from privacy study to thinking about what artificial intelligence means for people,” she said.

She also wants to know what people are doing to “ensure the outcomes they are getting with respect to systems deployed on them, for example AI used to determine loans, health care or admissions. We have seen really discriminatory outcomes from AI. What are these communities doing to help each other against these kinds of systems?”

“We start from a deficit in thinking about the human aspects of this and what it means for society. We have to play catchup. I think we are behind the ball in considering first the possible harms and then whether we even want to have technology like this.”

(In a separate article, McNealy argues that emerging information technology is amplifying “otherness” through neglect, exclusion, and disinformation, all of which have significant consequences.)

In her latest study, McNealy, who also is a Consortium on Trust in Media and Technology Scholar, is exploring how these vulnerable communities are organizing, how they are teaching themselves about AI, what they consider AI to be, and how they are sharing that knowledge and warning others.

She plans to conduct interviews and hold workshops to understand where people stand and what practices they already have in place to protect themselves from such systems. She hopes to uncover where people think these systems are deployed and how they think they are being used against them.

“A really important aspect of doing this is that it matters, and it makes a difference,” she said. “I’m particularly interested in … doing research relevant to how people actually live their lives, navigate and experience the world, and how we can make that a better experience, particularly for marginalized and vulnerable communities.”

If we focus our energy on those communities and make life better for them, “it’s better for all of us.”

She hopes her research will be studied and used by:

  • Communities and community organizations that might apply some of the activities that result from the study.
  • Advocates and policymakers who are studying the concerns of people touched by algorithmic systems to make better policy and law related to AI.
  • Organizations, governments and corporations that can “modify, change or stop certain uses of technology based on possible harm that are coming out of those uses of technology.”
