
Dr. Jasmine McNealy: Data Trusts, Equitable AI and Amplifying “Otherness”

Jasmine McNealy, associate professor of telecommunication and associate director of the Marion B. Brechner First Amendment Project at the University of Florida College of Journalism and Communications (UFCJC), was interviewed on Sept. 16, 2020, about her research on data trusts for the Consortium on Trust in Media and Technology. She also discussed equitable AI, communicating AI, and how technology amplifies “otherness.”

Below is an edited transcript of that interview.

 

Q: You’re working on research for the Consortium on Trust in Media and Technology, looking at the infrastructure of governing data specifically as it relates to data trust. Why is developing infrastructure for governing data important?

Jasmine McNealy: What matters about infrastructure for data, whether it takes the form of a data trust or any other kind of data governance structure, is that infrastructure allows an organization or an individual to demonstrate their values as they relate to data.

Currently, several organizations have what I would call monopolies over data: its collection, aggregation, use, and access. The infrastructure they have created demonstrates their values, and many of those values have to do with making money, but also with surveillance. So, what does that mean for the people implicated by the data in those systems?

Looking at different forms of governance infrastructure allows us to see the different ways organizations and people frame their values with respect to governance, and the implications for different kinds of people. It allows us to make recommendations about what is best for people, or at least what mitigates the disparate negative impacts that current infrastructure and governance schemes are having on various communities.

Q: What is the nature of your research related to this?

McNealy: My research is multidisciplinary. It’s legal research, but it’s also social science research. I look at cases, legislation, and policy, but I also look at how people and organizations actually behave with respect to data, data collection, and technology. Doing so allows for a richer discussion of what’s happening, a better understanding than the purely normative view of law, which says, “Don’t do this, don’t do that, or we’re going to punish you.” We know that people don’t behave like algorithms. We know that people don’t behave rationally. Therefore, we have to look at how people actually behave to see how law and policy can best function.

Q: How does this relate to trust in media and technology?

McNealy: We currently have a dearth of trust in media and technology. That’s because media organizations, technology organizations, and governments that use and deploy technology have not communicated to people about how the technology is being used and what data is being collected. They also don’t understand how people actually behave, or they deceptively use how people behave against them: to collect data, to surveil them, to make decisions about them that have lasting impacts. It’s important, then, to find ways for people to participate more in the governance process around data and technology, and also to make major changes in these organizations that have enormous power over how we live our lives.

Q: What other areas of privacy are you exploring in your research?

McNealy: There are a lot of areas connected to privacy that I explore in my research. Privacy is one aspect. But perhaps the overarching question is: how do we get law and policy to better reflect how media and technology are emerging, and to better protect people when emerging schemes or technologies have possible negative impacts across all kinds of contexts?

That matters whether the context is health or genetics, corporate or organizational forms like social media, or government. Say a city moves to a cashless public transportation system. What are the implications of that? I look at how people actually behave, what that means for technology and data collection, and then at how we mitigate or, quite frankly, stop these disparate negative impacts.

Q: You recently co-edited a special issue of Communication Studies that focused on communicating artificial intelligence. What were some of the key insights for you that came out of that special issue?

McNealy: One great thing about that special issue is that we got an international roster of authors and contributors. We had people from South Korea, from Spain and other parts of Europe, and from the United States. That allowed us to look at many different contexts for artificial intelligence and communication. We had work on how journalism frames artificial intelligence and how people react to its coverage, including coverage in the Dutch press. We had articles about chatbots, which is always interesting, I think, for students in colleges of communication, because chatbots are something people can build themselves rather easily right now.

I think all of the articles are important and interesting because they offer insights into different aspects of artificial intelligence, which is a broad field right now. What we wanted to do was offer a sampling of topics at the intersection of artificial intelligence and communication studies, one that lets many different people across the broad field of mass communication or communication studies participate and contribute.
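On the point that chatbots are something students can build themselves: a minimal keyword-matching chatbot really can fit in a few lines of Python. The sketch below is purely illustrative and not from the interview; the keywords and canned responses are invented for the example.

    # A minimal rule-based chatbot: match a keyword in the user's
    # message and answer from a small table of canned responses.

    RULES = {
        "hello": "Hi there! What would you like to talk about?",
        "privacy": "Data privacy is a big topic. What worries you most?",
        "bye": "Goodbye!",
    }
    DEFAULT = "Tell me more."

    def reply(message: str) -> str:
        """Return the response for the first keyword found in the message."""
        text = message.lower()
        for keyword, response in RULES.items():
            if keyword in text:
                return response
        return DEFAULT

    if __name__ == "__main__":
        # Simple read-reply loop; type something containing "bye" to stop.
        while True:
            user = input("you> ")
            print("bot>", reply(user))
            if "bye" in user.lower():
                break

Real chatbots replace the keyword table with language models or intent classifiers, but the basic loop of read, match, respond is the same.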

Q: You are serving on a UF task force on Equitable AI. What does Equitable AI mean to you and what should industry or the academy be doing about that?

McNealy: Here are my thoughts on equitable artificial intelligence. You can’t start from the perspective that we need to make a technology equitable, because technology reflects society. It reflects its creators. It reflects the systems in which they work. It reflects the data on which it is trained. So when we start from “How do we make technology more equitable?” we’re already starting at a deficit.

The real question is: how do we look at the system in which the technology is going to operate and try to make that system more equitable? In my opinion, that’s what needs to happen. We can do all we want to clean up data, to make data less “garbagey.” But when we are creating something, we need to find out, iteratively, what system it is going to function in, and do everything we can to fix that system and, to a lesser extent, the technology. Of course, the technology is important, but it just amplifies an already, probably, messed-up system.

Q: In your article on “Amplifying Otherness,” you discussed how information technology is amplifying otherness through neglect, exclusion and disinformation. How is technology having a negative impact on certain communities?

McNealy: Technology is amplifying negative impacts on marginalized and vulnerable communities in several ways. We can look at its effects on elections in the United States and across the globe, and at the amplification of misinformation and disinformation campaigns on social media. While not everybody is on social media, social media can still have a significant impact on the spread and boosting of information. We’ve seen that have tragic consequences in Myanmar, with the massacre of the Rohingya, and in other places where social media has amplified disinformation and stoked hatred. The results have been tragic. People have died.

Artificial intelligence is also having an impact on racism, sexism, and homophobia. Amazon, for example, developed a hiring algorithm that jettisoned applications from women. Why? Because it had been built in such a way that men turned out to be the prototype, the kind of person the company was looking for. Think about things like bail, criminal convictions, loans, and mortgages: people are being negatively impacted by the use of algorithms in those systems.

So, we already have a problem. We insert technology, and it amplifies the problem, because people think, “Oh, technology is neutral,” and it’s not. And then people double down, not wanting to stop using the technology, because we like to be efficient. We like to use tools; that’s human nature. But these extensions of ourselves are making the problem worse, and we still haven’t fixed the underlying root cause. Until that happens, technology will continue to amplify the tragic and terrible events and systems we already have in place.

Q: Are there things that technology companies should be doing to try to mitigate that?

McNealy: One of the things I always say, whether I’m speaking to a product management class, an engineering class, or on a panel, is, first of all, that creating technology is an iterative process. We go through several rounds, and we keep trying to make the thing better.

From the outset, even at the ideation stage, we need to look at who is affected by the product, not just the intended audience for the product, technology, or service. That means asking who could possibly be touched by this technology, what can happen to them, and what our responsibility to them is with respect to those impacts.

Then we need to design based on those impacts. And, you know what, if a product through several rounds will always negatively impact groups of people, maybe that product shouldn’t be made. That’s something to consider, too. Just because it could be great for some people doesn’t mean it shouldn’t be prohibited, particularly if there’s a negative impact on other folks.

Q: What types of questions would you like to explore in your research going forward?

McNealy: There are a lot of things I’d like to explore, including how we can make good law and good policy with respect to technology, and how we can make technology actually work for people instead of against them.

I’m also interested in how we categorize organizations and what that means for law. Can we look at how certain categories of organizations get away with more than others, and what should be done about that? I’m also looking at questions like: how do we change our approach to the governance of data and technology? What needs to happen? What are we missing, ignoring, or neglecting that could help us govern these areas better? Those are pretty broad questions, but there’s a whole lot under them. I think it’s a rich area to explore.

Q: Why did you choose to pursue this line of research?

McNealy: I think it’s really important, with any kind of research, that we consider not just what we’re interested in, but also whether that research could have an impact on society beyond the academy. What we really want to know is what it means for people. At an institution like the University of Florida, a great big public land-grant institution, our mission is to create research that is public facing, that has impact, that changes lives. I don’t mean it has to be huge, but research matters, right?

With any research, and that goes for undergraduate students all the way up through faculty and staff, we should always be considering the implications of our work. How can we get it out to the public? How can we help make life better?

 

Posted: September 22, 2020