There is No Trolley Problem
On May 13, Jasmine McNealy, associate professor of telecommunication in the University of Florida College of Journalism and Communications and associate director of the Marion B. Brechner First Amendment Project, discussed ethical issues related to AI at UF’s AI Town Hall. A transcript of her remarks appears below the video.
Hello, my name is Jasmine McNealy. I’m an associate professor in the College of Journalism and Communications here at the University of Florida.
I want to submit to you a provocation related to artificial intelligence and society. We’re talking about the advances that are possible and the research that’s going to happen, but also the implications for society of all the things that we call artificial intelligence.
I want to leave you with this provocation right here, and that is: there is no trolley problem. The trolley problem is one of those thought experiments from ethics and psychology, which asks what you would do in a situation where a technology, a trolley, goes out of control. Would you kill five people or would you kill one person? I submit to you that you shouldn’t be placing a trolley that you know could go out of control on the tracks in the first place.
That’s what we have with many of the applications of artificial intelligence. Let me pose to you, or show you, three examples where artificial intelligence, algorithmic systems, machine learning systems are going out of whack, and we have the research that says so.
Number one, last year, because of the pandemic, large-scale in-person testing wasn’t possible. This happened in the UK, where exams like the GCSEs and the A levels are really important for people’s lives. Instead, they used an algorithm, and they skipped over the grades that teachers gave, on the theory that those might be biased. Now, the implications were huge. We know that many students got downgraded, losing spots.
A second example is the facial recognition used by several states during the pandemic, of course, that tries to match your face to your ID to make sure you’re not committing unemployment fraud. The problem is that facial recognition technology is error prone, particularly for people with darker skin, for people who appear more feminine, and for certain age groups, like 18 to 30. There’s research to back this up.
Finally, let’s not leave health out. There are healthcare algorithms being deployed to determine who gets treated first, who gets triaged first, who gets placed where on things like transplant lists. But they’re based on data drawn from systems that are already traditionally biased against particular kinds of communities. The algorithm, of course, ends up making really racist determinations.
I submit to you there are other questions that we need to ask. Number one, what is the actual problem?
Two, in attempting to deploy technology like artificial intelligence, are you just creating more problems? We want to solve things. We want to use technology, presumably, to make things more effective and efficient, but does it actually just hurt more people? Does it actually just create more problems, even if it is a shiny new toy?
What are the actual outcomes? Look, all of the research that has come out related to artificial intelligence, and to other technologies, has said: even before we put technology into place, society has social problems. The question is, how does technology amplify those problems?
There are several studies, lots and lots of studies, that will give you the answer. Before we deploy a technology, even if that technology seems promising, we need to step back and look at those studies.
Finally, and probably most importantly, we need to look at who is being neglected by the use of the technology. What is the human impact? The deployment of any kind of technology has its rationale, has its audience so to speak, but who’s left out of that audience? And even if people are left out, they can still be impacted by the use and deployment of that technology.
Whether it’s us as a university, or some other government entity, or a corporation, or civil society, we need to be really careful about the deployment of artificial intelligence systems. Even if we say this is for the best, we really need to answer the question: the best for whom? And the best in relation to whom? But more importantly, we need to find out the impacts.