Episode Transcript
[00:00:00] Speaker A: Imagine a justice system built on rigorous evidence, not gut instincts or educated guesses about what works and what doesn't.
More people could access the civil justice they deserve.
The criminal justice system could be smaller, more effective, and more humane.
The Access to Justice Lab here at Harvard Law School is producing that needed evidence. And this podcast is about the challenge of transforming law into an evidence based field.
I'm your host, Jim Greiner, and this is Proof Over Precedent.
This week we're bringing you a student voice.
[00:00:38] Speaker B: Hello everyone. Welcome back to the Proof Over Precedent podcast, which is a part of the Access to Justice Lab at Harvard Law School.
My name is Leanne Poarch and I'm a current 1L at HLS. And today I have the pleasure of interviewing my friend Strong Ma on his recent blog post, where we talk a lot about the public perception of fairness with recent developments in technology and the law. And so, Strong, thanks for being here today. Would you like to introduce yourself?
[00:01:14] Speaker C: Yes, of course. Thank you so much for talking to me about my blog post.
Hi everybody, I'm Strong. I'm a 2L at Harvard Law School, also a part of the Access to Justice Lab. I'm really excited to talk to you about this blog post because I'm just generally interested in this nexus between emerging technologies and the practice of law. Also, I actually wrote my first blog post on how these emerging technologies and the public perception of them affected the availability of various reforms to the justice system that California proposed.
[00:01:45] Speaker B: That is all such fascinating information and I really loved reading your blog post. I think it just covers a really fascinating relationship between public perception and justice and how you really need to have both of those come together in order to make any meaningful change. And so I think that's a really timely conversation with all of the developments in the space. And so I'm excited to dive into it with you today. So this New York Times article that really inspired this blog post, would you mind giving us a little bit more details into what happened and how that relates to the legal field and access to justice?
[00:02:29] Speaker C: Yeah, of course. So this New York Times article was very interesting. It was basically an investigation into the increasing use of departures from the organ transplant list. Essentially, the organ transplant list has always been an algorithmically decided list. Usually the ranking on the list is what controls the order in which the organ is offered to people. But the New York Times article found that there's been an increase in the past few years of human-made decisions that depart from the list. And these decisions were justified by saying that they needed to get organs to people faster. The New York Times interviewed folks for the article, and the reactions were sort of across the board that this human decision making is actually incredibly unfair. That we have this list, and departing from it based off human decisions felt very, very unfair, especially for those who were skipped.
[00:03:28] Speaker B: That's a really fascinating thing that the New York Times and also you pulled from this data set.
How have you seen kind of the response to the use of algorithms in medicine? What are people doing and maybe how are people responding to it?
[00:03:45] Speaker C: Yeah. So, the reaction to the use of algorithms and AI in medicine writ large, not just organ donations.
Looking into it, there's been a handful of public opinion polls and sort of small scale research studies on how people perceive them. And the results to me are pretty surprising, especially because of the seriousness of these decisions. Right. It seems that there is a decent amount of support for their use and the perception of their use as being sometimes more fair than humans or even often more fair than humans.
[00:04:19] Speaker B: That's really fascinating.
Especially that you see the majority of people being receptive to, if not excited by, the use of AI and algorithms in the medical field.
[00:04:32] Speaker C: Yeah, exactly.
And I think that's sort of one of the reasons why I find the resistance to AI in the legal context super interesting too.
Especially because, in comparison with the law, intuitively you would think that medicine has higher stakes for people than legal consequences. Obviously both have very high stakes, but when I was researching this blog post, almost every study I found about how people perceive the use of AI in the law was much more negative than in medicine.
You have tons of studies showing that, for bail decisions and early prison release, the majority of people viewed algorithms to be less fair than humans.

I think that's the general landscape: a lot of studies that are quite negative on the use of AI in the law. There is one interesting study I wanted to focus on, which was a 2022 study polling American preferences for algorithms versus humans as decision makers in different contexts.
And that one was interesting because it directly compared a hypothetical medical context and a hypothetical legal context.
Basically it asked respondents, would you rather have an AI or a human decide whether you're going to be entered into a clinical trial for a medical therapy that you need, as well as whether you're legally liable for a civil traffic offense and like a pretty high traffic offense as well.
[00:06:02] Speaker B: That's really fascinating.
That's just a great concept for a study. I'm curious, what were the results of it?
[00:06:09] Speaker C: So that study actually saw a majority prefer algorithms across the contexts overall, 52.2%. In the medical context specifically, it was slightly lower, 50.2%, but that's still a majority.
And then you could directly see that in the legal context, it was lower than medical context. Only 44% preferred algorithms in a legal context.
So one point is that this study is a piece of evidence for my broader thesis that there is an actual distinct difference between how we perceive the use of AI in medicine and in law, with lower trust in law. And the second point is that even in the legal context, 44% preferred algorithms. And that's, you know, quite a high number considering all these other studies showing that people didn't trust algorithms to be fair enough.
[00:07:05] Speaker B: That's really fascinating findings. And I think something that is great context, as we think of possible reforms to challenges we face in the legal field, it just might take some reshaping of the narrative.
[00:07:20] Speaker C: Yeah, for sure. And, you know, speaking of narratives and perceptions, I actually would really love to spend the last part of this podcast just sort of thinking about how we perceive items ourselves. Right. I was writing a lot about public opinion polling in these studies, but I would love to get sort of your perspective on it.
And before I do that, actually, I got to have a short conversation with Professor Greiner.
[00:07:46] Speaker D: Wonderful.
[00:07:47] Speaker C: Yes. For those listening, Professor Jim Greiner is a professor here at Harvard Law School. He also runs the Access to Justice Lab. And so I had a really great conversation with him about how he perceives the fairness of algorithms. So let's take a listen to that.
Hi, Professor Greiner. I really appreciate you taking some time off to speak to me about my blog post and generally just give your thoughts on your perception of the fairness of algorithms and its use in the law and other contexts. So I just wanted to ask, sort of in general, how do you think about the use of algorithms and AI in decision making systems in comparison to humans?
[00:08:31] Speaker D: Yeah, I guess what I think of them so far, from what I can tell is, and my views might change on this as more evidence emerges, but what I think of them right now is that algorithm decision making systems are terrible, horrible, no good, and very bad.
And they're so terrible, horrible, no good, and very bad, that they're almost as bad as unguided human decisions, which is what we would use if we didn't use algorithms. And so to me, the real question is, which poison would you rather drink?
And so far from what I can tell, there's not a lot of difference between the two.
But of the two, algorithmic decision making systems are easier to fix.
So both systems start out pretty badly broken, but one of them is easier to fix.
Not that we've mastered the art of fixing them in all situations yet, but I'm more optimistic that we can fix the algorithmic decision making systems as opposed to human ones.

One thing that you might be able to do is, if you have a human decision maker with some kind of override, or some kind of appearance or actuality of final decision making authority, you might be able to preserve some of the non-accuracy benefits of having a human decision maker, such as responsibility, or the ability to plead to the decision maker, as a kind of procedural due process type argument. That might avoid infantilizing the human race, because if we never give human beings anything consequential to do, then we'll probably stink at making decisions because we're out of practice, and that would be really bad. Right?
[00:10:17] Speaker C: Yeah.
[00:10:18] Speaker D: And so you might be able to preserve some aspects of that if you keep a human in the loop in some consequential way, maybe even as the final decision maker.
So again, there's a lot to be explored there.
[00:10:29] Speaker C: Yeah, for sure.
I think that's totally interesting, just figuring out where the human element comes in in a system where ostensibly we feel like we can systematically fix some things with more automated decision making. I suppose I have one final question, or ask for thought, from you: why do lawyers, law students, and legal commentators in some instances vigorously reject the use of algorithms and AI, even when there may not be the most solid or concrete evidence counseling against them?
[00:11:02] Speaker D: Yeah, I don't have any evidence to share with you. My speculation is that it's a combination of a feeling of loss of control and then along with that a feeling that something is threatening your identity.
Because what we revere in law, and I think what people who go into law tend to revere, so not only do we sort of build it, but we also attract people who think this way already, is this idea of professional judgment. We revere this idea of human professional judgment, the idea that a human mind can assimilate a bunch of facts, apply experience, and apply deep learning to produce a good solution to solve a problem.
And if it turns out that all that is is just something that we can put into a machine and have a machine do the same thing, then we're not all that special.
[00:12:07] Speaker C: Yeah.
[00:12:07] Speaker D: And I think for many people it's actually more the loss of the feeling that we're something special than it is an economic threat. I think it's actually deeper than that.
I think it's more than just economics. Right.
[00:12:22] Speaker C: Yeah. Something almost even like, similar to the response to AI generated art and commissioned artists. Right. And there is definitely the economic portion of it, but also, again, that. That feeling of there's something uniquely special in a human about that ability to perform that work.
[00:12:41] Speaker D: Right.
[00:12:43] Speaker C: Yeah, I think that totally makes sense.
[00:12:44] Speaker E: Yeah.
[00:12:45] Speaker D: And so it feels like some of the resistance may be coming from that identity threat as well, you know, because if we're not special at human judgment in tricky, consequential situations, who are we? Right. And then we have to figure out who we are.
[00:13:02] Speaker C: Yeah.
[00:13:03] Speaker A: All right.
[00:13:04] Speaker C: Thank you so much, Professor Greiner. It was great talking to you about this.
[00:13:06] Speaker D: Great talking to you too.
[00:13:13] Speaker C: I also got the opportunity to have a conversation with another HLS professor, Professor Jon Hanson. His scholarship focuses on systemic justice, the legitimacy of the legal system, and the connection between the law and the mind sciences. So I thought he offered a really great perspective and thoughtful critique of the use of algorithms and AI in the legal system while still remaining open minded to their potential for reform.
Hi, Professor Hanson. Thank you so much for taking some time to join me and talk a little bit more about the subject of my blog post, the perceived fairness of algorithms. So, just to start off, how do you feel about algorithms or AI deciding important questions in the law, at least compared with humans? Just sort of your gut reaction to their fairness.
[00:14:04] Speaker E: I'm without much expertise, or any expertise, in this area. As just a human on the planet, I feel like AI is frightening as a technology, in part because of its power and in part because I don't think we know fully what its consequences will be, not just on decision making, but on our culture, on the practice of law, on a whole bunch of questions where I think it matters. And to the extent that I see AI in other contexts, I see what it's producing as problematic. And it's hard not to imagine that it will manifest in ways that I haven't anticipated, so I have my guard up.
[00:14:46] Speaker C: Yeah.
[00:14:46] Speaker E: With that said, I also feel like the legal system, as a starting point, is very biased and tends to produce in concealed ways many of the systemic problems that we face, serving as a kind of quiet or invisible conduit for existing power.
And it feels to me like AI's problems are several in this relationship. One is that, to the extent that AI simply reproduces many of those harms, it will do so in a way that gives perhaps the public, and to your point, the decision maker, a false sense that they are being neutral or unbiased.
I think that the legal system is already dressed in garb and masks and various facades intended to create the impression that the legal system is neutral. Really important decisions are being made, having real impact on people's lives, and that creates a discomfort on the part of those who are subject to those outcomes and those deciding them.

And I worry about AI serving as a kind of palliative, another way in which the law will assure itself that there is nothing important to see here. My concern really is that the legal system is not sufficiently self-examined, and AI, I think, could worsen those types of problems.
[00:16:36] Speaker C: One more tool in the toolbox to obscure the real systemic underlying injustices that are happening. I think that's very fair, and I'm sort of with you on that one as well. Just as a final thought, what you mentioned about AI or algorithms reproducing the same problems that humans have when making decisions is something that I very much agree with and have seen while doing research for my blog post.
One sort of response is that it may reproduce the same problems, but those problems, at least for algorithms, may be easier to fix. If you understand the biases, you can tinker with the system, and with algorithms you may be able to do it on a broader scale, more controlled, and you might have more data on how it might work.

And that's sort of the argument for these decision making systems: they may be just as bad now, but you might be able to fix them more easily, or at least more consistently.
[00:17:36] Speaker E: I like that as an argument. I find it quite appealing, and it very well may be true. As I think about many of the injustices that have plagued us for a long time, and arguably at increasing rates of late, I do see that the legal system is part of the problem, but it's ill equipped to respond, in part because many of the biases happen in kind of one-off decisions where the reasoning is case specific. It's only when you step back to see across the landscape of many of these individual decisions that you see the biases unfolding in terms of larger consequences. And to your point, perhaps AI is one of the ways in which, where we see those larger consequences, we can tweak the algorithm and make a kind of structural change, removing the individualization of the problem, to produce better consequences going forward. And that does strike me as appealing.

But each time we make that adjustment to reassure ourselves that we are solving the bias, I do worry that we heighten the legitimacy of the system and fail to look with as much suspicion the next time.

The other thing I will say is that I think a lot of the biases are deeper than most of the proponents of AI likely imagine. And I worry that if we turn our system over to people who are technologically sophisticated but have not really devoted themselves to thinking about these issues and how they unfold in subtle ways through our legal system, they will use their sort of authority, their technical know-how, and the kind of obscurity that that creates in the system to reassure the public that there's nothing to see here, and believe it themselves. In that way too, I think AI reproduces some of the problems of the legal system itself in its inaccessibility.
Not just who can get a lawyer, but who can even understand what's going on.
[00:19:56] Speaker A: Yeah.
[00:19:56] Speaker E: And the faith that we ask the public to take in what we're doing, that's a problem. I don't want to see that enhanced.
[00:20:02] Speaker C: Yeah, that's incredibly interesting. Thank you. Well, again, thank you so much for spending some time talking about this. I think these are super important perspectives that I personally was reaching towards but missing such a good articulation of. So I really appreciate you taking the time to talk to me about it.
So, coming back to our conversation, Leanne, after hearing these different perspectives and trying to meld them together, I really wanted to end this last part of the podcast by asking you how you feel about the role of algorithms and AI in the legal system.
[00:20:41] Speaker B: Yeah, absolutely.
So, full disclosure, I think a lot of my perspective has been shaped by Professor Greiner, both sharing his views on algorithms compared to human decision makers in the Access to Justice Lab, but also in criminal procedure and different things like that. I think it's really insightful what he said about algorithms being horrible decision makers, perhaps almost as bad as humans. I think that kind of sums up a lot of my view on the topic.
I think writ large, algorithms make a lot more sense. It's something you can trust, it's predictable, it's far more efficient.
No one has time to wait around for their trial. Doing them via AI, or at least having AI alleviate some of the blockage in the system, I think would be a great benefit.
[00:21:45] Speaker C: One of our classmates, Michael, wrote a blog post on how Brazilian courts are currently taking advantage of AI to try to clear up some of their very overcrowded dockets. So that's already happening.
[00:21:57] Speaker B: Absolutely. And Michael did an excellent job at covering that topic. And so everyone should go read that blog post as well. But even faced with these great efficiency benefits, kind of my gut reaction to hearing Michael's research was, you know, if I was the defendant, I feel like there's a part of me that could play the adversarial system to my advantage to gain a better result.
And so I think that's a very American instinct. It might play a little bit into why you see, with an American audience, a little bit more of an "I'd rather have a human decision maker than a computer" than maybe in some other cultural contexts.

But all in all, I would probably prefer the algorithm, but there is that little hubris in me there.
[00:23:00] Speaker C: Yeah, totally. I think that's totally fair and sort of something that I was thinking about a lot as well. We've set up this system to be very adversarial, and I think it's very human to want to be able to have a human to convince in some ways that may not always make the right decisions.
And a whole part of the reason we're in law school is this sort of mythologizing of the role of the lawyer, the importance of every case and each fact, and being able to argue everything. And algorithms boil it down to something a lot simpler than that. And perhaps that's sort of why we're averse to it.
[00:23:38] Speaker B: I think also it's the art of advocacy, the skills you learn in law school, or even just wanting to be able to advocate for yourself in front of a human decision maker.
There's a lot of control that you're able to have over the situation. For the majority of defendants, it's still not a lot of control, but, you know, I think it kind of maybe plays into the notion of, like, you have your day in court, which might be lost slightly if you were facing an algorithm.
[00:24:13] Speaker C: Right. I think that's totally correct. And just to put it all together with both Professor Greiner's and Professor Hanson's perspectives, I think, to me, AI has a lot of potential, but also a lot of risks associated with it that we need to think really hard about. And the potential for AI to fix things and be more fair is exactly why it also has the potential to be so dangerous. Right. If we use it uncritically, we might just be putting it out there, saying we've made the legal system more fair, when it's actually obscuring or obfuscating real systemic issues that still exist. And so we want to be really thoughtful about who fixes AI, what the human role is in it, how we're trying to fix AI, and how transparent it is to the public, so that we have both a more transparent legal system and a more accountable one.
But just to close out, I really want to thank you, Leanne, for having a conversation with me on this and being such a thoughtful partner in reflecting on these issues.
[00:25:19] Speaker B: Absolutely. No, thank you for coming on the podcast, Strong.

This is absolutely fascinating research, and I hope this isn't the end of our conversation and your work on this question.
[00:25:39] Speaker A: Proof Over Precedent is a production of the Access to Justice Lab at Harvard Law School.
Views expressed in student podcasts are not necessarily those of the A2J Lab.
Thanks for listening.
If we piqued your interest, please subscribe wherever you get your podcasts. Even better, leave us a rating or share an episode with a friend or on social media.
Here's a sneak preview of what we'll bring you next week.
[00:26:05] Speaker B: So if the study turns out the way you hope, what policy effects do you hope will come about?
[00:26:16] Speaker F: I think a few things, hopefully. One is if we show that we are reducing unnecessary DSS referrals, that is important for the Department of Social Services because they're already a very burdened system. The child welfare system is incredibly overburdened by these unnecessary referrals, right?
Large percentages, as you know, of referrals to the Department of Social Services for abuse and neglect are just not necessary. They're unsubstantiated, but DSS still has to process them.