Episode Transcript
[00:00:00] Speaker A: Imagine a justice system built on rigorous evidence, not gut instincts or educated guesses about what works and what doesn't.
More people could access the civil justice they deserve.
The criminal justice system could be smaller, more effective, and more humane.
The Access to Justice Lab here at Harvard Law School is producing that needed evidence. And this podcast is about the challenge of transforming law into an evidence based field.
I'm your host, Jim Greiner, and this is Proof Over Precedent.
This week we're bringing you a student voice.
[00:00:38] Speaker B: Hi, I'm Elizabeth Guo. I'm a student in the Access to Justice seminar at HLS. I'm joined today by my classmate Sydney Nelson. We'll be speaking about a blog post that I wrote, the first of two about artificial intelligence and the unauthorized practice of law, or UPL.
[00:00:54] Speaker C: Awesome.
[00:00:54] Speaker D: Thanks so much for having me, Elizabeth. Can you first share what inspired you to write about AI and UPL?
[00:01:01] Speaker B: Sure. Thanks Sydney. I've seen many news articles over the last year about lawyers using AI, judges using AI, and also pro se litigants using AI instead of using a lawyer.
And by the way, to clarify what I mean by AI, I'm using that as a shorthand for general purpose large language models or LLMs like ChatGPT or Claude. And the idea of pro se litigants using AI was interesting to me for a couple of reasons. So one, it could completely change the game for pro se litigation.
Ten years ago, these litigants were using Google, court websites, and maybe asking courthouse staff and friends and family to try to cobble together their court filings, but it could be really difficult to piece together all of the information they needed.
So 10 years ago, in 2016, there was a study called Cases Without Counsel that looked at self-representation in the US, and some of the participants said that they had found Google very helpful. But others said things like court websites were, quote, hard to navigate, or that when they went in person to ask court staff questions, they were told, quote, we're not giving legal advice. Well, now these litigants can turn to AI. And they are.
A 2025 NBC article interviewed a litigant who used ChatGPT to appeal an eviction and won. And she said, quote, I can't overemphasize the usefulness of AI. In my case, I never ever could have won this appeal without AI.
[00:02:34] Speaker D: Oh wow. So the litigant literally won because she used AI to appeal her decision. That sounds like it's completely changing the game.
[00:02:41] Speaker B: Yes, exactly.
So the second thing that I was interested in is that the very nature of these AI tools is fascinating. So you can pay for a premium version, but you can also use the free versions of ChatGPT, Claude, or other platforms. So it's low to no cost, they're easy to access, it's one website, and you can ask your questions using plain language. And these models are quite good and only getting better. There aren't as many legal benchmarks compared to benchmarks for math or programming, but GPT-4 already passed the bar exam back in 2023 and scored in the 90th percentile. That was three years ago. There's another benchmark called LegalBench that measures abilities like issue spotting, rule recall, legal outcome prediction, and interpretation. And as of February 2026, Google's Gemini 3 Pro and Flash, as well as GPT-5 and 5.1, were all scoring above 85%.
So that's where we are from a technological perspective.
And then the third aspect of this, I was wondering, well, legally, can you even use AI as a lawyer? Would the AI then be engaging in unauthorized practice of law?
[00:03:53] Speaker D: Okay, so unauthorized practice of law, or UPL. Can you walk us through what that means?
[00:03:59] Speaker B: Absolutely.
So all 50 states and D.C. have UPL rules, or rules against the unauthorized practice of law.
In broad strokes, that means you can't practice law if you're not a lawyer.
But states vary widely in how exactly they define UPL. For example, Alaska has a very narrow definition.
You'd have to be holding yourself out as an attorney when you're not one in order to be liable.
By contrast, Georgia has a pretty broad definition, so you could be liable as long as you're not a duly licensed attorney.
The National Center for State Courts, in a policy paper, calls it a, quote, state by state patchwork of regulation, and says that enforcement in reality is infrequent and ad hoc.
But a party could be subject to either civil penalties or criminal sanctions for UPL violations, depending on the jurisdiction.
[00:05:10] Speaker D: I gotcha. So it sounds like enforcement is pretty rare. But still, what kinds of actors tend to be pro-UPL enforcement versus anti-UPL enforcement?
[00:05:20] Speaker B: That's a great question.
So UPL rules actually started emerging in the US around the Great Depression after waves of litigation from state bar associations.
Nora Freeman Engstrom and James Stone wrote about how these state bars were going after entities like auto clubs who had been providing legal services for people.
So many UPL actions were and have been brought by state bar associations.
And then on the other side are access to justice advocates who argue that UPL creates unnecessary barriers for people to access legal help.
So Engstrom and Natalie Knowlton have written about how UPL restricts litigants from getting help and also restricts creators of legal tech from building litigant-facing technology. It disincentivizes them from creating that kind of innovative tech.
[00:06:11] Speaker D: Okay, so I'm hearing some of the negatives of UPL, but what's the argument in favor of UPL?
[00:06:18] Speaker B: Right. The main idea is consumer protection. The idea is we want to protect consumers by only letting them be represented by official licensed attorneys. Otherwise they'd be getting unqualified, incompetent legal representation.
However, Professors Engstrom and Stone do cast some doubt on whether that was really the motivation of state bar associations to begin with.
But in any case, that's the essential debate. So you have consumer protection on the UPL side versus access to justice.
[00:06:50] Speaker D: Okay, interesting. So now AI has entered the fray. What's the problem there exactly?
[00:06:57] Speaker B: The problem is in those cases of pro se litigants relying on AI, it might look like AI is practicing law, but AI isn't a lawyer. So isn't that maybe unauthorized practice of law? And if so, can AI providers, so think OpenAI and Anthropic, be held liable for that?
And by the way, we've seen a version of this film before.
State committees and bar associations have in the past pursued UPL theories against software. So I'm thinking of Quicken Family Lawyer back in the 90s, websites like LegalZoom in the 2010s, and apps like DoNotPay in the last five years. These all provided some kind of legal assistance.
[00:07:41] Speaker D: Okay, so AI versus UPL, what's the verdict?
[00:07:46] Speaker B: Great question. So no court has really addressed the problem yet. So it's kind of an open question. The National Center for State Courts wrote that, quote, AI companies face operating in legal gray zones, chilled by the threat of UPL enforcement that is inconsistent across state lines in both prohibitive language and murky mechanisms of enforcement.
However, I think on the whole, state UPL laws will likely have some space to permit general purpose AI, at least as a resource that's available to pro se litigants.
And there are a few factors to consider.
So first is the classic distinction between legal advice and legal information. When it comes to UPL generally, courts have said that if you're not a lawyer, you cannot give legal advice, but you can give legal information.
And these general purpose AI providers have been positioning themselves as being on the right side of that line.
They've built into their usage policies that it's on the end users to use their tools properly and not use them to obtain legal advice. So I'll just list a few examples.
OpenAI has in its usage policies that you can't use ChatGPT to get legal advice without the involvement of a licensed attorney.
Anthropic says that legal questions are high-risk use cases. So if you're going to use Claude for legal interpretation or legal guidance, and you're going to then present those model outputs to individuals or consumers, you need to have a human in the loop, more specifically a lawyer in the loop, and disclose that you used AI.
Google also has in its generative AI Terms of Service a disclaimer against using its services for legal advice.
It says the content is for quote, informational purposes only and is not a substitute for advice from a qualified professional.
[00:09:46] Speaker D: Okay, so it sounds like there are terms of service that distinguish between legal information and legal advice. But what if I really wanted to get legal advice? Could I get ChatGPT or Claude to give it to me?
[00:10:00] Speaker B: Right, So I was also really curious about and interested in that question.
And I think that part is not so clear.
Some people online have claimed to find workarounds, like instructing ChatGPT to, say, act like my lawyer.
Last week I tried asking the free version of ChatGPT, I'm involved in a legal dispute. Can you tell me whether to settle or proceed to trial?
And to its credit, ChatGPT said it couldn't tell me what I should do.
But then it gave me all these factors to consider, like the strength of my case and financial analysis and my risk tolerance. And then it asked me for more details about my case, like how much money is at stake and what has my lawyer said about my odds. And then finally it said it could help me think through things more concretely given that extra information.
So given that, maybe you can decide for yourself whether you think that looks more like information or advice.
[00:11:00] Speaker D: Okay, that's super interesting, thinking about that distinction between legal information and advice. Are there any other factors to consider here?
[00:11:10] Speaker B: Great question. So geography is another one. I mentioned this earlier, but some states have really narrow UPL laws like Alaska, where you have to be holding yourself out as an attorney to count as practicing law. Those are called holding out requirements.
So you might think in jurisdictions like those, if the AI provider includes some kind of disclaimer or usage policy that says something like this tool is not a substitute for an attorney, or this tool is not to be used for legal advice, then maybe there's a good chance it'll be okay.
Also, some jurisdictions like Texas say in their UPL laws that the practice of law doesn't include the design or creation of websites or software if those products, quote, clearly and conspicuously include a disclaimer that they are not substituting for an attorney's advice.
So if AI is treated like websites or software, that might also apply.
And as I mentioned, these frontier models are already putting disclaimers like that in their usage policies. So they seem to at least facially say they can't give legal advice when prompted and users shouldn't ask for it.
But in jurisdictions with stricter UPL laws, I think it's more of an open question.
And then some other things to think about.
Does a litigant's involvement with an LLM really look like a traditional attorney client relationship?
Ed Walters has an article about how these AI products don't involve engagement letters, conflict checks, or attorney client privilege.
And on that last point, the privilege, a recent ruling by Judge Rakoff in the Southern District of New York seems to support that.
He ruled that the written exchanges between a criminal defendant and Claude were not protected by attorney client privilege, even though the client had input information he had learned from counsel into Claude, and then afterward he shared Claude's outputs with counsel.
And then finally, anecdotally, it seems to me that if you look at the reasoning of some of the judges who have dealt with UPL cases, they seem to be particularly suspicious when there were also human non-lawyers involved in addition to the legal tech.
So that was the case with Upsolve, which was a New York nonprofit that both used software and trained non-lawyer volunteers called Justice Advocates.
And last year, the Second Circuit focused on those Justice Advocates and held that New York's UPL statutes applied to them, that this was a regulation of their speech, but also that the regulation was content neutral.
So earlier, the district court had entered a preliminary injunction against enforcement of the UPL statutes, but the Second Circuit vacated that and remanded, saying the level of scrutiny should have been lower, only intermediate scrutiny.
Another example was with LegalZoom. LegalZoom was an online document preparation service that had non-lawyer employees review the documents at various stages.
In 2011, a Missouri court denied summary judgment for LegalZoom on the issue of UPL. And in the order, the court wrote, quote, LegalZoom's legal document preparation service goes beyond self-help because of the role played by its human employees, not because of the Internet medium. LegalZoom employees intervene at numerous stages.
But anyway, that's not the case with LLMs, which is another small plus in their favor. It's just ChatGPT, not ChatGPT plus a human non-lawyer, helping you with your legal questions.
[00:14:57] Speaker D: Okay, so I've heard you say a couple of factors, things to think about, including geography and the strictness of UPL laws, similarity to an attorney client relationship, and then the lack of additional human involvement in some of these cases. Are there other defenses that AI providers could raise?
[00:15:14] Speaker B: Yes, although I'll mention again that these theories haven't yet been tested in the courts with respect to AI and UPL specifically, so it's unclear how effective these would really be in the wild, so to speak.
But first, the AI provider could try a First Amendment argument.
[00:15:32] Speaker D: Okay, and how strong do you think that argument is around the First Amendment?
[00:15:37] Speaker B: I think there are definitely some hurdles to clear. So you'll recall for Upsolve, the First Amendment issue was with respect to the human Justice Advocates. But if you just have an LLM, I think the very first question to ask is, are AI outputs even protected speech at all?
On the creator side, I think it's questionable.
Some scholars have argued yes, but at least one district court, in an early ruling in the Character.AI litigation, was not prepared to say so.
However, there might be something in the First Amendment rights of users to receive AI outputs as speech. That district court I just mentioned did acknowledge users' First Amendment rights to receive the chatbot's speech.
And Eugene Volokh has described the rights of listeners as likely the strongest reason for First Amendment protection of AI outputs. And that dovetails with the idea that pro se litigants perhaps have a First Amendment right to receive LLM outputs.
[00:16:54] Speaker D: Okay, interesting. So there's this difference between AI outputs as protected speech versus the right of users to receive AI outputs as speech. We'll definitely have to keep an eye out for more cases on AI and the First Amendment. So beyond that, are there any other theories, maybe from older tech and UPL cases, that you think might also work?
[00:17:13] Speaker B: Another great question. And so yes, AI providers could also try to assert antitrust theories saying that state bars are blocking competition for and monopolizing legal services.
So LegalZoom, which had made an antitrust argument against the North Carolina Bar, eventually survived after reaching an agreement with the bar association.
And in 2023, the DOJ Antitrust Division wrote a letter to the North Carolina General Assembly saying, quote, unduly broad restrictions on the practice of law impose significant competitive costs on consumers, workers and innovation, and that in the absence of evidence of legitimate and substantiated harms to customers, restraints on competition in the market for legal services should be narrowly tailored to avoid unnecessarily limiting competition. And so that thinking might also apply to AI and UPL.
[00:18:09] Speaker D: Okay, super interesting. Well, finally, we've been talking about general purpose AI like ChatGPT and Claude, but I've also been hearing about legal AI tools like Harvey and Legora. Is there a difference between general purpose AI and specialized legal AI? Is one more at risk of UPL violation than the other?
[00:18:28] Speaker B: Right. So you might think of categorizing different types of legal AI, and absolutely, you're right. Those high-end commercial services like Harvey and Legora are one type, but those are largely marketed as tools to aid practicing licensed lawyers.
So there may be other professional conduct rules implicated, but non lawyer UPL will probably not be the focus.
And then some courthouses have developed chatbots on their court websites. These court-endorsed chatbots are relatively limited in scope and, I think, probably built to comply with UPL rules.
So for example, I tried asking the Maricopa County Superior Court's chatbot Clio the same question I asked ChatGPT: I'm involved in a legal dispute. Can you tell me whether to settle or proceed to trial?
And this is what Clio said back to me.
Online payments are now being accepted for criminal payments and restitution, billing and filing fee deferrals, and non-criminal court-ordered fees [link]. You can also check your balance for links to more detailed information. Please let me know what type of payment you would like to make.
So all in all, I think less capability. It didn't really answer my question, but it also seems less at risk of veering toward legal advice compared to ChatGPT.
I think it's the commercial tools aimed at the general public, so one that comes to mind is DoNotPay, which billed itself as a robot lawyer, that are most at risk.
The California State Bar did investigate DoNotPay for UPL, and then the FTC also got involved for alleged FTC Act violations.
So compared to general purpose AI tools, I think these legal AI tools are more at risk. And one reason is that general purpose AI tools can say, I think more plausibly, we're just here to give general information.
[00:20:28] Speaker D: Right?
[00:20:29] Speaker B: We're not purporting to be any kind of legal service provider.
And another reason is that these legal AI tools often are designed and used to prepare legal documents, and that's in the danger zone for UPL liability. Many states, North Carolina is one, explicitly name preparation of legal documents as a red flag element in their UPL statutes.
[00:20:53] Speaker D: Okay, wow, this is a lot to think about, but you mentioned that this was the first of two. So what's the topic of the next blog?
[00:21:02] Speaker B: Right, so today we've talked about what UPL is and, descriptively or perhaps predictively, why current UPL rules probably do permit general purpose AI.
And in my next post and podcast, I'll go into the normative question of whether UPL rules should permit general purpose AI. As a preview, I think the answer should be yes. But that is all for today's podcast, and thanks so much for joining, Sydney, and thank you all for listening.
[00:21:31] Speaker A: Proof Over Precedent is a production of the Access to Justice Lab at Harvard Law School.
Views expressed in student podcasts are not necessarily those of the A2J Lab.
Thanks for listening. If we piqued your interest, please subscribe wherever you get your podcasts. Even better, leave us a rating or share an episode with a friend or on social media.
Here's a sneak preview of what we'll bring you next week.
[00:21:56] Speaker C: When I was first teaching UPL, we really focused on the unauthorized practice of law by a natural person, or maybe an entity, you know, a legal person. And then there was a little bit of a shift that started coming with respect to software. And then the software got a little bit more sophisticated and we saw some cases there. And now we are at generative AI, of course. And so, you know, for a long time this has been about UPL as it relates to persons, and now we've really introduced this new thing, or quasi-person, I guess some people might say. But I see, really, with the access to justice movement and the way I became engaged with it, is thinking about UPL as a barrier to access to justice, which I would argue that it is, despite the very well-meaning intentions behind our unauthorized practice of law rules. And I would say that today, where we are is we have these two forces pushing on the unauthorized practice of law, and that is new types of legal service providers, and then of course, generative AI.