Hi Everyone, thank you for stopping by to read the latest issue of Law Botics. In this issue we explore some of the impediments to the adoption of AI in the legal profession and provide an overview of the issues involved. We also share some fascinating research news: an AI system has passed a law school exam, albeit with a mediocre grade. What do you call a Harvard Law graduate who graduated with a 2.5 GPA? A lawyer : ) We are certainly in the first few pages of the first chapter of this growing story, with technology, in my opinion, simply outpacing the profession's ability to proactively address its implications. Enjoy. Best, Stephen
This week it was reported that the popular AI legal app, DoNotPay, cancelled its very public plans to use its alleged AI system in an actual courtroom setting. As you may recall reading in the press, DoNotPay planned to use its AI system to defend a real-life client in front of a real-life traffic court judge. Specifically, the consenting volunteer would wear Apple AirPods or a similar earpiece, through which the AI system could hear the judge and, in real time, "whisper" to the client exactly what to say in arguing his or her case.
Such a fantastical offer and publicity stunt caught the attention of the public at large (which undoubtedly was at the heart of DoNotPay’s intent), but also the attention of the applicable state bar association and local prosecutors.
In a tweet issued by its CEO, Joshua Browder, DoNotPay announced that it had cancelled its court efforts after the CEO was threatened with criminal prosecution by the state bar. The tweet did not identify the bar association or the specific violations it alleged, but one can easily surmise that violations of court rules, logistical impracticality, and unauthorized practice of law issues were at the forefront of its legitimate threats. Such a swift and unflinching response by the bar in question should not have come as a surprise to the brash CEO and his popular company.
Good morning! Bad news: after receiving threats from State Bar prosecutors, it seems likely they will put me in jail for 6 months if I follow through with bringing a robot lawyer into a physical courtroom. DoNotPay is postponing our court case and sticking to consumer rights:
— Joshua Browder (@jbrowder1) January 25, 2023
This event is noteworthy for two reasons. First, the potential extent to which AI may disrupt how the practice of law is conducted in the future, and what its permissible uses will be. Second, the regulatory and legal pushback that will naturally come from the legal profession until the applicable bar associations can fully digest the magnitude of the potential benefits and risks this new technology poses and issue a more detailed rule framework.
While many, including myself, believe that AI systems can bring significant benefits to the practice and understanding of the law, such as greater public access, increased efficiency, and improved decision-making, the technology also raises ethical considerations and issues that must be addressed. The following is a high-level overview of some of the concerns that loom in the future.
A primary ethical concern is the accountability of AI systems. In many cases, it is difficult to understand how an AI system reached its output or decision, making it difficult to hold anyone accountable for its actions. This lack of accountability can be particularly problematic in the legal system, where advice or decisions can have serious consequences for individuals. Who is ultimately responsible for such problems and mistakes? The AI system or vendor, the supervising attorney, or both? At its Annual Meeting in August 2019, the ABA's House of Delegates adopted Resolution 112, which urged "courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence ("AI") in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI."
It will be critical that systems and best-practice processes are adopted to ensure that such "control and oversight" is maintained and can be demonstrated and reported. Vendors will likely need to provide reporting and audit functions as part of their solutions to effectuate such transparency, controls, and oversight.
Confidentiality is also a significant concern when using AI in the legal profession. AI systems often require access to large amounts of personal data for training purposes, which can raise concerns about confidentiality and security. This is particularly true when the data used to train an AI system includes information about a client who may not have consented to its use. Use of "chatbot" systems by lawyers in connection with existing clients may run afoul of their confidentiality obligations under the applicable ethical rules, including Rule 1.6(a) of the Model Rules of Professional Conduct, as well as threaten the protections afforded by the attorney-client privilege. These are serious issues that will require serious solutions.
Finally, there is the question of when an AI system has independently engaged in the unauthorized practice of law. Treating the AI as a self-actualized entity or person may seem like a moment out of a sci-fi movie or novel, but courts have already begun to address this issue.
In the matters brought against LegalZoom and, more recently, against Upsolve, the applicable courts identified and discussed the underlying judicial principles for making such a determination on a case-by-case basis, examining the role of automation software in the practice of law. I suspect such principles will be tested again as generative AI systems become more mainstream. Protecting the public from incompetent and unethical lawyers (whether human or not) is at the heart of the policy basis for the self-regulation of the legal profession. To me, the interesting question on the horizon is what the answer will be when an AI system is just as competent and ethical as the average lawyer, or even more so.
We will be diving deeper into each of these concerns in future issues of Law Botics, but suffice it to say that these hurdles will need to be overcome in the coming years as we continue to see the rapid advancement and deployment of these systems in other professions.
In the end, despite DoNotPay's recent and predictable setback, AI has the potential to become more prevalent in the legal profession, and it is important to consider the ethical issues it raises, as urged by the ABA in 2019. These include accountability, bias, confidentiality, and unauthorized practice of law implications. While DoNotPay clearly overreached with its stunt-like attempt, legitimate uses of AI are on the horizon that will create benefits the marketplace and society will come to expect. How quickly we move toward that new world will depend in part on our own willingness as a profession to examine and diligently address these issues.
This Week In the News:
It may not have been an “A” student, but AI system passes law exams. Read more.
Legal expert believes that AI will transform the legal industry, if the lawyers don’t stand in the way. Read more.
Gibson Dunn publishes 2022 review of comprehensive AI regulatory and legislative developments (including proposed federal and state bills). Read here.
Contract management vendor announces integration of AI to enable interactive conversations about contract analysis and insight. Read more.
Hope you enjoyed this latest issue of Law Botics. Please feel free to share it with your colleagues and others.
Until next week.