Has Artificial Intelligence Made Attorneys Irrelevant? Not so fast.
By: Chase H. Lawson and Colton L. Adams*
Over the past several years, generative artificial intelligence (“AI”)—programs based on complex mathematical systems that generate outputs by analyzing enormous amounts of digital data[1]—has rapidly reshaped many industries, including the legal profession. From LexisNexis’s launch of Protégé to Westlaw’s acquisition of Casetext’s CoCounsel, AI-powered tools have become readily available to lawyers. But these tools are not reserved for the profession; platforms like ChatGPT, Google Gemini, Grok, and many others are widely accessible to the public, helping with everything from scheduling and research to writing and brainstorming. As a result, legal professionals and non-lawyers alike are increasingly turning to AI for tasks once handled exclusively by humans.
This raises important questions: How reliable are these tools, particularly regarding legal matters? And more importantly, can these tools truly replace a trained attorney’s judgment, experience, and thinking?
In February 2025, a New Jersey attorney was reprimanded for relying on AI in place of his professional judgment. The attorney had used ChatGPT to draft two separate motions filed in the Eastern District of Pennsylvania.[2] Upon review, the Court quickly realized that the motions cited fake cases (referred to as “hallucinations”) and overruled or inapposite cases that did not stand for the propositions asserted.[3] When questioned about the mishap, the attorney stated that never in his “wildest dreams” did he think AI would fabricate legal citations.[4] Nevertheless, U.S. District Court Judge Kai N. Scott found that the attorney violated Rule 11 of the Federal Rules of Civil Procedure by submitting the motions, reasoning that “without prudential scrutiny, use of artificial intelligence can turn into outright negligence.”[5] Judge Scott pointedly explained that if the attorney had taken “elementary steps to verify the cases, he would have learned that [the cases] are as artificial as the intelligence behind them.”[6]
Unfortunately, this incident was not an isolated one. The New Jersey attorney is just one of several legal professionals who have recently faced consequences for improperly relying on AI to draft court filings.[7] In Mata v. Avianca, Inc., attorneys from a New York law firm were likewise sanctioned after submitting a brief that contained non-existent judicial opinions with fake quotes and citations generated by ChatGPT.[8] These cases highlight the potential pitfalls of using AI in legal work without proper oversight.
However, the question remains: Can artificial intelligence be useful to attorneys and non-lawyers on legal issues, or does it have no place in legal practice? The truth is that AI tools can offer valuable assistance to both groups, but only when used with care, and never as a replacement for professional judgment.
Can People Trust AI to Give Them Legal Advice?
People, especially those without a legal background, should not rely on AI tools for legal advice. While platforms like Google Gemini, ChatGPT, and Grok allow users to ask questions and receive generative answers in real time, the accuracy of these tools can be inconsistent. For example, generative AI tools cannot always reliably determine which provisions a contract needs, ensure that cited caselaw is valid and relevant, or guarantee that their advice follows the applicable rules of procedure. These tools are known to occasionally “hallucinate” by producing inaccurate or entirely fabricated information.[9] Hallucinations often occur because the AI tool lacks the information needed to give a correct answer.[10] More specifically, AI tools (namely, large language models such as Google Gemini, ChatGPT, and Grok) can only generate responses based on patterns in the data their developers trained them on and on the prompts they receive from users.[11] These tools do not “know” the law; they lack real-time access to up-to-date legal databases and the nuanced judgment required to interpret legal rules in context. As a result, they may provide incomplete, outdated, or flatly incorrect answers, especially when asked legal questions.
For these reasons, people should not rely on AI as a substitute for legal counsel in legal proceedings, disputes, or even contract drafting. Although hiring an attorney may involve upfront costs, it is often far more cost-effective in the long run. A contract or legal document well drafted by a qualified attorney, as opposed to one found online or generated by an AI tool, can help prevent costly litigation or other issues down the road. Attorneys are far better equipped to handle legal matters, drawing on legal skills, knowledge, and experience that no AI tool can match.
A Warning to Lawyers Who Use AI in Practice
AI can be a helpful tool for practicing attorneys, but only when used carefully and responsibly. In the evolving legal world, AI tools like LexisNexis’s Protégé and Westlaw’s CoCounsel can help find cases and assist with routine tasks, but lawyers must evaluate any outputs these tools generate and ensure they are legally sound. Simply put, AI can supplement legal work when used responsibly, but it cannot replace an attorney’s professional judgment.
As Harvard Law School Professor David Wilkins put it, a lawyer who uses AI should “review it — just as any good senior lawyer will review the work of their juniors before sending it out in the world.”[12] In other words, attorneys must diligently verify the accuracy and reliability of AI-generated content before incorporating it into any legal work.
To do this, attorneys can take a few practical steps. First, they should confirm that any case cited by an AI tool is real and supports the legal proposition for which it is cited. AI “hallucinations”—fabricated cases, quotes, or citations—remain a known issue with many generative tools. By catching and correcting these errors, attorneys uphold their ethical duties and ensure the accuracy of their work. Second, attorneys should always confirm that any case cited by an AI tool is still good law. Some AI tools may present cases that have been overturned or otherwise invalidated as if they were still authoritative, and attorneys must ensure they rely on good law to advance their clients’ interests. Practices like these help ensure that attorneys use AI tools responsibly.
Furthermore, the ABA’s Model Rules of Professional Conduct clarify that lawyers must provide competent representation to their clients.[13] Under Rule 1.1, competent representation includes exercising the “legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”[14] Additionally, it is the responsibility of every attorney to remain informed about “the benefits and risks associated” with the technologies they use to provide their clients with legal services.[15] In short, understanding how to use AI tools—and, more importantly, when not to rely on them—is now part of the modern attorney’s duty.
Final Thoughts
President John F. Kennedy once said, “Automation does not need to be our enemy. I think machines can make life easier for men, if men do not let machines dominate them.” That sentiment holds true today more than ever, as artificial intelligence becomes increasingly embedded in our personal and professional lives.
For non-lawyers, relying on AI to draft contracts or navigate legal issues can be extremely risky, as errors in document drafting, legal filings, or even litigation strategy can have severe consequences. That is why it is essential to consult a qualified attorney who can offer strategic legal advice, ensure compliance with applicable law, and advocate for your interests. While AI may seem like an easy shortcut, it cannot replace the insight, training, and judgment of an experienced legal professional.
For attorneys, the message is equally clear: AI can be a useful tool, but only when used thoughtfully and responsibly. Lawyers must continue to exercise independent professional judgment when integrating AI into their practice and must ensure that any work involving AI is legally accurate and ethically compliant. Ultimately, technologies like AI should enhance law practice, not undermine it or replace independent legal analysis.
AI is likely here to stay, but so is the need for sound legal counsel. No matter how advanced the technology becomes, it cannot replace the nuance, ethics, and human judgment at the heart of effective legal representation. In the end, AI may help attorneys work smarter—but it has not made them irrelevant.
* Mr. Lawson, our guest blogger, is a rising third-year law student at Belmont University College of Law, Class of 2026. He worked as a summer associate with Meridian Law in the summer of 2025. Mr. Adams is an associate attorney at Meridian Law.
[1] See, e.g., Cade Metz & Karen Weise, A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse, N.Y. Times (May 6, 2025), https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html.
[2] See Bunce v. Visual Tech. Innovations, Inc., No. 23-1740, 2025 WL 662398 at *1 (E.D. Pa. Feb. 27, 2025).
[3] Id.
[4] Id. at *7.
[5] Id. at *9-10.
[6] Id. at *8.
[7] See, e.g., Larry Neumeister, Lawyers submitted bogus case law created by ChatGPT. A judge fined them $5,000, AP News (June 22, 2023, 5:16 PM), https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c.
[8] Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023).
[9] Conor Murray, Why AI ‘Hallucinations’ Are Worse Than Ever, Forbes (May 6, 2025, 1:12 PM), https://www.forbes.com/sites/conormurray/2025/05/06/why-ai-hallucinations-are-worse-than-ever/.
[10] Id.
[11] Id.
[12] Jeff Neal, The legal profession in 2024: AI, Harvard Law Today (Feb. 14, 2024), https://hls.harvard.edu/today/harvard-law-expert-explains-how-ai-may-transform-the-legal-profession-in-2024/.
[13] Model Rules of Pro. Conduct r. 1.1 (Am. Bar Ass’n 2023).
[14] ABA Comm. On Ethics & Pro. Resp., Formal Op. 512 (2024).
[15] Id.
*Disclaimer: The information in this blog post (“post”) is provided for general informational purposes only and may not reflect the current law in your jurisdiction. No information contained in this post should be construed as legal advice from Meridian Law, PLLC, or the individual author, nor is it intended to be a substitute for legal counsel on any subject matter. No reader of this post should act or refrain from acting based on any information included in or accessible through this post without seeking the appropriate legal or other professional advice on the particular facts and circumstances at issue from a lawyer licensed in the recipient’s state, country, or other appropriate licensing jurisdiction.