Brett Trout
The legal profession is undergoing a digital transformation, with artificial intelligence (AI) playing an increasingly prominent role in legal research and document drafting. Some states even require attorneys to stay abreast of technological advancements, such as AI, in the legal field. For example, the Iowa Rules of Professional Conduct state:
“To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education, and comply with all continuing legal education requirements to which the lawyer is subject.” Iowa R. Prof. Cond. 32:1.1 Comment [8]
Despite this mandate, many attorneys are understandably wary of incorporating advanced technology, especially AI, into their legal practice. While AI promises efficiency and cost savings, a recent case in the United States District Court for the District of Wyoming serves as a stark reminder of the dangers of including unverified AI-generated content in court filings.

The Motion That Raised Alarms
In Wadsworth v. Walmart Inc. & Jetson Electric Bikes, LLC, attorneys for the plaintiffs filed a motion in limine that cited nine cases. The court discovered that eight of those cases did not exist. Upon investigation, Rudwin Ayala, an attorney at the country’s largest personal injury firm, Morgan & Morgan, indicated that he had drafted the motion and uploaded it to “MX2.law” to add supporting case law. The website is apparently an in-house database launched by Mr. Ayala’s firm. Mr. Ayala stated this was the first time he had ever used AI in this manner.
Unbeknownst to Mr. Ayala, the AI system had fabricated the case citations. Without verifying their accuracy, he added the AI-generated citations to his motion, which was then filed with the court bearing the signatures of two other attorneys for the plaintiffs. Mr. Ayala does not appear to have learned that the cases were non-existent until the court flagged the questionable citations and issued an order to show cause demanding that the attorneys and law firms involved produce the cited cases or face sanctions.
Upon investigation, the attorneys and law firms admitted that the citations were AI-generated hallucinations—false information produced by AI—resulting in the court imposing disciplinary actions on all of the attorneys involved.
What Are AI Hallucinations?
AI hallucinations occur when an AI model generates information that appears credible but is entirely fictitious. In the legal field, this can manifest as fabricated case law, incorrect citations, or misleading summaries of legal principles. AI systems, including chat-based platforms and proprietary legal tools, generate responses based on patterns in their training data rather than real-world verification. If unchecked, these hallucinations can mislead attorneys, courts, and clients, potentially resulting in legal errors and ethical violations.
The Court’s Response
In deciding not to sanction the law firms involved, the court took into consideration that the firms had taken several remedial steps to identify the problem and implement firm-wide policies to prevent such issues from arising in the future, including:
- Promptly withdrawing the Motion in Limine;
- Being honest and forthcoming about the use of AI in generating the case citations;
- Paying opposing counsels’ fees for defending the Motion in Limine; and
- Implementing policies, safeguards, and training to prevent another occurrence in the future (and providing proof to the court of such measures).
Despite these remedial steps, transparency, and apologetic sentiments, the court found that all three attorneys whose signatures appeared on the motion had violated Fed. R. Civ. P. 11(b), meriting sanctions for their involvement in the filing. These sanctions included:
- Mr. Ayala – revocation of his pro hac vice status and a $3,000 fine. Aggravating factors included: 1) Mr. Ayala was the attorney who used the AI platform to generate the case citations and include them in the motion; 2) the number of hallucinated cases in the filing compared to real cases; 3) Mr. Ayala’s apparent access to legal research resources; and 4) the fact that attorneys have been on notice of generative AI’s propensity to hallucinate cases for quite some time. A mitigating factor warranting a less severe punishment was Mr. Ayala’s honesty and candor with the court when confronted with the fake citations.
- Timothy Michael Morgan, Pro Hac Vice, Morgan & Morgan – $1,000 fine. Although Mr. Morgan was apparently not provided with a copy of the motion prior to filing, and had not read it, he did affix his e-signature to the bottom of the Motion in Limine.
- Taly Goody, Goody Law Group, Palos Verdes Estates, CA – $1,000 fine. Although Ms. Goody was apparently not provided with a copy of the motion prior to filing, she did affix her e-signature to the bottom of the Motion in Limine. Although she was technically local counsel, the court noted that this was Ms. Goody’s first and only appearance in a case in the District of Wyoming.
The court noted that although Mr. Morgan and Ms. Goody appeared to have relied on Mr. Ayala’s reputation and experience to satisfy their Rule 11 obligations, that reliance was inconsequential in determining whether a violation occurred.
These sanctions highlight the legal system’s intolerance for inaccuracies stemming from AI misuse and underscore the ongoing ethical obligations of attorneys to not only be familiar with burgeoning technology, but to use it in a manner consistent with the rules governing civil procedure and professional conduct.
Lessons for Attorneys Using AI in Legal Research
For legal professionals, this case serves as a critical reminder:
- Always Verify AI-Generated Citations – Treat AI as a research aid, not an infallible authority. Verify all statutes and case citations.
- Understand the Limits of AI – AI-generated content should never replace human expertise and professional judgment. As the subject matter experts, lawyers are the last line of defense, ensuring that all court filings under their names are based on real, verifiable sources.
- Take Responsibility for Legal Filings – Whether they write them or not, attorneys who sign court documents are personally responsible for their accuracy. Blindly relying on AI or lead counsel representations can lead to ethical breaches and professional discipline.
- Adopt AI Safeguards in Law Firms – Implement law firm policies that require verification of AI-generated legal research before submission to the court.
- Local Counsel is Liable – Even when a reputable attorney from a reputable firm represents that a court filing is accurate, attorneys have a legal and ethical duty to confirm that every document submitted to the court under their name is truthful and that all case citations are real and stand for the proposition asserted.
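As a rough illustration of the kind of firm-level safeguard the last two lessons describe, the hypothetical Python sketch below extracts reporter-style citations from a draft filing and flags any that do not appear on a list the attorney has independently verified. The regex, function names, and sample text are all illustrative assumptions, not part of any real product, and a simplified pattern like this is no substitute for checking each case in an actual legal research database:

```python
import re

# Simplified pattern for common federal reporter citations,
# e.g. "123 F.3d 456" or "999 F. Supp. 2d 1". Real Bluebook
# citation formats vary far more widely than this sketch covers.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|F\. Supp\.(?: 2d| 3d)?)\s+\d{1,4}\b"
)

def extract_citations(text):
    """Return every reporter-style citation found in a draft."""
    return CITATION_RE.findall(text)

def flag_unverified(draft_text, verified):
    """List citations in the draft absent from the verified set."""
    return [c for c in extract_citations(draft_text) if c not in verified]

# Hypothetical draft: one verified citation, one that no one has checked.
draft = ("See Smith v. Jones, 123 F.3d 456 (9th Cir. 1997); "
         "Doe v. Roe, 999 F. Supp. 2d 1 (D. Wyo. 2020).")
verified = {"123 F.3d 456"}

print(flag_unverified(draft, verified))  # → ['999 F. Supp. 2d 1']
```

A script like this can only surface candidates for review; the attorney still must pull each flagged (and unflagged) case and confirm it exists and supports the proposition cited.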
The Future of AI in Law
AI is a powerful tool that is becoming ubiquitous for its ability to provide clients with better, faster, and cheaper legal services. Given the usefulness of AI and state requirements that attorneys be technologically adept, attorneys cannot bury their heads in the sand and simply ignore AI. Indeed, it may not be long before courts start sanctioning attorneys for failing to understand the uses and limitations of AI in the practice of law.
Attorneys need to understand that AI is no substitute for diligence and professional responsibility. They must balance technological innovation with ethical standards to prevent the misuse of AI-generated information. As courts continue to scrutinize AI’s role in legal proceedings, attorneys must adapt by educating themselves, integrating AI responsibly, and rigorously validating AI output.
This case is yet another AI wake-up call for the legal profession: the future of AI-assisted law is promising, but attorneys must approach this future armed with ongoing education and a commitment to the integrity of the profession.