Four Oregon cases—including one that ended without sanctions—tell you what you need to know about using AI in Oregon’s courts.
In December 2025, the Oregon Court of Appeals issued a published order in Ringo v. Colquhoun Design Studio, LLC, 345 Or App 301, 582 P3d 695 (2025), establishing what has now become a framework for sanctioning attorneys who submit AI-generated fabricated content to Oregon courts. Days later, a federal court adopted that framework and applied it on a more significant scale. Just a few weeks ago, the Court of Appeals applied its own framework again in a $10,000 sanctions case. And a fourth Oregon case shows what happens when an attorney discovers an AI error and takes concrete action to address it. Here’s the full picture.
Ringo v. Colquhoun Design Studio (December 2025)
The facts of Ringo will be familiar to anyone who has followed the opinions addressing generative AI across U.S. courts. The respondents’ counsel filed a brief citing two non-existent cases and a fabricated block quotation. The Court of Appeals issued an order directing counsel to show cause why the brief should not be stricken and the attorney sanctioned. Counsel responded but did not address how the errors occurred, claiming simply that they were “inadvertent.”
The Court of Appeals in Ringo found that counsel had violated ORCP 17 C(3)—Oregon’s state analogue to Rule 11—by certifying that his legal arguments were “warranted by existing law” when they plainly were not. The court was equally concerned that the attorney never confirmed he had used AI, never described his research process, and never explained why his verification failed. He simply blamed a vague research tool and promised to do better.
The court struck the brief, ordered counsel to pay $2,000 in sanctions, and required a new brief with a certification that counsel had actually read every case cited and verified that every source existed. The sanctions breakdown: $500 per fake citation and $1,000 per fabricated quotation or false statement of law.
Why “hallucination” is the wrong word
In its order, the Ringo court spent considerable time on terminology. Many courts have adopted the word “hallucination” to describe when tools like ChatGPT generate non-existent cases. The Ringo court rejected it:
“We also recognize that it has become common to refer to cases and principles fabricated by artificial intelligence as ‘hallucinations.’ We reject that terminology because it obscures both the nature and the seriousness of the situation we face. … [G]enerative artificial intelligence is not perceiving nonexistent law as the result of a disorder. Rather, it is generating nonexistent law in accordance with its design.”
345 Or App at 303–04.
The distinction matters for attorney accountability. A “hallucination” suggests the AI malfunctioned, but large language models are working exactly as designed when they generate plausible-sounding but entirely fabricated legal citations—the models are built to predict likely word sequences, not to verify truth. Calling these fabrications “hallucinations” subtly shifts blame from the attorney who failed to verify to the tool that supposedly misfired. Consider it the legal equivalent of “my calculator gave me the wrong answer” when you typed in the wrong numbers—that’s not an explanation that would fly on an exam, and it’s not going to hold up in court. The Court of Appeals’ linguistic precision serves a clear purpose: attorneys using AI tools have a nondelegable responsibility to verify their work—full stop.
Couvrette v. Wisnovsky (December 2025)
Ringo established clear monetary sanctions. Days later, U.S. Magistrate Judge Clarke, in Couvrette v. Wisnovsky, adopted the framework and demonstrated that Ringo sets a floor, not a ceiling. 2025 WL 4109655 (D. Or. Dec. 12, 2025). In Couvrette, counsel (admitted pro hac vice) filed three briefs containing 15 nonexistent cases and 8 fabricated quotations. Judge Clarke imposed $15,500 in monetary sanctions payable to the court, struck all sanctionable briefs without leave to refile, ordered counsel to pay the opposing party’s attorneys’ fees, dismissed the offending plaintiffs’ claims with prejudice, issued a show cause order to local counsel, and directed the Clerk to notify the Oregon State Bar. Id. at *15–16.
Judge Clarke used Ringo as a baseline but made clear that aggravating factors—repeated local rule violations, false certifications, stonewalling show cause orders, attempted cover-ups—can make sanctions catastrophic. In Couvrette, the monetary sanctions were actually the least significant consequence—the client lost her case entirely because of the attorney’s behavior.
Judge Clarke also directly addressed whether $500 per fake case is sufficient deterrence:
“Attorneys weigh risks and rewards for a living. With a known price tag, an attorney could decide that the risk [of] paying $500 per non-existent case or $1,000 per fabricated quotation is outweighed by the potential reward of getting away with outsourcing to a chatbot the time intensive work of legal research and writing.”
The court’s solution was to layer attorneys’ fees on top of the per-violation sanctions, making the total cost unpredictable and potentially enormous.
Doiban v. Oregon Liquor & Cannabis Comm’n (March 2026)
Just a few weeks ago, in Doiban v. Oregon Liquor & Cannabis Commission, the Court of Appeals applied the Ringo framework again. 347 Or App 742, — P3d — (2026). The numbers there are also significant: at least 15 fabricated case citations, at least 9 fabricated quotations attributed to a mix of real and nonexistent cases, and multiple instances where real cases were cited for propositions they don’t stand for. Running the Ringo formula produces $16,500 in sanctions—the court capped it at $10,000.
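For readers who want to see the arithmetic, the per-violation formula from Ringo can be sketched in a few lines. This is an illustration only: the dollar figures come from the opinions discussed above, while the function name and the treatment of the $10,000 figure as a discretionary cap are my own framing, not anything the court codified.

```python
# Illustrative sketch of the Ringo per-violation sanctions formula,
# applied to the numbers reported in Couvrette and Doiban.

FAKE_CITATION = 500      # per non-existent case cited
FAKE_QUOTATION = 1_000   # per fabricated quotation or false statement of law

def ringo_formula(fake_citations: int, fake_quotations: int) -> int:
    """Baseline monetary sanction under the Ringo per-violation formula."""
    return fake_citations * FAKE_CITATION + fake_quotations * FAKE_QUOTATION

# Couvrette: 15 nonexistent cases and 8 fabricated quotations
print(ringo_formula(15, 8))   # the $15,500 Judge Clarke imposed

# Doiban: at least 15 fake citations and at least 9 fake quotations,
# which the court then reduced to $10,000 in its discretion
print(ringo_formula(15, 9))
```

As Couvrette warns, this baseline is a floor: attorneys’ fees, stricken briefs, and dismissal can dwarf the per-violation total.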
Counsel in Doiban offered a detailed account of how the fabrications ended up in the brief: his staff, after finding limited results on Westlaw and Lexis, turned to Google searches that surfaced what appeared to be legitimate case citations; those were copied directly into the brief without further verification. No ChatGPT, counsel insisted—just a misplaced trust in search results.
In response to this explanation, the Court of Appeals stated that the representations were accepted “for the purpose of determining the sanctions”—and then added: “we do not necessarily accept all of the representations as fact.” 347 Or App at 745. Given the volume and character of the fabrications—at least 15 fake citations and 9 fake quotations, some attributed to real cases—the court’s skepticism is understandable. The pattern is consistent with AI-generated content, whatever its source.
Ultimately, though, the source doesn’t matter much to the legal analysis. As the court put it: “Whether an attorney relies on a partner or associate for an initial draft of a brief or, instead, overly relies on a computer * * * prior to filing, the attorney signing the final filed brief is certifying that the citations therein are accurate and not contrived from thin air. The advent of generative AI did not change that principle * * *.” Id. at 749. The explanation for how unverified citations ended up in a brief is less important than the fact that they did—and that someone signed it.
Whatever the source of the fabrications, the conduct that drove the sanctions decision was the delay. Of note, opposing counsel in Doiban had flagged the problem in an email on April 3, 2025, before even filing the answering brief. The answering brief itself—filed April 16, 2025—included a footnote noting that citations in the opening brief could not be located. Counsel did not respond to the email, did not address the fabrications in the reply brief, and took no corrective action for seven months—until questioned face-to-face at oral argument in November 2025. The court found counsel had “minimized the gravity of the situation—at least until face-to-face with our court.” Id. at 747–48.
In the end, the court reduced the sanction from the $16,500 formula result to $10,000 for three reasons: the show cause response was filed before Ringo was decided; counsel provided a detailed explanation of how the errors occurred; and counsel acknowledged the problem and implemented new office procedures. But the court was clear that a significant sanction remained appropriate regardless. Counsel “at least should have known, well before our decision in Ringo, that submitting a brief with unchecked and ultimately fabricated citations may breach an attorney’s duties of professionalism, truthfulness, and candor to the court.” Id. at 749.
The flip side—and the importance of prompt disclosure
To be sure, not every AI error in an Oregon court has ended in sanctions. Green Building Initiative, Inc. v. Peacock (D. Or. 2025) shows what happens when an attorney takes affirmative steps to correct the record after discovering an AI-generated mistake.
There, an attorney used Microsoft Copilot to edit (not research) a brief. Copilot inserted two fabricated citations—one of which was a variation on a real Oregon Court of Appeals case frequently cited in anti-SLAPP litigation. See Green Building Initiative, Inc. v. Peacock, 350 F.R.D. 289 (D. Or. 2025). When the error came to light, counsel came forward with a full explanation, reimbursed the client for fees connected to the filing, offered to pay opposing counsel’s fees, committed to additional CLE on AI risks, and donated $5,000 to the Campaign for Equal Justice. Judge Simon declined to impose formal sanctions, finding he was satisfied with the remedial steps already taken. See Green Building Initiative v. Peacock, 2025 WL 3198411 (D. Or. Nov. 12, 2025).
The contrast with Couvrette and Doiban is stark. Same underlying problem; entirely different outcomes. The variable that drove the difference was not the number of fake citations but how quickly and proactively each attorney responded when the problem was discovered. In that respect, Green Building Initiative is as useful a precedent as Ringo for Oregon practitioners: immediate, complete disclosure isn’t just the ethical course; it can be the difference between no sanctions and losing your client’s case.
What should you know?
The lesson across all four cases is consistent: verify everything, and if you find an error, disclose and correct it immediately.
If you use AI tools for legal research or drafting—and this article is not to say that you shouldn’t; AI can be genuinely useful for drafting and brainstorming—verify every citation before filing. Read the actual cases, not just the headnotes. Be prepared to certify you’ve done both, as Oregon courts may require this going forward. And verification means checking in a reliable legal database: Westlaw, Lexis, or the Oregon or Pacific Reporters (if you’re still going to the law library). The cases tell us that the source of a fabricated citation is ultimately irrelevant; what matters is that someone signed a brief certifying it was real.
If you discover errors post-filing, don’t file a “Notice of Errata” that papers over the problem. Explain what happened, how it happened, and what you’ve done to prevent recurrence. And don’t wait. Doiban makes clear that silence—even months of it—will not be overlooked, even when the errors predate Ringo.
A few helpful resources: the Oregon State Bar’s Formal Opinion No. 2025-205 on Artificial Intelligence Tools warns that lawyers must verify AI output to avoid violating RPCs 3.3 and 4.1. The ABA also issued Formal Opinion 512 on Generative Artificial Intelligence Tools in 2024. And attorney Damien Charlotin maintains a comprehensive database at damiencharlotin.com/hallucinations/, currently tracking over 1,300 cases mentioning generative AI.
A note on process: I ran this article past Claude when I finished drafting and asked for additional authorities and a second set of eyes. (Yes, that Claude. No, the irony is not lost on me.) The technology itself isn’t the problem—the problems arise when attorneys treat AI output as authoritative without verification or fail to apply their own judgment to legal analysis. Ringo, Couvrette, and Doiban tell us what the Court of Appeals will do when that line is crossed. Green Building Initiative tells us there’s still a path back if you move quickly and honestly. The question for the rest of us is which example we intend to follow.
Want more of this? I write about Oregon’s appellate decisions so you don’t have to read them all yourself. Subscribe to OnRemand.