Are Canadian Courts Taking AI-Hallucinated Citations Seriously?

When someone uses an AI tool to research the law, the tool sometimes invents case citations that do not exist. Lawyers call them “fictitious citations” or “hallucinated case law.” Courts call them a growing problem.

Over the past year, we have tracked every known instance of a Canadian court or tribunal identifying a non-existent case authority, or other AI misuse, in a filing. That research is published as a searchable database and updated weekly. But tracking the problem is only half the picture. The other half is what happens next.

We built a sanctions database to find out. It records the consequences that followed each fictitious citation discovery. After de-duplicating related decisions within the same proceeding, the dataset now contains 147 unique decisions across 45 courts and tribunals. Here are the results.

The problem is accelerating. The response is not keeping up.

Eight decisions in all of 2024. Eighty-nine in 2025. Fifty in the first three months of 2026 alone.

Of the 147 decisions, 72 (49%) resulted in a warning only. Thirty (20%) imposed serious sanctions. The remaining 45 (31%) noted the fictitious citations but imposed no consequence at all.
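The outcome shares above are simple to verify. A minimal sketch, using only the counts reported in this article (the category labels are ours):

```python
# Outcome counts from the de-duplicated sanctions dataset (n = 147),
# as reported in this article.
outcomes = {"warning": 72, "serious": 30, "none": 45}

total = sum(outcomes.values())
assert total == 147

# Share of each outcome, rounded to the nearest whole percent.
shares = {k: round(100 * v / total) for k, v in outcomes.items()}
print(shares)  # {'warning': 49, 'serious': 20, 'none': 31}
```

The rounded shares match the figures in the text: warnings 49%, serious sanctions 20%, no consequence 31%.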

Figure 1. Fictitious citation decisions by quarter, Q1 2024 to Q1 2026, by outcome: serious (30), warning (72), none (45). De-duplicated by proceeding.

The pattern holds across every quarter. Warnings dominate. Even in Q1 2026, when total volume hit 50 decisions, warnings still accounted for 44% of outcomes.

This raises a basic question. If nearly half of all responses amount to “don’t do that again,” is the system treating fictitious citations seriously enough?

There are reasons for caution on both sides. Many of these cases involve self-represented litigants who may not understand what went wrong. A warning may be proportionate. But warnings create no deterrent record. A litigant who receives a warning in one proceeding faces no formal consequence if they do it again in another.

The serious cases are getting more serious.

The 30 decisions with serious sanctions tell a different story. Early cases mostly involved modest cost awards of $200 or $500. Recent cases have gone much further.

In Reddy v Saroya, 2026 ABCA 20, the Alberta Court of Appeal ordered $17,550 in costs against counsel personally. In Ko v Li, 2025 ONSC 6785, Justice Fred Myers referred counsel to the Law Society of Ontario and initiated contempt of court proceedings. In Gu v Fogler Rubinoff, 2026 ONSC 466, the court reduced the costs a self-represented party would otherwise have received because their submissions contained fictitious authorities.

The range of serious sanctions now includes cost awards (from $100 to $17,550), costs denied or reduced against the filing party, cost awards against counsel personally, motion records struck from the file, enhanced cost scales, referrals to law societies, and contempt proceedings.

Courts are finding tools in the existing procedural toolkit. They just do not agree on which tool to reach for.

Self-represented litigants file most fictitious citations. Lawyers face the harshest penalties.

Self-represented litigants account for 123 of the 147 decisions (84%). That number is not surprising: they are more likely to rely on AI tools for legal research and less likely to recognize a fabricated citation when they see one.

What stands out is the sanctions gap.

Figure 2. Sanction outcomes (serious, warning, none) by litigant representation type. Percentages show share within each group.

Among the 24 decisions involving counsel, 11 (46%) resulted in serious sanctions. Among the 123 involving self-represented litigants, only 19 (15%) did.
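The gap is easiest to see as a rate comparison. A short sketch using the counts above (counsel vs. self-represented are this article's categories):

```python
# Serious-sanction counts by representation type, from this article.
counsel = {"n": 24, "serious": 11}
self_rep = {"n": 123, "serious": 19}

# Serious-sanction rate within each group.
rate_counsel = counsel["serious"] / counsel["n"]
rate_self_rep = self_rep["serious"] / self_rep["n"]

print(f"counsel: {rate_counsel:.0%}, self-represented: {rate_self_rep:.0%}")
# counsel: 46%, self-represented: 15%
```

Counsel are roughly three times as likely as self-represented litigants to face a serious sanction for the same conduct.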

The numbers make sense on their own terms. Courts hold lawyers to a higher professional standard. A lawyer who files fabricated authorities has breached a duty the court can enforce directly through costs, law society referrals, and contempt. A self-represented litigant who does the same thing may receive sympathy rather than sanction.

But the gap creates a structural blind spot. The vast majority of fictitious citations reach the courtroom through self-represented filings. If 85% of those filings produce nothing more than a warning or no consequence at all, the signal to future filers is weak. The system’s toughest responses target the smallest share of the problem.

Where this is heading.

The current trajectory is unsustainable. Three things would help.

  1. Law societies need clear guidance on professional obligations around AI-generated legal research. The Law Society of Ontario’s Futures Committee has engaged on this issue, and formal rules may be implemented in due course.
  2. Courts would benefit from a shared framework for proportionate sanctions. Consequences should not depend entirely on which tribunal a litigant happens to appear before.
  3. Upstream verification can reduce the burden on courts entirely. Tools like CaseCheck allow litigants and lawyers to check whether their cited authorities actually exist before filing. Prevention is simpler than sanction.

The data is clear. The problem is growing faster than the response. Warnings alone will not solve it.