Last Updated: March 3, 2026

When AI Invents the Law: Fictitious Citations in Canadian Courts

Courts and tribunals across Canada have flagged non-existent case authorities cited in legal proceedings. In many instances, artificial intelligence tools have been identified as the source.

At a glance: Fictitious Citations · Decisions Affected · Courts & Tribunals · AI Warnings
This database identifies specific fictitious citations that were cited as real authority and tracks cases where courts flagged the improper use of AI.
Important Disclaimer:

Unless otherwise specified, every citation listed below was described by the adjudicator as non-existent, fictitious, fabricated, or otherwise not locatable in reported case law. Citations marked with an asterisk (*) indicate that, although the adjudicator found that the cited case could not be located or does not exist, the decision does not attribute the issue to the use of artificial intelligence or make any finding about how the citation was generated.

We do not take a position on how or why (i) any citation came to be included in a filing or (ii) AI was used. We also do not attribute intent, motive, or wrongdoing to any party beyond what is expressly stated in the decision itself.

Geographic Spread

Fictitious Citations and AI Issues Across Canada

Click a province or territory to filter the database below.


Fictitious Citation | Appeared In | Date | Court / Tribunal

What about the AI-hallucinated citations that weren't caught?

This database only includes fictitious citations that adjudicators identified and flagged. For every case caught, others may have gone undetected. The true number of fake cases in Canadian courts that slipped through remains unknown.


Using AI to prepare legal documents isn’t inherently wrong. For many self-represented Canadians, it may be their only realistic and cost-effective path to justice. The problem isn’t that people are using these tools. The problem is that our courts and tribunals don’t yet have adequate safeguards to catch when these tools hallucinate.

Tom Macintosh Zheng, Co-founder of Courtready
Our Solution

CaseCheck by Courtready

Don’t let fake cases become real law. CaseCheck is a Canadian legal citation verification tool that cross-references authorities against a Canadian case law database. We help legal professionals identify AI-hallucinated cases before they reach the courtroom.

Try Our Demo · Let Me Know When It’s Ready
Understanding the Problem

Why Does AI Hallucinate Legal Citations?

Large language models like ChatGPT don’t retrieve information the way a search engine or legal database does. They generate text one word at a time by predicting what comes next. The result: citations that look real but point to cases that never existed.

1. AI predicts words, not facts

Large language models work by predicting the most probable next word in a sequence. They don’t look up cases in a database. When asked for legal authority, the model generates text that looks like a citation: a plausible party name, a realistic court abbreviation, a convincing year. None of it is verified against actual case law.
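For intuition, here is a toy Python sketch. It is not a language model (real models are vastly more sophisticated), but it makes the same point: citation-shaped text can be assembled from plausible parts without ever consulting a database. All names and numbers below are invented for illustration.

```python
import random

# A toy sketch, NOT a real language model: it assembles citation-shaped
# strings from plausible parts, with no lookup against any case law
# database. Party names and docket numbers are invented for illustration.
SURNAMES = ["Smith", "Jones", "Tremblay", "Singh", "Martin"]
COURT_CODES = ["ONCA", "ONSC", "BCSC", "ABKB", "FC"]  # real Canadian court codes

def plausible_citation() -> str:
    parties = f"{random.choice(SURNAMES)} v. {random.choice(SURNAMES)}"
    year = random.randint(2015, 2024)
    docket = random.randint(1, 999)
    # Format-perfect output, but nothing here checks that the case
    # exists -- which is roughly why LLM output can look authoritative
    # while being entirely fabricated.
    return f"{parties}, {year} {random.choice(COURT_CODES)} {docket}"

print(plausible_citation())  # e.g. "Singh v. Martin, 2019 ONCA 412"
```

A real model does this at the level of learned word probabilities rather than hard-coded lists, but the key property is the same: generation without verification.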

2. Legal citations are especially vulnerable

Case citations follow rigid, predictable formats: a party name, a year, a court code, and a number. This structure makes them easy for AI to mimic convincingly. A hallucinated citation like Smith v. Jones, 2019 ONCA 412 looks identical to a real one, and there is no visual tell that it’s fabricated.
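To see how little the format reveals, consider a short Python sketch. The regex below is a simplification of the neutral citation pattern (not the full standard), and it accepts the fabricated example and a real decision equally:

```python
import re

# A simplified pattern for Canadian neutral citations (year, court code,
# decision number). This is an approximation of the standard, shown to
# make one point: format checks pass real and fabricated citations alike.
NEUTRAL_CITATION = re.compile(r"\b(19|20)\d{2}\s+[A-Z]{2,8}\s+\d{1,5}\b")

for candidate in [
    "Smith v. Jones, 2019 ONCA 412",  # the fabricated example above
    "R. v. Jordan, 2016 SCC 27",      # a real Supreme Court of Canada decision
]:
    print(candidate, "->", bool(NEUTRAL_CITATION.search(candidate)))
# Both print True: the format alone cannot separate real authorities
# from hallucinated ones.
```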

3. The model doesn’t know it’s wrong

AI doesn’t distinguish between generating a real citation and a fictitious one. It has no concept of truth or accuracy, only statistical probability. The model produces what looks right with the same confidence whether the case exists or not. That’s what makes these errors so dangerous in legal settings where every authority must be verifiable.
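The practical consequence: the only reliable test is a lookup. The Python sketch below illustrates the idea with a hypothetical in-memory index; KNOWN_CITATIONS and citation_exists are invented stand-ins for a real case law database, not any particular tool’s implementation.

```python
# A minimal verification sketch. KNOWN_CITATIONS stands in for a real
# case law database; the set and the citation_exists helper are
# hypothetical, shown only to illustrate that existence is settled by
# lookup, never by how confident the text looks.
KNOWN_CITATIONS = {
    "2016 SCC 27",  # R. v. Jordan -- a real decision
}

def citation_exists(neutral_citation: str) -> bool:
    # Normalize internal whitespace before the lookup.
    return " ".join(neutral_citation.split()) in KNOWN_CITATIONS

print(citation_exists("2016 SCC 27"))    # True: found in the index
print(citation_exists("2019 ONCA 412"))  # False: nothing to find
```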

This is a simplified explanation. AI hallucinations are an active area of research with no single agreed-upon cause or solution.