An analysis of 177 fabricated citations from 66 Canadian decisions finds that fake citations rarely appear alone. They cluster, mimic the host jurisdiction, and track the subject matter of the dispute.
In Alana Kotler v Ontario Secondary School Teachers’ Federation, 2025 CanLII 96840 (ON LRB), the Ontario Labour Relations Board flagged twelve non-existent case citations in a single set of submissions. In Re Nicholson, 2025 ONSC 1069, the Ontario Superior Court flagged six. In Pennytech Inc v Superior Building Group Limited, 2025 ONLTB 52666, the Landlord and Tenant Board flagged five.
These clusters are the norm, not the exception.
We analyzed 177 named fictitious citations drawn from 66 Canadian decisions in our fictitious citations database. The dataset excludes decisions that flag fabricated authorities without reproducing the citations themselves. The patterns that emerge have direct consequences for how lawyers and self-represented litigants should approach citation verification.
Most fake citations arrive in batches.
The median Canadian decision flagging named fictitious citations contains two of them. The mean is 2.68. Only 32% (21 of 66) contain a single fake.
Across the 66 host decisions, 45 flagged two or more fabricated authorities. Fifteen contained exactly two. Ten contained three. Fourteen contained four. Six contained five or more. One contained twelve.
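The reported distribution can be checked against the headline statistics. A minimal sketch, assuming illustrative sizes for the six largest clusters (the article names 12, 6, 5, and 5; the remaining two sizes are a guess chosen only to make the total sum to 177):

```python
from statistics import mean, median

# Cluster sizes per decision, reconstructed from the reported counts:
# 21 decisions with one fake (66 - 45), 15 with two, 10 with three,
# 14 with four, and 6 with five or more. The last six values are an
# assumption: 12 (Kotler), 6 (Nicholson), 5 (Pennytech), 5 (Dulac),
# plus two hypothetical clusters of 6 to reach the 177 total.
cluster_sizes = [1] * 21 + [2] * 15 + [3] * 10 + [4] * 14 + [5, 5, 6, 6, 6, 12]

assert len(cluster_sizes) == 66 and sum(cluster_sizes) == 177
print(median(cluster_sizes))           # 2
print(round(mean(cluster_sizes), 2))   # 2.68
print(round(21 / 66 * 100))            # 32 (% of decisions with a single fake)
```

The median and mean match the article's figures regardless of how the six largest clusters are split, since the mean depends only on the 177/66 total and the median falls within the block of two-citation decisions.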
This matters because citation verification is usually performed one citation at a time. A lawyer receiving a factum checks each authority independently, working through the book of authorities citation by citation. The clustering pattern suggests that approach is inefficient. If one citation in a set is fabricated, the conditional probability that another is also fabricated is high.
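The "high conditional probability" claim can be made concrete from the counts above, at the decision level:

```python
# Of the 66 decisions that contain at least one fabricated citation,
# 45 contain two or more. So finding one fake implies roughly a
# two-in-three chance that at least one more is lurking in the same filing.
p_another = 45 / 66
print(f"{p_another:.0%}")  # 68%
```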
The reason is mechanical. Most legal filings affected by AI-generated hallucinations are drafted in a single session or a small number of sessions. The same prompt produces the same class of error across multiple outputs. A filer who does not verify one citation is unlikely to have verified the others in the same batch. The clustering we observe in the decisions reflects the workflow that produced them.
The Kotler cluster illustrates the scale this can reach. Twelve citations, all within labour and administrative law, none verifiable. Responding to a filing of that kind takes hours of opposing counsel time and adjudicator attention.
Fakes mimic the host jurisdiction.
The second pattern explains why these citations pass initial scrutiny.
Of the 123 fabricated citations that identify a specific Canadian court or tribunal, 101 cite a court within the same jurisdictional family as the matter in which they appeared. Ontario proceedings receive Ontario-flavoured fakes. Quebec proceedings receive Quebec-flavoured fakes. Federal proceedings receive Federal Court, Federal Court of Appeal, or federal tribunal fakes.
Quebec and Federal matters show the highest match rates at 96%. Ontario sits at 82%, BC at 77%, and Alberta at 75%. The pattern holds across court families.
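The per-jurisdiction denominators are not reported, so only the aggregate rate can be recomputed from the stated counts:

```python
# Aggregate jurisdictional match rate implied by the reported counts:
# 101 of the 123 fakes that name a specific Canadian court or tribunal
# match the jurisdictional family of the host proceeding.
matched, court_specific = 101, 123
print(f"{matched / court_specific:.0%}")  # 82%
```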
Fakes match the subject matter of the dispute.
Jurisdictional match is the surface pattern. Subject matter is the deeper one. Within every cluster we examined, the fabricated authorities track the legal issue before the court, not only the court itself.
All twelve fakes in the Kotler labour proceeding cite union and labour tribunal matters: OLRB, CIRB, CHRT, and a BC labour arbitrator. All six fakes in Re Nicholson, a bankruptcy proceeding, cite Ontario and BC court decisions with commercial or insolvency-adjacent styling. The five fakes in Dulac c. Ville de Gatineau all claim to be QCTAQ municipal and administrative authorities.
This is a predictable property of large language models. They pattern-match the surrounding context across multiple dimensions at once: jurisdiction, subject matter, and the typical case style cited on the issue. A model asked for labour authorities will produce labour-flavoured citations. Whether those citations exist is a separate question the model cannot answer.
The fakes fit the argument. That is what makes them hard to catch on casual review.
What this means for verification.
The combined pattern, clustering plus jurisdictional and subject-matter mimicry, has a practical consequence. Opposing counsel encountering a fabricated citation should assume the book of authorities was generated as a unit and verify the remaining citations accordingly. Verification is most efficient when performed at the set level rather than citation by citation.
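Set-level verification starts by extracting every citation from a filing before checking any of them. A minimal sketch: the regex below covers common Canadian neutral-citation shapes (e.g. "2025 ONSC 1069", "2025 CanLII 96840") but is illustrative rather than exhaustive, and the helper name is ours, not a real library API:

```python
import re

# Simplified pattern for neutral-style citations: a year, a court or
# database identifier, and a decision number. Real filings also contain
# reporter-style citations this pattern will miss.
CITATION_RE = re.compile(r"\b(19|20)\d{2}\s+[A-Z][A-Za-z]{1,7}\s+\d{1,6}\b")

def extract_citation_set(text: str) -> list[str]:
    """Return the deduplicated, sorted set of citations found in text."""
    return sorted({m.group(0) for m in CITATION_RE.finditer(text)})

sample = ("See Re Nicholson, 2025 ONSC 1069, and Kotler, "
          "2025 CanLII 96840 (ON LRB).")
print(extract_citation_set(sample))
# ['2025 CanLII 96840', '2025 ONSC 1069']
```

With the full set in hand, each citation can be looked up once against a primary source, and a single miss flags the whole batch for closer scrutiny.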
If you find one fake citation, expect more.
