Welcome to The Responsible AI Audit™ Brief.
Skills4Good AI’s weekly 3-minute audit of AI use in professional practice.


This Week’s Audit

Daniel, a senior associate at a midsize firm, used AI to draft a client memo on a breach-of-contract matter. The memo cited four recent appellate decisions supporting the client’s position. The analysis was clear. The writing was polished. The citation format looked consistent across all four cases.

The reviewing partner checked the sources before signing off. Three checked out. The fourth did not exist.

The court name was legitimate. The docket number was formatted correctly. The legal reasoning was plausible. But no such case had ever been filed, argued, or decided in any jurisdiction.

“The AI provided it. It looked exactly like the other three,” Daniel said afterward. “Nothing in the AI output flagged it as different.”

Here’s where professionals get stuck.

The AI did not retrieve these citations from a legal database. It generated text that resembled legal research. When one of those predictions was wrong, nothing in the output changed. The confidence stayed the same. The formatting stayed the same. The tone stayed the same. A fabricated citation looks identical to a real one.

Daniel had no structured process for verifying AI-generated citations. No checkpoint sat between the AI output and the partner’s desk. No method existed to distinguish sourced references from generated ones.

The partner caught it. This time.

But “this time” is not a process. And “the AI provided it” is not an answer when a client, a regulator, or a professional standards body asks how you verified your sources.


Fabricated citations are not a technology problem.
They are an AI verification problem. And AI verification is a professional duty.


Skills4Good AI’s Hallucination Detection for Lawyers course has been approved by the State Bar of California for 1.0 MCLE credit (Technology). The 3-minute AI audit below is drawn from the same methodology.

The 3-Minute AI Audit

Before you send your next AI-assisted memo, report, or recommendation, ask yourself:

  1. Can I locate every case, statute, or data point this AI cited in a primary database, independent of the AI that produced it? 
    Hallucinations do not signal themselves. A fabricated citation carries the same formatting and confidence as a verified one.

  2. Did I review this AI-generated research the same way I would review a junior associate’s first draft?
    When AI produces work faster, professionals often verify it faster too. The risk is dropping the AI verification step you would apply to any other source.

  3. Could I explain my reliance on this AI output if a client, a regulator, or a professional standards body asked how I verified its accuracy?
    If your answer depends on “the AI provided it,” you have located the gap. The question is not whether the AI was wrong. The question is whether you had a process to catch it.

These three questions surface what most professionals skip: the distance between what AI presented as fact and what you independently confirmed.

Fabricated citations are not a technology problem. They are an AI verification problem. And AI verification is a professional duty.

The Standard in Action

This is Hallucination. It shows up in law firms, accounting practices, healthcare settings, and anywhere professionals treat AI output as verified research. The Responsible AI Audit™ Standard is designed to catch it before it reaches your client, your board, or your regulator.

We teach this methodology to professionals who want a structured, defensible audit process. The AI Risk Detection Series, Course 1: Hallucination Detection for Lawyers, is where it starts. Approved by the State Bar of California for 1.0 MCLE credit.

Responsible AI Audit™ CPD Courses for Lawyers
See the Course

If you work with lawyers using AI tools, forward this to them.

Next week: Privacy breach in medical practice. What happens when AI surfaces patient data that was never entered into the prompt.

Till next week’s AI audit,

Josephine

Josephine Yam, JD, LLM, MA Phil (AI Ethics)
CEO & Co-Founder, Skills4Good AI
AI Lawyer | AI Ethicist | TEDx Speaker