Welcome to The Responsible AI Audit™ Brief.
Skills4Good AI’s 3-minute audit of AI use in professional practice.
This Week’s Audit
Elena Vasquez is a licensed Professional Engineer. When she puts her stamp on a document, it is not a formality. It is a declaration that the technical content is correct, complete, and safe to rely on.
She was preparing the materials specification document for a bridge rehabilitation project in Vancouver. It tells the construction team exactly which materials must be used and which technical standards those materials must meet.
Corrosion resistance in bridge steel is not a minor detail. It determines how long the structure can safely carry traffic before the steel begins to degrade. The provincial transportation authority required the governing standard to be cited explicitly.
Elena fed the project details into an AI tool and asked for the applicable technical standards. The tool returned eight references. Each had a document number, a publication year, and a description of its scope. The list looked exactly like what a careful engineer would produce.
She incorporated the citations into the specification and stamped it.
Six weeks later, the plan reviewer sent a correction notice.
“Section 7.4,” the reviewer wrote. “Can you provide the source for the enhanced corrosion resistance requirements?”
Elena opened the standard the AI had cited — ASTM A709/A709M-21 — and read through it twice. There was no Section 7.4. The AI had cited a section that did not exist inside a document that did.
The correction notice arrived six weeks before construction approval. If the error had not been caught at that stage, the bridge would have been built to a corrosion resistance requirement drawn from a section that does not exist in any standard.
The correction itself was manageable. She located the actual requirements in the standard and rewrote the language. But the notice was in the project file permanently.
And her professional regulator had one question: had she verified what the AI cited, or had she just trusted how it looked?
Her license, the credential she had spent years earning, was now part of that answer.
She had confirmed that the standard existed. She had never once opened it to verify its contents. The reference list looked exactly like verified research: every citation included a document number, a year, and a scope description. None of it raised any doubt.
Her professional license was now the subject of a formal inquiry. The professional consequence was hers.
The problem was not that the AI made a technical error. It was that the AI hallucinated a citation: a reference that looked completely real, inside a document that was real, for a section that did not exist.
An AI hallucination is not a technical error.
It’s a citation that looks entirely real until you trace it back to its source.
Verifying every citation before you rely on it is a professional duty.
The 3-Minute AI Audit
This quick AI audit applies the Three-Gate Process from the Responsible AI Audit: Hallucination Detection for Professionals course. Before you rely on your next AI-assisted specification, report, or technical document, ask yourself:
1. Can I locate every section and subsection the AI cited, not just confirm that the document itself exists?
A document can exist while the section the AI cited within it does not. Verification means tracing the citation to its actual location in the source. Confirming the document number is not enough.
2. Did I apply the same review discipline to this AI-generated reference list that I would apply to a junior professional’s first draft?
Before a junior colleague’s research bears your name, it is checked against primary sources. AI-generated citations pose the same risk of error and need the same independent verification, no matter how authoritatively they are formatted.
3. Could I reconstruct every technical claim in this document from original sources if my professional regulator asked?
Your professional stamp certifies that the content is accurate and complete. If a citation you stamped cannot be found in the source, the professional explanation starts with you, not the tool that generated it.
An AI hallucination is not a technical error. It’s a citation that looks entirely real until you trace it back to its source. Verifying every citation before you rely on it is a professional duty.
These three questions serve as your audit against hallucinations in AI-generated citations. The AI will not flag its own fabricated citations. That verification is yours.
The Standard in Action
This is an AI hallucination. It appears in engineering specifications, legal briefs, and clinical records. Wherever professionals rely on AI to generate citations without verifying them, their professional licenses are at risk. The Responsible AI Audit™ methodologies are designed to catch it before it reaches your regulator, your client, or your licensing board.
Responsible AI Audit™ is AI Risk Management. It is the practical “how” that builds on the “what” and “why” of Responsible AI Literacy.
Course 1: Hallucination Detection delivers the Responsible AI Audit™ methodology. It teaches a systematic three-gate verification method for AI outputs before they carry your name. Course 1 is available now.

Hallucination Detection for Professionals Course
See the Course
Over To You
Have a work scenario where you want to learn how to verify or audit an AI output? Reply to this email and tell us about it. We might feature it in a future issue. No names, no companies, no identifying details. Just the AI audit.
If you work with colleagues who use AI in their technical recommendations or design work, forward this to them.
Till the next AI audit,
Josephine
Josephine Yam, JD, LLM, MA Phil (AI Ethics)
CEO & Co-Founder, Skills4Good AI
AI Lawyer | AI Ethicist | TEDx Speaker