Welcome to The Responsible AI Audit™ Brief.
Skills4Good AI’s weekly 3-minute audit of AI use in professional practice.


This Week’s Audit

David Okafor, CPA, had three years of audited financials and a deadline.

His client was acquiring a four-dentist practice in Toronto. David needed to determine its value based on projected earnings in the coming years. He provided the AI with the practice’s three most recent years of financial records and asked it to develop a valuation.

The financial model came back clean. Revenue had increased by 8.2 percent yearly. Profit margins were robust. The AI compared the practice to similar healthcare businesses and produced a valuation of $4.3 million.

David reviewed the numbers. They checked out. He sent the report.

Eleven days later, the buyer’s due diligence advisor called.

“The 2023 and 2024 figures,” she said. “Are you using those years as your growth baseline?”

David reviewed the AI’s financial projections. The AI had used the three years of historical data to project eight years of future earnings. However, 2023 and 2024 were unusual years. Dental practices across the country experienced a wave of patients returning after delaying care during the pandemic. That increase was temporary. The AI assumed it was permanent and built it into every future year.

There was a second flaw in the AI’s projections. The AI compared this single-location practice to large, multi-location dental chains. But all four dentists at the practice planned to retire within three years. A small practice about to lose its entire clinical team is not the same as a growing organization adding locations. David had not told the AI to make that distinction. The AI had no reason to ask.

The financial model was logically consistent. Both premises it was based on were wrong.

David’s valuation was $1.1 million too high. His name was on the report. The malpractice exposure was his.

The problem wasn’t that the AI made a calculation mistake. It was that the AI produced a detailed analysis based on faulty premises that David did not review.


A reasoning gap in AI outputs is not a calculation error.
It is an unverified premise in the AI’s reasoning.
Examining the premise before trusting the output is a professional responsibility.


Skills4Good AI’s Reasoning Gap Detection for Professionals course offers the methodology for the 3-minute AI audit below.

The 3-Minute AI Audit

Before sending your next AI-assisted financial or strategic forecast, ask yourself:

1. Did I verify that each input period shows normal operating conditions?

Historical periods with one-time demand events can distort multi-year forecasts when the AI treats them as the norm. If any input year includes a temporary spike or disruption, the AI won’t flag it. That judgment is yours to make.
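The compounding effect is easy to see with a few lines of arithmetic. This sketch uses hypothetical figures (the baselines, growth rate, and discount rate below are illustrative assumptions, not the actual engagement data) to show how a spike-inflated baseline year overstates a discounted multi-year projection:

```python
# Illustrative sketch: a temporary demand spike in the baseline year
# compounds through every projected year of a valuation.
# All numbers are hypothetical, for demonstration only.

def project_earnings(baseline, growth_rate, years):
    """Project annual earnings by compounding growth from a baseline year."""
    return [baseline * (1 + growth_rate) ** t for t in range(1, years + 1)]

def present_value(earnings, discount_rate):
    """Discount each projected year's earnings back to today."""
    return sum(e / (1 + discount_rate) ** t for t, e in enumerate(earnings, start=1))

spike_baseline = 600_000   # baseline year inflated by post-pandemic catch-up visits
normal_baseline = 520_000  # same year with the temporary surge stripped out
growth, discount, horizon = 0.082, 0.12, 8

pv_spike = present_value(project_earnings(spike_baseline, growth, horizon), discount)
pv_normal = present_value(project_earnings(normal_baseline, growth, horizon), discount)

print(f"Valuation from spike-inflated baseline: ${pv_spike:,.0f}")
print(f"Valuation from normalized baseline:     ${pv_normal:,.0f}")
print(f"Overstatement:                          ${pv_spike - pv_normal:,.0f}")
```

Because every projected year compounds from the baseline, the overstatement scales with the size of the spike. Normalizing the baseline year is a one-line correction, but only a human who knows the spike was temporary can make it.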

2. Did I check whether the AI’s comparisons match the situation being analyzed?

AI tools select comparisons based on industry categories. They cannot differentiate between a growing multi-location chain and a single practice where the entire team is retiring. If the comparison does not fit, the valuation will not either. The model does not make that decision. You do.

3. Can I explain my reliance on this output to a malpractice panel if the forecast is challenged?

Professional liability does not transfer to the AI tool. Responsibility stays with the professional who relied on and approved the analysis. David verified that the model ran correctly, but he did not verify whether the premises the model used were appropriate. 

A reasoning gap in AI outputs is not a calculation error. It is an unverified premise in the AI’s reasoning. Examining the premise before trusting AI outputs is your professional responsibility.

These three questions are your audit against reasoning gaps in AI outputs. The AI will not flag its own flawed premises. That verification is yours.

The Standard in Action

This is an AI Reasoning Gap. It occurs in accounting practices, law firms, and engineering reviews wherever professionals assume the AI’s reasoning is correct and rely on it without validating its premises.

The Responsible AI Audit™ Standard is designed to help you detect reasoning gaps in AI outputs before you send them to clients, your board, or your regulator.

Responsible AI Audit™ is AI Risk Management. It is the practical “how” that builds on the “what” and “why” of Responsible AI Literacy.

The AI Risk Detection Series teaches the Responsible AI Audit™ Standard: documented proof of professional competence across all four AI failure modes. Course 1 is available now. The full Series launches in Q2 2026.

Responsible AI Audit™ for Regulated Professionals
See the Course

Over To You 

Have a work scenario where you want to learn how to verify or audit an AI output? Tell us about it. We may feature it in a future issue. No names, no companies, no identifying details. Just the AI audit.

If you work with colleagues who use AI to make strategic or financial forecasts, forward this to them.

Next week: AI Hallucinations in engineering practice. What happens when the standard the AI cited is real, but the section inside it was never written?

Till next week’s AI audit,

Josephine

Josephine Yam, JD, LLM, MA Phil (AI Ethics)
CEO & Co-Founder, Skills4Good AI
AI Lawyer | AI Ethicist | TEDx Speaker