Welcome to The Responsible AI Audit™ Brief.
Skills4Good AI’s weekly 3-minute audit of AI use in professional practice.


This Week’s Audit

Dr. Sarah Chen, a family physician at a group practice, used her clinic’s AI documentation assistant to draft a referral letter. Her patient, Marcus Torres, needed a cardiology consultation for shortness of breath on exertion and mild hypertension.

Her prompt was specific. “Draft a referral for Marcus Torres, 54, for shortness of breath on exertion and mild hypertension. Include his recent vitals, echo results, and relevant cardiac medications.”

The AI-generated letter was clear. It explained the reason for the referral and listed Marcus’s current medications. It also included a paragraph mentioning his history of alcohol use disorder and the psychiatric medication he was currently taking.

Sarah had not included either in her prompt. Both were in Marcus’s electronic health record.

The AI had access to the full patient chart through the clinic’s integrated documentation system.

“I didn’t ask for any of that,” she told herself. “I was focused on his heart symptoms.”

The referral letter went to the cardiologist’s office via secure fax. Several staff members at the receiving practice now have it in their system. Marcus had not disclosed his psychiatric medications or substance use history to the cardiologist, and he had not consented to that information being included with this referral.

Here’s where professionals get stuck.

The AI assembled everything it could access and produced a letter that looked thorough and complete. It had no way to distinguish what Marcus had consented to share with the cardiologist from everything else in his chart. Thoroughness was the feature. Consent was not.

Sarah had no structured process for auditing what the AI included versus what her patient had consented to share. The letter went out as drafted.


Privacy breaches are not a technology problem.
They are an unverified consent problem.
And verifying consent is your professional duty.


Skills4Good AI’s Privacy Breach Detection for Professionals CPD course offers the methodology for the 3-minute AI audit below.

The 3-Minute AI Audit

Before you send your next AI-drafted letter, summary, or clinical document, ask yourself:

1. Does every piece of personal information in this AI-generated document have independent patient consent for disclosure?

The AI includes the personal information it can access. Access is not the same as consent. A prior note, a medication entry, or a record in the patient’s chart may not carry the patient’s consent to share it with this recipient.

The AI cannot make that distinction. You must.

2. Did I review this AI-generated document the same way I would review a letter written from scratch?

When AI creates a polished, complete-looking document, the review tends to focus on tone and format rather than whether each disclosure carries the patient’s consent. The AI doesn’t know your patient’s privacy preferences. You do.

That review process should not change because the drafter was an AI.

3. Could I explain each disclosure in this AI-generated document if the patient asked why that information was shared with another party?

Each disclosure must have a valid reason: the data was necessary, and the patient consented to it reaching this recipient. If your only response is “the AI included it,” then that item was never checked for consent.

Your patient’s consent determines what information accompanies this referral. The AI’s access to their record does not.

These three questions expose the privacy gap between what the AI assembled from a patient’s chart and what the patient consented to share with this recipient.

Privacy breaches are not a technology problem. They are an unverified consent problem. And verifying consent is a professional duty.

The Standard in Action

This is a Privacy Breach. It occurs in medical practices, legal offices, and anywhere professionals treat AI-generated documents containing personal data as complete, ready-to-send documents.

The Responsible AI Audit™ Standard is designed to detect it before it reaches your patient’s next provider, their employer, or their insurer.

We teach this methodology to professionals who want a structured, defensible AI audit process.

The AI Risk Detection Series for Professionals covers all four failure modes. Course 1 is available now. The full Series launches in Q2 2026.

Responsible AI Audit™ CPD Courses for Professionals
See the Course

If your colleagues use AI in their work, send this to them.

Next week: Bias in hiring decisions. What happens when the AI’s shortlist has already been sorted before a human ever reviews a candidate’s name?

Till next week’s AI audit, 

Josephine

Josephine Yam, JD, LLM, MA Phil (AI Ethics)
CEO & Co-Founder, Skills4Good AI
AI Lawyer | AI Ethicist | TEDx Speaker