At the Institute of Corporate Directors NL Chapter webinar, "Bridging the GenAI Governance Risk Gap," I presented a practical framework for responsible AI governance. The most pressing issue? A Responsible AI literacy gap that endangers boards. Whether you serve on a for-profit or nonprofit board, or advise boards, closing this gap is crucial.

The burning question I addressed at the webinar, the one I hear from boards constantly: "How do we know if management is telling us the truth about AI risks?"

The harder question: if the board can't verify what management reports, how can it provide meaningful oversight?

The $10 Million Question Nobody Asked

Sarah chairs an audit committee. She is smart, experienced, and diligent. The Q2 report from management stated: "AI is performing exceptionally. No compliance issues."

Sarah inquired about bias, privacy, and transparency. Management had responses.

Three months later, the company's AI denied loans to 2,300 qualified applicants, disproportionately women and minorities. A class-action lawsuit followed. Sarah's and her audit committee colleagues' reputations were at stake.

Sarah wondered: What should we have asked that we didn't know to ask?

Here's the Thing About Responsible AI Governance

You can't verify what you can't understand.

Sarah's board did everything "right." Asked about AI risks. Reviewed AI policies. But they couldn't tell the difference between just checking boxes and real AI oversight.

  • Checking boxes:
    "We have an AI ethics policy."
  • Actual oversight:
    "Show us the data. What happened when the AI tool failed?"

This gap—between what boards think they're doing and what they can verify—is where the Responsible AI literacy gap lives.

What board members tell me: "I don't know what good answers sound like versus excuses dressed up in technical language."

You don't have to become a data scientist. What you need is curiosity to ask better questions and critical thinking to recognize when answers don't make sense.

The 5-Question Assessment for Your Board  

The Human Skill: Curiosity

Answer these honestly. Curiosity here means being willing to look beneath surface assurances.

  1. Does the entire board have enough technical fluency to understand the AI tools management uses?

    This is the AI governance risk of lacking AI Technical Fluency.

    Why it matters: Without board-level fluency, directors can’t challenge assumptions, recognize red flags, or fulfill their duty of care. Oversight becomes mere rubber-stamping.

  2. Does the board regularly oversee how management governs personal data?

    This is the AI governance risk related to AI’s impacts on Data Privacy.

    Why it matters: If the board doesn’t review data governance, privacy failures go unnoticed and unreported. This exposes the organization to fines, breaches, and loss of trust the board should have foreseen.

  3. Does the board regularly discuss the ethics risks of AI tools?

    This is the AI governance risk related to AI Ethics.

    Why it matters: When the board fails to examine AI ethics risks, harmful or discriminatory outcomes can occur unchecked, damaging trust and inviting regulatory scrutiny that the board is expected to anticipate.

  4. Does the board receive regular reports on AI tools' human rights risks?

    This is the AI governance risk associated with AI’s impacts on Human Rights.

    Why it matters: Without regular reporting on human-rights impacts, directors lack visibility to prevent harm, especially to vulnerable groups. It raises legal risks and threatens the organization’s social license to operate.

  5. Does the board hold management accountable for responsible AI practices?

    This is the AI governance risk of Accountability.

    Why it matters: Without clear executive ownership and a regular reporting schedule, the board cannot hold management accountable. Early warnings are missed, leading to oversight failure.

If your board answers "No" or "Sometimes" to more than two questions, it indicates a Responsible AI literacy gap between what you believe you're overseeing and what you can actually verify.

What Good Answers Actually Sound Like

The Human Skill: Critical Thinking

Your value: recognizing when an answer sounds good but tells you nothing.

When you ask about testing on diverse customers, management's answers tend to fall into one of two categories:

  • Reassuring but Empty:
    "Yes, we test for AI bias."
  • Specific and Verifiable:
    "We test on stratified samples. Last quarter, we found underperformance for Hispanic applicants aged 18-25, so we retrained with 40% more data."

The first makes you feel better. The second gives you specifics, timelines, accountability, and something to verify.

You don't need to know how algorithms work. You need to recognize when someone is telling you what they did versus making you feel like they did something.

What Sarah Actually Did

The Human Skill: Adaptability

After the lawsuit, Sarah didn't hire consultants. She changed how her board learned.

She started quarterly "translation sessions": 30-minute gatherings where technical staff explain one AI concept in simple terms. Not to turn directors into engineers, but to build the understanding needed to ask better questions.

One session covered "model decay": how AI systems degrade over time if they aren't monitored and periodically retrained. Three weeks later, a director asked, "When did you last retrain the credit model?" Management paused. No answer. They had never retrained it. The board had surfaced a major gap that management itself had missed.

The transformation wasn't about learning to code. It was about learning to ask: "Show us the data. Who's checking the checkers?"

Accountability questions are rooted in curiosity and critical thinking, not machine learning expertise.

AI Skills + Human Skills = Effective Responsible AI Oversight

The board's job isn't to understand how AI works—it's to understand when management doesn't have answers.

That requires both technical literacy to follow the conversation and human judgment to know when someone is avoiding your question.

Get Your CPD While Closing the Responsible AI Literacy Gap

Your annual CPD requirement is coming up. You can take another course about AI tools whose details you'll struggle to retain. Or you can build practical human skills: asking the right questions, recognizing substantive answers, and guiding boards toward oversight that protects stakeholders.

One makes you feel informed. The other makes you effective. These 5 questions are your starting point. The deeper work of curiosity, critical thinking, and adaptability is what Skills4Good AI’s Responsible AI Literacy courses teach.

Our Responsible AI Essentials Course is CPD-eligible for professionals. It combines technical literacy with the human skills that make that literacy useful.

Effective AI Governance isn't about mastering the technology. It's about mastering the questions that reveal the truth.

Explore CPD-eligible Responsible AI Essentials

Building a community where curiosity meets accountability starts with conversations like this one. Forward this to someone who serves on or advises boards—they'll thank you.

Join the Skills4Good AI Academy where Responsible AI Literacy = AI Skills + Human Skills™

Because that's what transforms learners into leaders.

Josephine Yam
CEO & Co-Founder, Skills4Good AI
AI Lawyer | AI Ethicist | TEDx Speaker
Creator of the Responsible AI Literacy Framework