If you don’t check for fairness, GenAI won’t either.
In early 2024, Google’s generative AI tool, Gemini, made headlines - not for what it left out, but for what it added.
A user asked Gemini to generate an image of a 1943 German soldier - the prompt itself contained the misspelling “Solidier” - and the AI returned images of people of color wearing Nazi-era German uniforms, a depiction that was, historically, extremely rare.
The backlash was immediate. Critics accused Gemini of “rewriting history” by overcorrecting for bias. Defenders argued it was trying to challenge historical exclusion and widen representation.
But the controversy revealed a deeper truth:
Fairness in GenAI isn’t about ignoring the past - or mindlessly mirroring it. It’s about making deliberate choices to build a more just future.
Quick Takeaways
- Bias in GenAI is subtle - and fairness is judged by outcomes, not by AI companies’ intentions.
- Fairness is not one-size-fits-all - it must respect historical and cultural context.
- Critical human judgment is essential - GenAI can’t police itself.
Why Fairness and Non-Discrimination Matter in GenAI
Fairness and non-discrimination are the guardrails keeping AI aligned with our ethical values and fundamental human rights.
Together, these two principles form an essential pillar of Responsible AI. Here's the heart of it:
1. The Fairness Principle
Fairness means treating people equitably - not giving unfair advantages or disadvantages based on who someone is. But because GenAI learns from massive human data (reflecting historical inequalities), it can replicate existing patterns of human bias and discrimination.
Fairness asks:
- Are we making sure no one is pushed 10 steps behind - or 10 steps ahead - just because of their identity?
- Are we giving everyone a genuine, equitable chance - not just mindlessly applying the unfair rules and practices baked into society’s traditions and structures?
Fairness doesn’t always mean treating everyone the same. If some groups have faced barriers - like limited access to education, leadership, or financial opportunities - then fairness can mean providing extra support to help those groups overcome the barriers and close those opportunity gaps.
It’s not about giving anyone an unfair edge. It’s about leveling a playing field that was never fair.
Without this understanding, GenAI risks replicating and amplifying existing inequalities, making it even harder for marginalized groups to catch up with those who don’t face any barriers.
2. The Non-Discrimination Principle
Non-discrimination means actively preventing GenAI from harming or excluding people based on race, gender, age, disability, religion, or socioeconomic status. It’s not enough to assume outcomes are “fair enough” - we must intentionally design for inclusion.
Non-Discrimination asks:
- Are we making sure GenAI doesn’t exclude or harm people because of who they are?
- Are we challenging “good enough” outputs that might overlook real-world inequalities?
Unchecked GenAI risks making entire groups invisible. Or worse, it risks reinforcing harmful biases that deny people access to healthcare, education, jobs, and basic dignity.
Non-discrimination safeguards individuals and communities who might otherwise be excluded or marginalized.
How These Principles Work Together
- Non-discrimination ensures everyone can enter the race.
- Fairness gives those historically left behind a head start in the race - so they can truly compete on equal ground.
Together, they make GenAI a tool for expanding opportunity - not repeating historical injustice.
Why It Matters
Fairness and non-discrimination aren't optional - they’re essential to protect our fundamental human rights in the AI era.
If GenAI violates fairness and non-discrimination principles, the harm isn’t abstract - it shows up fast in:
1. Missed Opportunities
- When GenAI overlooks certain groups, people miss out on jobs, healthcare, education, and financial services - perpetuating inequality.
- Why it matters: Everyone deserves a fair shot at pursuing a meaningful life. Missed opportunities don’t just hurt individuals - they weaken innovation, reduce diversity of ideas, and widen economic inequality.
2. Reinforced Inequalities
- When GenAI repeats outdated stereotypes, it reinforces unfair views about what people can be, achieve, or deserve.
- Why it matters: Human dignity requires everyone to be seen, heard, and valued. Harmful stereotyping makes entire communities invisible in AI outputs - fueling marginalization.
3. Erosion of Public Trust
- When people view AI outputs as biased, they lose faith in AI and the organizations using it.
- Why it matters: GenAI can’t deliver its benefits at scale without trust. When trust collapses, innovation stalls - and bias triggers public backlash, lawsuits, and lost credibility, destroying the potential for positive AI innovation.
The Gemini controversy wasn’t just about German soldiers being wrongly depicted. It was a glimpse of a bigger danger: GenAI rewriting facts, reshaping narratives - and leaving truth behind.
That’s why Fairness and Non-Discrimination aren’t just ethical AI ideals - they’re essential defenses against GenAI hardcoding new forms of bias into the future.
Fairness First Checklist: 3 Steps to Spot Bias in GenAI
Run this quick fairness check before you use or share any GenAI output:
Step 1: Who is missing in this output?
Why it matters: Fair GenAI should reflect the full spectrum of humanity - not just dominant or majority groups. If GenAI leaves out whole groups, it isn’t being fair - it’s repeating old patterns of who gets seen and who gets sidelined.
Step 2: Could this output reinforce stereotypes?
Why it matters: GenAI can subtly amplify old biases, normalizing unfair patterns of discrimination and sneaking them into new outputs - unless sharp humans intentionally check them first.
Step 3: Would this output feel fair if it described me or someone I care about?
Why it matters: If it feels unfair when it’s personal, it’s a signal to pause and rethink. Empathy reveals hidden biases that AI algorithms can easily miss.
If any answer gives you pause - don’t just accept the GenAI output. Stop. Question it. Revise it.
That’s Responsible AI leadership in action. And leadership starts with one fairness-first decision at a time.
Quick Start: Build Your “Fairness First” Habit
- Share this article with a colleague who uses GenAI.
- Review the Fairness First: 3-Step Checklist together and start team discussions.
- Adopt one shared practice in your organization's AI use policy.
Over To You
Have you ever spotted bias in a GenAI output you almost missed? Contact us and share your story - we’re building a global playbook for Responsible AI.
Share the Love
Found this issue valuable? Share it with a friend who wants to learn how to use AI ethically and responsibly. Share this article or send them this link to subscribe: https://skills4good.ai/newsletter/
Till next time, stay curious and committed to AI 4 Good!
Josephine and the Skills4Good AI Team
P.S. Want to stay ahead in AI?
Here’s how we can help you:
1. Fast-Track Membership: Essentials Made Easy
Short on time? Our Responsible AI Fast-Track Membership gives you 30 essential lessons - designed for busy professionals who want to master the fundamentals, fast.
Enroll Now: https://skills4good.ai/responsible-ai-fast-track-membership/
2. Professional Membership: Build Full Responsible AI Fluency
Go beyond the essentials. Our Professional Membership gives you access to our full Responsible AI curriculum - 130+ lessons to develop deep fluency, leadership skills, and strategic application.
Join Now: https://skills4good.ai/responsible-ai-professional-membership/