If you don’t check for fairness, GenAI won’t either.


In early 2024, Google’s generative AI tool, Gemini, made headlines - not for what it left out, but for what it added.

A user asked Gemini to generate an image of a 1943 German soldier (the prompt actually contained the misspelling “Solidier”). The AI returned images of people of color wearing Nazi-era German uniforms - depictions that, historically, would have been extremely rare.

The backlash was immediate. Critics accused Gemini of “rewriting history” by overcorrecting for bias. Defenders argued it was trying to counter historic exclusion and widen representation.

But the controversy revealed a deeper truth: 

Fairness in GenAI isn’t about ignoring the past - or mindlessly mirroring it. It’s about making deliberate choices to build a more just future.

Quick Takeaways

  • Bias in GenAI is subtle - and fairness is judged by outcomes, not by AI companies’ intentions (see the sketch after this list).
  • Fairness is not one-size-fits-all - it must respect historical and cultural context.
  • Critical human judgment is essential - GenAI can’t police itself. 
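To make “judged by outcomes” concrete, here is a minimal, hypothetical Python sketch of an outcome audit: run the same prompt many times and measure how groups are actually represented in what comes back. The functions generate_image and classify_demographic are placeholders for whatever GenAI system and labeling method you use - assumptions for illustration, not a real API.

    from collections import Counter

    NUM_SAMPLES = 200  # more samples give a more stable estimate

    def audit_representation(generate_image, classify_demographic, prompt):
        """Outcome audit: run one prompt many times and report group shares."""
        counts = Counter()
        for _ in range(NUM_SAMPLES):
            image = generate_image(prompt)            # hypothetical GenAI call
            counts[classify_demographic(image)] += 1  # hypothetical group labeler
        total = sum(counts.values())
        # Each group's share of the outputs; a large skew is a signal to
        # investigate, not a verdict - context decides what "fair" means here.
        return {group: n / total for group, n in counts.items()}

Even a rough audit like this shifts the fairness question from what the model intended to what it actually produced - which is exactly where critical human judgment has to come in.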

Want the Complete Guide + More?

You're only reading the introduction. The complete guide, detailed examples, and implementation steps are available inside our Skills4Good AI Academy. 

Join thousands of professionals in our FREE 7-Day Starter Course and gain instant access to:

  • This complete guide + other Responsible AI resources
  • 7 practical lessons (only 10 minutes a day)
  • Global community of professionals learning how to use AI for Good

No cost. No obligation. Just practical Responsible AI skills you can apply immediately. Join our Free Responsible AI Starter Course. Apply now!