Three Samsung engineers. One GenAI tool. Countless pieces of confidential info.

In 2023, Samsung employees pasted proprietary source code and internal meeting notes into ChatGPT to troubleshoot bugs and generate summaries. They didn’t realize those prompts could be stored or used to train the model.

The result? Confidential data could appear in other users’ outputs - especially if the model memorized it during training.

This week, we’re talking about prompt leaks - when your GenAI prompt accidentally exposes sensitive information.

This incident wasn’t a breach. It was an “overshare” enabled by a tool that doesn’t have a warning label.

That’s why GenAI prompt hygiene is another new Responsible AI Literacy skill.

Quick Takeaways

  • Prompt leaks happen when you include sensitive business, legal, or personal data in your GenAI prompts.
  • Many consumer GenAI tools store your prompts by default and may use them for training - unless you use a secured or enterprise version.
  • A 5-question checklist can help you prompt more safely and strategically (a simple screening sketch follows this list).
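
There’s no universal safeguard here, but even a lightweight screen can catch obvious oversights before a prompt leaves your machine. Below is a minimal Python sketch of that idea - the patterns and the screen_prompt helper are illustrative assumptions, not a vetted data-loss-prevention tool, so adapt them to your organization’s own data classification rules.

    import re

    # Illustrative patterns only - tune these to your organization's
    # definition of sensitive business, legal, or personal data.
    SENSITIVE_PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "confidentiality marker": re.compile(r"(?i)\b(confidential|proprietary|internal only)\b"),
    }

    def screen_prompt(prompt: str) -> list[str]:
        """Return warnings for sensitive content found in a draft prompt."""
        warnings = []
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(prompt):
                warnings.append(f"Possible {label} found - redact it before sending.")
        return warnings

    # Example: screening a draft prompt before pasting it into a GenAI tool.
    draft = "Summarize this: jane.doe@example.com shared the proprietary yield data."
    for warning in screen_prompt(draft):
        print(warning)

A check like this is a first pass, not a guarantee - it won’t recognize sensitive context the way a human would, which is why the checklist questions still matter.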

Want the Complete Guide + More?

You're only reading the introduction. The complete guide, detailed examples, and implementation steps are available inside our Skills4Good AI Academy. 

Join thousands of professionals in our FREE 7-Day Starter Course and gain instant access to:

  • This complete guide + other Responsible AI resources
  • 7 practical lessons (only 10 minutes a day)
  • Global community of professionals learning how to use AI for Good

No cost. No obligation. Just practical Responsible AI skills you can apply immediately.

Join our Free Responsible AI Starter Course. Apply now!