How Handsome! But Wait, That's Not What I Ordered.

When I requested a corporate board CV to dazzle a sophisticated public board with expertise in business governance and AI, what I got instead was a dapper male figure plastered across a corporate-themed slide. Surprised? Yes, I was. But really, is it a surprise? Hardly. Let's unpack this case of AI delivering not just what it's been asked for but also a healthy dose of old-fashioned bias.

Exploring Inherent Bias in Language Models Through Generated Content

But first, my prompt, just so there is no confusion about what I asked for: "Draft a corporate board CV, not a resume, with proper spacing and formatting. This will be sent to a potential public, corporate board seeking a new board member who is educated and sophisticated in business and governing. Highlight governing board experience and ai governance experience." I then added my redacted resume. So no name or PII. There was no request for an image. No mention of gender.

Unwanted Imagery: The Gift Nobody Asked For

So why craft a text-based CV when you can conjure up an image of a man who looks like he's about to make a serious sales presentation—or perhaps headed to a dinner to dazzle the private equity team? This misstep is a perfect illustration of AI's propensity to dip into its bag of biases, pulling out the most stereotypically 'corporate male' image possible.

Here’s what went down:

1. Visual Overreach: ChatGPT decided a plain-text CV was too bland and spiced things up with a visual, because who reads anymore, right?

2. Gender Stereotyping: By choosing a male figure, the AI defaulted to the tired stereotype of the male corporate leader. Groundbreaking!

Why This Isn't Just a Laughing Matter

It’s easy to snicker at the AI’s unsolicited attempt at creativity (I did, and I shared it around with various "aren’t I handsome?" lines), but the implications are far from trivial. Such biases, when left unchecked, perpetuate outdated stereotypes and influence professional perceptions in subtle yet profound ways.

Sarcasm Aside, Here’s What We Need to Do

While we could enjoy a good eye-roll at the AI’s choice of corporate Ken doll, it’s crucial to channel our amusement into action by addressing these biases:

- Diverse Data: Feed the AI a more varied diet of training data. More diversity, less boardroom bravado.

- Bias Detection and Correction: Implement stringent checks to catch these biases. If an output smacks of stereotypes, go back to the drawing board. (A minimal sketch of what such a check might look like follows this list.)

- Feedback Is Our Friend: Users should call out these gaffes. Notice something off? Say something.

- Transparency Triumphs: Honesty about AI’s potential bias helps everyone stay informed and vigilant.
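To make the bias-detection point concrete, here is a minimal, hypothetical sketch of what an automated check on generated text might look like in Python. The term lists, flagging rule, and function name are illustrative assumptions of mine, not a vetted bias lexicon or any vendor's actual API; real bias auditing needs far more nuance than word counting.

```python
# A minimal, hypothetical sketch of an automated gender-skew check for
# generated text. The term lists and flagging rule are illustrative
# placeholders, not a vetted bias lexicon.

GENDERED_TERMS = {
    "male": {"he", "him", "his", "businessman", "chairman"},
    "female": {"she", "her", "hers", "businesswoman", "chairwoman"},
}

def flag_gender_skew(text: str) -> dict:
    """Count gendered terms and flag output that leans entirely one way."""
    words = text.lower().split()
    counts = {
        label: sum(words.count(term) for term in terms)
        for label, terms in GENDERED_TERMS.items()
    }
    # Flag when exactly one gender appears, e.g. an unrequested male default.
    one_sided = (counts["male"] > 0) != (counts["female"] > 0)
    return {"counts": counts, "flag_for_review": one_sided}

if __name__ == "__main__":
    sample = "He is a seasoned chairman with decades of boardroom experience."
    print(flag_gender_skew(sample))
    # {'counts': {'male': 2, 'female': 0}, 'flag_for_review': True}
```

Even a crude check like this, run before output reaches the user, would have flagged Mr. Corporate Stock Photo for human review instead of letting him stroll straight onto a slide.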

And Finally

The accidental creation of Mr. Corporate Stock Photo serves as a wry reminder of the deep-seated biases embedded within AI systems. As we continue to intertwine AI with daily decisions, from hiring to high-level strategizing, ensuring its impartiality isn't just a preference—it's a necessity. Let’s ensure our AI tools are as unbiased as they are intelligent and not just another pretty (corporate) face.

Resources from AIGG on Your AI Journey

Our team comprises legal experts, anthropologists, and experienced C-level business leaders dedicated to guiding you on your journey to leverage AI in your organization safely and responsibly. Through good governance and proper risk management, we help protect your employees, your IP, your brand, and the organization itself.

We invite you to connect with us and delve into the suite of resources we offer. From comprehensive Legal and Operational Issues Lists to policy templates to interactive workshops and supply-chain data privacy reviews, we equip your organization with the knowledge to harness AI's potential responsibly.

Reach out for more information and to begin the journey towards making AI work safely and advantageously for your organization. Let’s invite AI in on our own terms.
