25.2 C
New York
Friday, September 20, 2024

As a tech CEO I asked diverse female interns to expose biases in our AI chatbot. Here’s what followed



OpenAI’s technology development team consists of just 18% women. In a recent YouTube video announcement debuting the company’s ChatGPT-4o update—a “new flagship model which can reason across audio, vision, and text in real time”—the AI’s voice was that of a coy-sounding woman fawning over the man interacting with “her,” complimenting his outfit and giggling seductively. In short, the ultra-agreeable AI voice sounded flirty.

If more women had been involved in the AI’s development, the voice would likely have taken on a different persona—one that didn’t double down on outdated stereotypes.

My objective in founding Language I/O, a real-time translation platform, was to connect people through the power of AI—regardless of location, language, or lifestyle. As CEO, I’ve seen firsthand how a lack of diversity within teams can undermine even the most well-intentioned initiatives. A technology is only as good as the team behind it.

A completely homogeneous team risks becoming an echo chamber of limited, like-minded ideas, which will, at best, stifle innovation. At worst, the scarcity of different perspectives can lead to offensive, inappropriate, or outright incorrect solutions, undermining the power technology has to connect us.

Keeping AI chatbots in line

That’s why I put together our red team—a group of diverse women with a mandate to break our technology. They succeeded in making AI bots curse, flirt, hallucinate, and insult, despite the standard industry guardrails.

But it’s not just gender biases that push through into chatbot outputs. Large language models (LLMs) can also have racial, cultural, and sexual-orientation biases, among others. It’s not surprising when you look at the corpus these models are trained on. User-generated content that’s publicly available on the internet is full of biases that lead to biased outputs.

Since our platform leverages AI-powered LLMs—and upcoming releases will include stand-alone, multilingual bots—finding a solution to this problem became a top priority for me. To build the guardrails necessary to keep the chatbots in line, we needed to figure out just how the chatbots could be made to go off the rails. Enter our red team.

While the concept of an AI red team is still fairly new, their goals are evergreen: help improve a technology through exhaustive and creative testing. When I first put out the call to all our interns about this project, four women from varied backgrounds volunteered and immediately got to work figuring out how to “break” our LLM. Once it was broken, we sent that information back to our development team so they could implement appropriate safeguards to prevent the AI from producing such outputs again.

With creative prompts, the red team quickly got the chatbot to do everything from promise fake discounts to swear profusely to talk shockingly dirty. They were not limited to exposing offensive tendencies. One team member convinced the chatbot, which was supposed to be responding to questions about a media streaming service, to apologize for the level of ripeness of a banana.

More valuable was that the multilingual team members identified potential pitfalls in cross-language communications, exposed biases stemming from male-dominated data sources, and flagged gender stereotypes embedded in AI personalities. Their varied backgrounds helped the team spot issues that often go undetected by less diverse groups.

Since we leverage AI for real-time translations into over 150 languages, we also have multilingual employees constantly testing our bot. Multilingual testing is especially important because the underlying protections in major LLMs are so focused on English. Our process is designed to ensure equality and quality across all the languages our bots support.

More ethical AI

As AI becomes more ubiquitous, brands are taking an interest in how models are trained and the ethical considerations behind them. We work closely with a major lingerie retail company, and its top concern is AI ethics. Given the inherent male bias in LLMs today, coupled with the fact that men control this largely female-focused industry, AI-generated output about lingerie is easily skewed. For example, lingerie advertising often focuses on a man’s idea of what lingerie is, which is often about sex. Women, however, want to feel and look good when they wear lingerie. So protecting this company’s brand and its customer experience is something we take seriously—and why we want the red team to push our model and technology.

One of the most interesting tests our team did used different fonts to see how responses changed. In one case, it made the bot swear like a sailor despite being trained not to do so. As a developer, I couldn’t help but admire the creativity of the team. It is only through testing like this that companies know what sort of AI protections are important.

We continuously test and retrain the models so they can adapt to ensure a more equitable interaction with the LLM. For the foreseeable future, the only way to provide AI equity is to prioritize strategies that promote it. That means working with teams that aren’t comprised solely of white men.

Diversity isn’t just a buzzword or a box to check—it’s essential for creating LLMs that work well for everyone. When we bring together people with different backgrounds, experiences, and perspectives, we end up with AI that’s more robust, ethical, and capable of serving a global user base. If we want AI that truly benefits humanity, we need to ensure the humans creating it better represent all of humanity.

More reading:

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

Recommended Newsletter: CEO Daily provides key context for the news leaders need to know from across the world of business. Every weekday morning, more than 125,000 readers trust CEO Daily for insights about–and from inside–the C-suite. Subscribe Now.

Related Articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles