
Is AI doomed to be racist and sexist?

4 min read
Aya Burstein Ben-Aharon
  •  Feb 4, 2019

Once upon a time, humanity was using bias to survive—to assess potential dangers and keep us safe from the big wide world and various predators. Millennia later, have we started building machines that will become those predators?

Human thinking is characterized by bias, and it’s a built-in byproduct of human-designed AI. These incorporated biases can echo and amplify problematic social perceptions, such as lack of social diversity and perceived superiority between certain social groups. I decided to take a look at the biases found in both human psychology and technology; and the disturbing intersection between the two.

“When it comes to AI, data is queen.”

But what exactly is bias?

According to the Cambridge English Dictionary, bias is “The action of supporting or opposing a particular person or thing in an unfair way, because of allowing personal opinions to influence your judgment.”

Since the dawn of time, bias has helped us survive. Think about it this way: if you saw an abandoned house in the middle of the night, with a weird-looking figure staring at you from inside, you’d probably get out of there as quickly as possible and not stop until you reached a safe place.

When dealing with humans, there will always be some bias. We make snap judgments to keep ourselves safe (like in the example above). It’s how we’re programmed. There are two kinds of biases: conscious and unconscious.

According to Sandy Sparks from the University of Warwick, unconscious bias can be understood as “a bias that we are unaware of, and which happens outside of our control. It is a bias that happens automatically and is triggered by our brain making quick judgments and assessments of people and situations, influenced by our background, cultural environment, and personal experiences.”

According to the Perception Institute, conscious bias is “the attitudes and beliefs we have about a person or group on a conscious level. When people feel threatened, they are more likely to draw group boundaries to distinguish themselves from others.”

So, what about this AI business?

“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

Stanford University

Look at this image.

Oprah Winfrey, Flickr

What do you see? Do you recognize what or who is in this picture? I assume you got it pretty fast. It’s Oprah Winfrey.

How many nanoseconds did it take you to recognize that it is an image of a human being? That it is a woman? That she has dark skin? That it’s Oprah? I assume all that information got processed pretty much instantly.

While humans can recognize another human face really quickly, machines are still much slower than the human brain in making the calculations necessary for accurate facial recognition.

Joy Buolamwini and the ‘coded gaze’

How does AI handle people who aren’t white men? Let’s talk about researcher Joy Buolamwini. Buolamwini is a Ghanaian-American computer scientist and digital activist based at the MIT Media Lab.

When she was working on her bachelor’s degree in computer science at the Georgia Institute of Technology, she noticed that AI systems recognized her lighter-skinned friends but did not recognize her own (dark-skinned) face at all. At that stage, she assumed that someone else would take care of this ‘bug’ and solve the problem. For her own undergraduate research, she simply used her light-skinned friend’s face.

But when Buolamwini started her master’s degree at MIT Media Lab, again she encountered the same problem.

For the system to detect her face at all, she needed to wear a white mask, just to be recognized as human.

Joy Buolamwini demonstrates “The Coded Gaze: Unmasked”

Hiding her face behind a mask was not the answer: she needed to change things.

Buolamwini called the problem “the coded gaze” and launched the Algorithmic Justice League to highlight algorithmic bias, provide a space for people to voice concerns and experiences with coded bias, and to develop practices for accountability during the design, development, and deployment of coded systems.

In her research, Buolamwini stressed that “there is a clear distinction between face detection (Is there a face?) and face classification (What kind of face?). If no face is detected (as happened to [her] unless [she] put on a white mask), no further work can be done using that face. No classification tasks can happen if there’s no face detected.”
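
To make that distinction concrete, here is a minimal sketch of a detect-then-classify pipeline. It uses OpenCV’s bundled Haar-cascade detector purely as a stand-in, not any of the systems Buolamwini audited, and `classify_face` is a hypothetical placeholder for whatever analysis would come next.

```python
# A minimal detect-then-classify sketch. The Haar cascade is only a stand-in
# detector; classify_face is a placeholder for any downstream analysis.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def classify_face(face_crop):
    # Placeholder for a downstream task: age, gender, or identity classification.
    return "classification would run here"

def analyze(image_path):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if len(faces) == 0:
        # Detection failed: no classification can ever happen for this face.
        return "no face detected"

    x, y, w, h = faces[0]
    return classify_face(image[y:y + h, x:x + w])
```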

Buolamwini was curious about what kind of results she’d get if she ran her face across different facial analysis demos.

She chose to focus her research on three companies: IBM, Microsoft, and Face++. All three companies’ systems performed better on male faces than on female faces, and more accurately on lighter-skinned individuals than on darker-skinned ones. All three performed worst of all on dark-skinned women.

When it comes to AI, data is queen. The algorithms can recognize only what the data scientists train them to recognize; so, if there are many more white men than black women in the dataset used to teach the algorithm, it will be worse at identifying black women.
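To see how a skewed dataset plays out, here’s a toy sketch in Python. The data, features, and group labels are entirely synthetic (nothing like the curated benchmarks used in real audits such as Gender Shades), but it shows the mechanic: a classifier trained mostly on one group scores noticeably worse on the group it rarely saw.

```python
# A toy demonstration: a model trained on a skewed sample is measurably worse
# on the group it rarely saw. Data, features, and group labels are all synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's "faces" are 2-D feature vectors centred at a different offset.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Skewed training data: 1,000 examples from group A, only 30 from group B.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(30, shift=3.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets expose the gap a single aggregate accuracy number would hide.
for name, shift in [("group A", 0.0), ("group B", 3.0)]:
    X_test, y_test = make_group(500, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 2))
```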

It’s all about the data

Another interesting example is Google search. When I searched Google Images for ‘grandma’, Google’s algorithm returned results with very specific images of grandmas.

One could argue that I received accurate results. The problem is that they were too narrow, far from inclusive or exhaustive; they leave out how a massive and meaningful part of humanity would picture a grandma.

The grandma that Google’s algorithm is showing here is very specific—and very white.

When the intersection between humanity and tech goes south

Let’s take a look at what actually happened when Microsoft decided to jump headfirst into bringing AI to the public domain.

Picture the scene: it’s 2016 and everyone on the team is super excited. A cool new product is about to be launched: Tay, the teenage chatbot.

It started innocently enough. Tay was launched on Twitter, and the team had a couple of beers to celebrate. In keeping with her designed age and character, she was built to speak in slang rather than formal language.

The plan was for Tay to learn from humans, so she came with a “repeat after me” feature. As one can easily imagine, many people in the Twitter community took advantage of it.

It took less than 24 hours for her to become a racist, feminist-hating, Hitler-loving troll…

Microsoft didn’t wait for a bigger disaster; they quickly put her into a long, deep sleep. The developers also deleted all of her horrifying tweets. All that is left for humanity are the screenshots.

But what could have been done differently?

Let’s zoom out for a moment and look at how data scientists train chatbots.

Tay was a chatbot programmed specifically to learn and relearn from humans. Bots get better and more accurate the more training they get and the more questions and answers data scientists feed them. So if a chatbot receives more racist input, it becomes more racist. It is a product of the data it receives.
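
As a thought experiment, here is a deliberately naive sketch of that failure mode: a bot that stores whatever users tell it and parrots it back. The class, its seed phrase, and the example messages are made up; real chatbots are far more sophisticated, but the dynamic is the same: whatever the crowd feeds it becomes its vocabulary.

```python
# A deliberately naive "learning" chatbot: it stores what users tell it and
# parrots it back later. Whatever the crowd feeds it becomes its vocabulary.
import random

class ParrotBot:
    def __init__(self):
        self.memory = ["hi there!"]          # seed phrase so the bot can always reply

    def chat(self, user_message):
        self.memory.append(user_message)     # every incoming message becomes "training data"
        return random.choice(self.memory)    # replies are drawn from that same data

bot = ParrotBot()
bot.chat("you are great")
bot.chat("<some awful slur>")                # nothing stops toxic input from entering memory
print(bot.chat("hello"))                     # ...and it may be echoed back to the next user
```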

Data scientists can blacklist specific words or sentences so that they are blocked from ever becoming part of the bot’s vocabulary and experience.
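
A minimal sketch of what such a blocklist might look like, extending the toy bot above; the flagged terms and function names here are placeholders, not Microsoft’s actual sensitive-term list.

```python
# A minimal blocklist filter: flagged messages never reach the bot's memory.
# The terms listed here are placeholders, not Microsoft's actual sensitive-term list.
BLOCKLIST = {"slur_1", "slur_2", "hitler"}

def is_allowed(message: str) -> bool:
    return not any(word in BLOCKLIST for word in message.lower().split())

def learn(bot_memory: list, message: str) -> None:
    if is_allowed(message):
        bot_memory.append(message)
    # Otherwise the message is dropped (or routed to a human for review).
```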

For Tay, Microsoft did mark some terms as sensitive, but not the extremely racist, chauvinistic, or otherwise problematic words and phrases that Tay started repeating.

“Many people think that technology is neutral, that it is objective, that it’s just a tool, that it’s based on math or physics, and has no values, that it’s value free. That’s the dominant narrative around technology and engineering in particular.”

Dr. Safiya Noble

Microsoft’s decision to release a bot that learns directly from other people was, at best, shortsighted. On top of that, Tay was released into the Twittersphere, a proven, highly toxic environment for women. (Amnesty International conducted a 16-month investigation into the toxicity of Twitter towards women.)

The sheer volume of trolling and snark that exists in the Twitter community in general also proved to be problematic. Microsoft should have taken all of that into consideration.

It’s not all doom and gloom, though: at the same time, Microsoft was also working on another chatbot, XiaoIce, for the Chinese market, which was a great success, and it currently has a newer chatbot called Zo.

Zo was also designed to be a young teenage woman. I tried to talk to her and challenge her on subjects such as feminism and Hitler, and she refused to open those discussions with me. I assume it’s because Microsoft has learned from its painful history and added a few more terms and topics to its blacklist.

Zo won’t take your bait

There’s also a thing called biased language

According to Richard Nordquist, the term ‘biased language’ refers to words and phrases that are considered prejudiced, offensive, and hurtful. Biased language includes expressions that demean or exclude people because of age, sex, race, ethnicity, social class, or physical or mental traits.

When looking at biased language, it is important to remember that some languages are gendered and will have two versions of the same word, one for masculine and one for feminine.

Hebrew is a gendered language, whereas English is not. When I translated the English word ‘nurse’ into Hebrew on Google Translate, it delivered only the feminine form, with no alternatives. The same happened when I translated ‘doctor’, but this time it provided only the masculine form. Our human biases are being translated into the machines we build.

Google’s algorithm eats thousands of texts for breakfast, but we don’t actually know which texts it’s eating. Data analysts teach the Google Translate algorithm by feeding it a huge amount of text, and from that text the algorithm learns how to translate different words and sentences.

The more often ‘nurse’ refers to a woman in those texts, the more likely the algorithm is to translate it that way automatically. The same goes for ‘doctor’ being assumed to be a man. The algorithm itself isn’t chauvinistic; we humans are. When the algorithm learns from a biased dataset, it will translate words and sentences in a biased way. Biased data in leads to biased data out.
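
Here’s a toy illustration of that dynamic: a pretend ‘translator’ that simply picks whichever gendered Hebrew form it has seen most often in its corpus. The counts and transliterations are made up for the example, but the majority-wins logic is the heart of the problem.

```python
# A pretend "translator" that picks whichever gendered Hebrew form appears more
# often in its (made-up) corpus counts. Majority wins, every single time.
from collections import Counter

corpus_counts = {
    "nurse":  Counter({"akhot (feminine)": 940, "akh (masculine)": 60}),
    "doctor": Counter({"rofe (masculine)": 870, "rofa (feminine)": 130}),
}

def translate(word):
    # most_common(1) returns the single most frequent form seen in the corpus.
    return corpus_counts[word].most_common(1)[0][0]

print(translate("nurse"))   # always the feminine form
print(translate("doctor"))  # always the masculine form
```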

Dr. Safiya Noble, author of “Algorithms of Oppression: How Search Engines Reinforce Racism” agrees.

“Many people think that technology is neutral, that it is objective, that it’s just a tool, that it’s based on math or physics, and has no values, that it’s value free. That’s the dominant narrative around technology and engineering in particular… Computer language is, in fact, a language. And as we know in the humanities, language is subjective. Language can be interpreted in a myriad of ways.”

Can this algorithmic bias change?

Machines can only work from the data we choose to feed them.

Algorithms learn a new language, and in fact any new information, by digesting a huge amount of data. Looking back over the last few hundred years, we’ll see that the voices ruling the worlds of literature, art, philosophy, and science were very male, very white, and very Western. Until recently, it was straight white males who ruled the robotics and tech industry too.

“Big Data processes codify the past. They do not invent the future.”

Cathy O’Neil

It makes sense that they never noticed the inherent bias that Buolamwini discovered. They had never thought (or had the need) to look.

Will AI dominate our lives?

As long as we are aware of the problem, we can find solutions. Data analysts can train the algorithm on a consciously unbiased training set, or keep running test sets against it. If the results come back biased, update the training set with less biased data and run the tests again until the bias has narrowed, as in the sketch below.
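
Here is a rough, self-contained simulation of that audit loop. The ‘accuracy’ is a stand-in formula (more examples of a group means higher accuracy for that group), not a real model, but it shows the mechanic: measure per-group performance, add data for the worst-served group, and repeat until the gap closes.

```python
# A toy simulation of the audit loop: score each group, add training data for the
# worst-served group, and repeat until the accuracy gap narrows. The "accuracy"
# is a stand-in learning curve (more examples -> higher accuracy), not a real model.
MAX_GAP = 0.02   # acceptable gap between the best- and worst-served groups

def simulated_accuracy(n_examples):
    # Stand-in learning curve: rises with data and saturates just under 0.98.
    return 0.98 * n_examples / (n_examples + 200)

def debias(train_counts):
    scores = {g: simulated_accuracy(n) for g, n in train_counts.items()}
    while max(scores.values()) - min(scores.values()) > MAX_GAP:
        worst = min(scores, key=scores.get)
        train_counts[worst] += 100           # collect or resample more data for that group
        scores = {g: simulated_accuracy(n) for g, n in train_counts.items()}
    return train_counts, scores

print(debias({"lighter-skinned men": 5000, "darker-skinned women": 300}))
```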

As product designers and technology-makers, we must aspire to work in diverse teams, test our algorithms on diverse crowds, and define appropriate metrics: in other words, diversity by design.

Cathy O’Neil, the author of “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”, calms the storm by putting it like this:

“Big Data processes codify the past. They do not invent the future.”

Humankind has a big challenge ahead of itself. We won’t (and can’t) go back to ‘humans only’ mode, but we must think of solutions and tools that will help us build more ethical machines.
