Can Artificial Intelligence be biased? The issues are not just black and white


Artificial intelligence and facial recognition technologies are advancing at a fast pace. A.I. tools and technologies are increasingly used in industrial and national-security settings. U.S. Customs and Border Protection (CBP) recently announced that it is piloting facial recognition biometric technology at a Transportation Security Administration (TSA) checkpoint at John F. Kennedy International Airport (CBP link).

According to the announcement, “As part of this technical demonstration, CBP is partnering with TSA to utilize international travelers’ photographs taken at TSA’s Terminal 7 international checkpoint to compare against travel document photographs.”

These announcements by industry and government agencies to leverage AI and facial recognition technologies highlight how far the tools and techniques have advanced. However, some researchers are beginning to voice concerns that fundamental biases and racial overtones are creeping into the data, techniques and software, skewing the results.

Joy Buolamwini, an African-American researcher at the Massachusetts Institute of Technology (MIT), describes how she experienced facial recognition bias firsthand. A New York Times article recounts: “When she was an undergraduate at the Georgia Institute of Technology, programs would work well on her white friends, she said, but not recognize her face at all. … But a few years later, after joining the M.I.T. Media Lab, she ran into the missing-face problem again. Only when she put on a white mask did the software recognize hers as a face.”

Picture: Joy Buolamwini and Timnit Gebru (via NYT and MIT Tech Review)

When she realized the face recognition software was moving out of the lab and into the mainstream with the biases she had observed, she recalled deciding it was “time to do something.” A.I. software is generally ‘trained’ with large amounts of data. Buolamwini noticed that the systems were trained on a disproportionate amount of data about white men and a minuscule amount of data about black women, making it harder for the software to identify black women. Her short TED Talk about coding bias has been viewed more than 940,000 times, and she founded the Algorithmic Justice League, a project to raise awareness of the issue.
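The mechanism Buolamwini describes, a skewed training set, is something practitioners can check for directly before a model ever ships. The sketch below is not from her work or any particular product; the file names, group labels and counts are hypothetical placeholders. It simply illustrates how one might audit the demographic composition of a face-image training set.

```python
# A minimal, hypothetical sketch of a training-data audit.
# The dataset entries and group labels below are illustrative placeholders only.
from collections import Counter

# Hypothetical list of (image_path, demographic_group) pairs for a face dataset.
training_samples = [
    ("img_0001.jpg", "lighter-skinned male"),
    ("img_0002.jpg", "lighter-skinned male"),
    ("img_0003.jpg", "lighter-skinned female"),
    ("img_0004.jpg", "darker-skinned female"),
    # ... thousands more entries in a real dataset
]

# Count how many training examples each group contributes.
group_counts = Counter(group for _, group in training_samples)
total = sum(group_counts.values())

# Report each group's share of the training data; a heavily skewed split is one
# warning sign that the trained model may underperform on under-represented groups.
for group, count in sorted(group_counts.items(), key=lambda kv: -kv[1]):
    print(f"{group}: {count} images ({count / total:.1%} of training data)")
```

A skewed count alone does not prove a model is biased, but it is the kind of simple, early check that makes the imbalance Buolamwini observed visible before it reaches production.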

 

While Buolamwini is focused on addressing biases in the data sets used in artificial intelligence, another African-American A.I. researcher, Timnit Gebru, has looked around the AI world and is concerned that she sees almost no one who ‘looks like her.’

In a recent interview with MIT Technology Review (link), Gebru talks about how bias gets into AI systems and how diversity can counteract it. Answering a question about ways to counteract bias in these systems, she explains:

“The reason diversity is really important in AI, not just in data sets but also in researchers, is that you need people who just have this social sense of how things are. We are in a diversity crisis for AI. In addition to having technical conversations, conversations about law, conversations about ethics, we need to have conversations about diversity in AI. We need all sorts of diversity in AI. And this needs to be treated as something that’s extremely urgent.”

Gebru goes on to describe her experience working in AI:

“It’s not easy. I love my job. I love the research that I work on. I love the field. I cannot imagine what else I would do in that respect. That being said, it’s very difficult to be a black woman in this field. When I started Black in AI, I started it with a couple of my friends. I had a tiny mailing list before that where I literally would add any black person I saw in this field into the mailing list and be like, ‘Hi, I’m Timnit. I’m black person number two. Hi, black person number one. Let’s be friends.’”

 

Is AI bias a black-and-white issue?

A cursory look at the list of ‘Machine Learning Engineers’ on LinkedIn (link) shows a disproportionate number of Indian, Chinese and other Asian names working for blue-chip employers, including Uber, Google and others among the ‘Top 20 Companies Paying the Highest Salary for A.I. Engineers’.

Interestingly, none of these Brown and Asian researchers and AI experts have been as vocal in highlighting biases in artificial intelligence and facial recognition. All this makes one wonder whether the issue of racial bias is as black-and-white as it is being made out to be. Do Brown and Asian faces run into the same challenges with facial recognition software?

 


About the Author: Mohan K is an Enterprise Architect, tech columnist and blogger.

