By Clare O’Gara
Thu | Jun 11, 2020 | 5:30 AM PDT

In some ways, the timing is fitting.

Recent weeks have seen an overwhelming effort from Black Lives Matter organizers, social justice activists, and allies protesting racism and police brutality.

While their work addresses systemic problems, its focus has been predominantly on the physical world.

But this problem extends beyond the "real" world. Apparently, it lives in the digital sphere as well.

A recent error by Microsoft's artificial intelligence journalist demonstrates this reality.

Microsoft's robot journalist makes a racist error

The first mistake here may have been replacing a human editor with a robot.

Months into the COVID-19 pandemic, Microsoft decided to lay off dozens of the human journalists who curated articles for the company's news provider, MSN.com.

Microsoft replaced the editors with an AI system. About a week after the change, however, the company encountered a problem.

The Guardian covered the incident:

"An early rollout of the software resulted in a story about the singer Jade Thirlwall's personal reflections on racism being illustrated with a picture of her fellow band member Leigh-Anne Pinnock."

Thirlwall and Pinnock, fellow members of the group Little Mix, both identify as women of color.

Thirlwall responded to the article on Instagram:

"@MSN If you're going to copy and paste articles from other accurate media outlets, you might want to make sure you're using an image of the correct mixed race member of the group.

This shit happens to @leighannepinnock and I ALL THE TIME that it's become a running joke. It offends me that you couldn't differentiate the two women of colour out of four members of a group … DO BETTER!"

The Guardian reported on Microsoft's response to the mistake:

"Asked why Microsoft was deploying software that cannot tell mixed-race individuals apart, whether apparent racist bias could seep into deployments of the company's artificial intelligence software by leading corporations, and whether the company would reconsider plans to replace the human editors with robots, a spokesman for the tech company said:

'As soon as we became aware of this issue, we immediately took action to resolve it and have replaced the incorrect image.'"

Racial bias in AI: a tale as old as artificial intelligence

Bias in artificial intelligence is far from new.

A quick Google search reveals a plethora of incidents in which robots significantly disadvantage people of color.

Just recently, an algorithm used in hospitals demonstrated extreme racial bias against black Americans, according to Nature:

"The study, published in Science on 24 October, concluded that the algorithm was less likely to refer black people than white people who were equally sick to programmes that aim to improve care for patients with complex medical needs. Hospitals and insurers use the algorithm and others like it to help manage care for about 200 million people in the United States each year."

These racial disparities also help to explain, among other systemic reasons, the disproportionate impact of COVID-19 on communities of color.

And Wired recently covered a scientific project attempting to address and fix racial bias in AI:

"Now the scientists who helped teach machines to see have removed some of the human prejudice lurking in the data they used during the lessons. The changes can help AI to see things more fairly, they say. But the effort shows that removing bias from AI systems remains difficult, partly because they still rely on humans to train them.

'When you dig deeper, there are a lot of things that need to be considered,' says Olga Russakovsky, an assistant professor at Princeton involved in the effort."

It is also worth acknowledging that these incidents are not the fault of the artificial intelligence systems themselves. A computer is only as good as the data it receives.

But when a machine receives biased data, it learns that bias and compounds it. Rooting out prejudice that deeply ingrained is not unlike confronting the history of racism in America itself.

And remember, the data that AI receives comes from this society: from IT employees, from the internet, and from culture.
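To see how quickly skew in the data becomes skew in the output, consider a deliberately oversimplified sketch in Python. Nothing here resembles Microsoft's actual system; the groups and labels are invented purely for illustration:

    from collections import Counter

    # 90% of the training examples come from group "A", 10% from group "B".
    training_labels = ["A"] * 900 + ["B"] * 100

    # A degenerate "model" that always predicts the most common training label.
    prediction = Counter(training_labels).most_common(1)[0][0]

    # Score it on a balanced test set: 100 members of each group.
    test_set = ["A"] * 100 + ["B"] * 100
    errors = Counter(truth for truth in test_set if truth != prediction)

    print("model always predicts:", prediction)  # -> A
    print("errors by group:", dict(errors))      # -> {'B': 100}

A 90/10 imbalance in the training data becomes a 100 percent error rate for the underrepresented group. The model does not merely reproduce the skew; it amplifies it.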

A paper from the AI Now Institute titled "Discriminating Systems: Gender, Race, and Power in AI" highlights this problem with some key findings:

  • There is a diversity crisis in the AI sector across gender and race.
  • The AI sector needs a profound shift in how it addresses the current diversity crisis.
  • The overwhelming focus on "women in tech" is too narrow and likely to privilege white women over others.
  • Fixing the "pipeline" won't fix AI's diversity problems.
  • The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation.

The research also provides a twofold recommendation for the industry: address racial bias in AI and improve racial diversity in the workplace.
