PHOTO EDITED BY WE REP STEM.

In the western world, the marketing of artificial intelligence (AI) is overwhelmingly white. Think about it: everything, from AI-themed movies and TV shows to the stock images in news articles and advertisements to the dialects we hear from assistants like Alexa and Siri, heavily features white, able-bodied, cisgender personas.

In a recent statement and paper, researchers from the University of Cambridge’s Leverhulme Centre for the Future of Intelligence (CFI) argue that the erasure of people of colour and individuals with intersecting identities in AI risks the creation of a “racially homogenous” workforce that will build biased algorithms.

They argue that current cultural depictions of AI should be challenged and dismantled because they portray a future in which people who are not cisgender, able-bodied, and/or white have been erased.

“Given that society has, for centuries, promoted the association of intelligence with White Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine it imagines a white machine,” Dr. Kanta Dihal, a lead at CFI’s ‘Decolonising AI’ initiative, said in a statement.

“People trust AI to make decisions. Cultural depictions foster the idea that AI is less fallible than humans. In cases where these systems are racialized as white that could have dangerous consequences for humans that are not.”

VIDEO: BIASED ALGORITHMS CAUSE SIGNIFICANT HARM

Dr. Dihal co-wrote the paper with her colleague Dr. Stephen Cave; it was published this week in the journal Philosophy & Technology.

It examines a range of AI-related fields, including an analysis of how robots with Black identities suffer more online abuse. An independent investigation into search engines, conducted for the paper, found that all non-abstract results for AI had either white features or “were literally the colour white.”

“One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard White middle-class English,” said Dihal. “Ideas of adding Black dialects have been dismissed as too controversial or outside the target market.”

DIVERSIFYING AI TEAMS MAKES ALGORITHMS SAFER

Increasing diversity in the way AI is marketed may help create a less homogeneous workforce, and that isn’t just important from a moral standpoint: research suggests it can also make algorithms safer.

“AI systems are becoming smarter every day … But current AI systems are far from perfect,” write Juan Mateos-Garcia, Director of Innovation Mapping at Nesta, and Joysy John, Director of Education at Nesta.

“They tend to reflect the biases of the data used to train them and to break down when they face unexpected situations. They can be gamed, as we have seen with the controversies surrounding misinformation on social media, violent content posted on YouTube, or the famous case of Tay, the Microsoft chatbot, which was manipulated into making racist and sexist statements within hours.”
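That failure mode, a model inheriting the biases of its training data, can be illustrated with a toy example. The sketch below is purely synthetic: the “hiring” scenario, group labels, and thresholds are invented assumptions, not data from any real system or from the paper. The point is simply that any model fit to historically biased labels will reproduce the gap in those labels.

    import random

    random.seed(0)

    # Synthetic "hiring" data (entirely invented for illustration):
    # each applicant has a skill score and a group label, and the
    # historical hiring decisions were biased against group B.
    def make_applicant():
        group = random.choice(["A", "B"])
        skill = random.random()
        # Biased historical labels: group B needed a higher skill
        # score than group A to be hired.
        threshold = 0.5 if group == "A" else 0.7
        hired = skill > threshold
        return group, skill, hired

    data = [make_applicant() for _ in range(10_000)]

    # The simplest possible "model": per-group hire rates estimated
    # from the historical labels.
    for g in ["A", "B"]:
        outcomes = [hired for group, _, hired in data if group == g]
        print(f"group {g}: learned hire rate = {sum(outcomes) / len(outcomes):.2f}")

    # Prints roughly 0.50 for group A and 0.30 for group B. The gap
    # comes from the labels, not from the applicants' skills, and a
    # more sophisticated model trained on the same data inherits it too.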

Mateos-Garcia and John argue that this risk can be reduced by recruiting cognitively diverse teams. While their research focuses on increasing gender representation, their recommendations, which include policy interventions and better recruitment strategies, could lead to significant changes in the tech space.

IT’S TIME TO DIVERSIFY AI. FILE PHOTO/WE REP STEM.

RACIST ALGORITHMS

Biased algorithms can cause significant harm. One example surfaced in October 2019, when significant racial bias was discovered in a national, U.S.-based algorithm used by health insurers to make decisions for millions of patients.

According to the American Association for the Advancement of Science (AAAS), the tool underestimated “the health needs of Black patients” and determined that Black patients “are healthier than equally sick whites, thus reducing the number of Black patients who are identified as requiring extra care.”
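Analyses of this case published in Science traced the gap to the algorithm’s use of past health-care costs as a stand-in for health needs: because less money had historically been spent on Black patients, equally sick Black patients looked cheaper, and therefore “healthier,” to the model. The sketch below is a synthetic illustration of that proxy effect only; the access figure and all data are invented assumptions, not details of the actual tool.

    import random

    random.seed(1)

    # Synthetic patients (invented for illustration): "sickness" is the
    # true health need, which the model never sees; "cost" is the proxy
    # label it is trained on. Unequal access to care is modelled as a
    # single assumed multiplier.
    patients = []
    for _ in range(10_000):
        group = random.choice(["white", "Black"])
        sickness = random.random()                  # true need (unobserved)
        access = 1.0 if group == "white" else 0.6   # assumed access gap
        cost = sickness * access                    # observed proxy label
        patients.append((group, sickness, cost))

    # A model trained to predict cost will score equally sick patients
    # in the lower-access group as lower-need.
    for g in ["white", "Black"]:
        scores = [cost for group, sickness, cost in patients
                  if group == g and sickness > 0.8]  # equally sick subgroup
        print(f"{g}: mean proxy score among the very sick = "
              f"{sum(scores) / len(scores):.2f}")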

At the time, The Washington Post reported that the issue could be present in other tools used by the health care industry, which collectively manages care for some 200 million people annually. Since the bias was discovered, a team has been working to reduce it.

It’s not just an American issue. Earlier this month, the UK’s Home Office announced it would stop using an algorithm that helped decide visa applications after determining it contained “entrenched racism.”

The analysis found that the algorithm, which had been used “for years to process every visa application to the UK,” was providing “speedy boarding for white people” while automatically denying visas to applicants from some non-white countries.

NOTE FROM THE EDITOR: We used the image of a white woman to create the thumbnail for this piece, and we are still struggling with this editorial decision. We like to use our art as a way to promote positive representation, which is especially important in fields where none exists, like AI. To be transparent, we couldn’t find any (free) stock images featuring people of colour in robot-related searches and went with what we could acquire. If you feel this was the wrong choice, please let us know. We are always open to dialogue about how we can do better.

