Twitter officials have apologized after a social media experiment revealed racial bias in the company’s image cropping algorithm. The discussion began on the platform over the weekend, when Twitter user @bascule posted several images featuring U.S. Republican Senator Mitch McConnell, who is white, and former U.S. President Barack Obama, who is Black. Twitter’s algorithm consistently cropped the images to feature McConnell, regardless of where each man was positioned in the photograph:
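The probe users ran is simple to reproduce in outline: stack two portraits in either order and see whose face the cropper keeps. A minimal sketch of that position-swap test follows; the `choose_face` function here is a dummy stand-in invented for illustration, not Twitter's model.

```python
# Sketch of the position-swap probe: crop the same pair of faces in both
# vertical orderings. A fair cropper should not consistently keep one
# person across both orders. `choose_face` is a placeholder chooser that
# mimics the bias users observed by keying on a brightness score.

def choose_face(stacked):
    # Placeholder: keep the face with the higher "brightness" score.
    return max(stacked, key=lambda face: face["brightness"])["name"]

def swap_test(face_a, face_b):
    """Run the chooser on both orderings and return the set of names kept.

    A single-element set means the choice ignored position entirely:
    evidence the chooser keys on the faces themselves, not placement."""
    return {choose_face([face_a, face_b]), choose_face([face_b, face_a])}

a = {"name": "A", "brightness": 0.9}
b = {"name": "B", "brightness": 0.4}
print(swap_test(a, b))  # the same face is kept regardless of order
```

Running many such swapped pairs, as users did over the weekend, turns a single anecdote into a consistent pattern.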

Inverting the colours had a different result.

While some experiments saw the algorithm prefer a smiling Black man over a non-smiling white man, other experiments featuring several smiling and non-smiling faces saw it consistently select a more masculine-looking face, or the person with the lightest skin tone:

If everyone in the images is the same race, the algorithm shows a preference for men over women.

The bias isn’t exclusive to humans: when served images of golden retrievers and black Labradors, the algorithm chose the lighter-coloured dog.


Twitter crops its images to prevent them from taking up too much real estate on the feed, and to allow multiple images to show up in a tweet.

It uses a proprietary algorithm to focus on what it deems the most interesting parts of the picture, attempting to ensure faces and text remain centred.
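Twitter has described its approach as saliency-based: score each region of the image for visual interest, then centre the crop on the highest-scoring point. A minimal sketch of that idea, assuming a precomputed saliency map (in production this comes from a trained neural network, not a hand-built grid):

```python
# Minimal sketch of saliency-based cropping: centre a fixed-size crop
# window on the most salient pixel, clamped so it stays in-frame.
# The saliency map is a toy stand-in for a learned model's output.

def crop_around_max_saliency(saliency, crop_h, crop_w):
    """Return (top, left) of a crop_h x crop_w window centred on the
    most salient pixel, clamped to the image bounds."""
    rows, cols = len(saliency), len(saliency[0])
    # Locate the highest-scoring pixel.
    best_r, best_c = max(
        ((r, c) for r in range(rows) for c in range(cols)),
        key=lambda rc: saliency[rc[0]][rc[1]],
    )
    # Centre the window on that point, then clamp it inside the image.
    top = min(max(best_r - crop_h // 2, 0), rows - crop_h)
    left = min(max(best_c - crop_w // 2, 0), cols - crop_w)
    return top, left

# Toy 4x8 saliency map with a hotspot near the right edge (e.g. a face).
smap = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 9, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
print(crop_around_max_saliency(smap, 2, 4))  # window pulled toward the hotspot
```

The bias question then reduces to what the saliency model scores highly: if the model systematically assigns higher saliency to lighter faces, the crop inherits that preference.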

In a statement, Liz Kelley, a member of the Twitter communications team, said the company tested for bias and found “no evidence” of gender or racial discrimination before deploying the algorithm, “but it’s clear that we’ve got more analysis to do. We’ll open source our work so others can review and replicate.”

Twitter’s chief technology officer Parag Agrawal echoed Kelley’s statement, tweeting the algorithm “needs continuous improvement.”


In the Western world, the marketing of artificial intelligence (AI) is overwhelmingly white. Everything from AI-themed movies, to the stock images in news articles and advertisements, to the voices of assistants like Alexa and Siri, to the shows on TV heavily features white, able-bodied, cisgender personas.

In August, researchers from the University of Cambridge Leverhulme Centre for the Future of Intelligence (CFI) published a paper arguing that the erasure of people of colour and individuals with intersecting identities in AI risks creating a “racially homogenous” workforce that will build biased algorithms.



And biased algorithms can cause significant, sometimes life-altering, harm.

One example occurred in October 2019, when significant racial bias was discovered in a national, U.S.-based algorithm used by health insurers to make decisions for millions of patients.

According to the American Association for the Advancement of Science (AAAS), the tool underestimated “the health needs of Black patients” and determined that Black patients “are healthier than equally sick whites, thus reducing the number of Black patients who are identified as requiring extra care.”

At the time, The Washington Post reported the issue could be present in other tools used by the health care industry, which collectively manage the care of roughly 200 million people annually. Since the bias was discovered, a team has been working to reduce it.

It’s not just an American issue. In August, the UK’s Home Office announced it would stop using an algorithm used to help decide visa applications after determining it contained “entrenched racism.” 

The analysis found the algorithm, which had been used “for years to process every visa application to the UK,” was providing “speedy boarding for white people” while automatically denying visas to applicants from some non-white countries.