Artificial intelligence (AI) has been around for decades; the term was formally coined in the 1950s, when the first research came to fruition, and the technology has been studied and applied ever since. In recent years, however, we’ve seen mass adoption of AI applications across industries. As new developments arise, more organizations are investing resources into honing the business applications of the technology. In fact, global spending on AI technology and robotic process automation is predicted to reach almost $35 billion by 2023, further underscoring the surge in demand.

However, there is rising concern about the various biases in AI systems and their extensive impact on society as a whole. Some level of bias among humans is normal, as these implicit biases are shaped by our unique experiences. But while normal, unconscious bias can be harmful, especially when it goes unchecked, and we are seeing increasing evidence of that happening in AI.

Since AI is the simulation of human intelligence in machines, it’s not surprising to see biases make their way into AI systems, including racial biases. This predisposition against certain races can have harmful consequences for people.

One way racial bias seeps into AI is through data. The data used to train an AI system may reflect human subjectivity and underlying social biases, and some models are simply not trained on enough representative data to make predictions that benefit everyone. On top of this, algorithms themselves can amplify the racial biases in AI systems, especially if the model is not built to flag when it is learning a bias.
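To make this concrete, one common first step is a simple audit of the training data itself. The sketch below, using entirely hypothetical records, compares the rate of positive outcomes recorded for each group; a large gap is a warning sign that a model trained on this data could inherit the disparity. The function names and sample data are illustrative assumptions, not part of any standard library.

```python
# Minimal sketch of auditing a dataset for group-level disparity.
# Each record is a hypothetical (group, label) pair, where label is
# 1 for a positive outcome (e.g. referred for care) and 0 otherwise.
from collections import defaultdict

def selection_rates(records):
    """Return the fraction of positive labels recorded per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in positive-label rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical training data: group "A" is favoured 3 times out of 4,
# group "B" only once out of 4.
data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

print(selection_rates(data))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(data))  # 0.5
```

A gap of 0.5 in a dataset like this would not prove discrimination on its own, but it tells developers exactly where to look before the data is ever fed to a model.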

For example, a study published in Science found that an algorithm used in US hospitals to distribute health care to patients had been discriminating against people of colour. The algorithm was more likely to refer white people for health improvement programs than Black patients who were just as in need of care. This is incredibly alarming, as the algorithm was deployed across the country and used to manage care for at least 200 million people in the US.

AI used in hiring and talent procurement can also fall victim to racial bias. While AI use in recruitment has been noted to streamline the process and even mitigate ageism in the workplace, flawed datasets used to build these systems can exclude many people from candidate pools, instead favouring applicants from predominantly white zip codes and addresses.

Addressing the racial bias in AI requires a massive overhaul of existing systems and processes. For one, more people from historically underrepresented backgrounds need to be involved in developing AI systems, but minority groups remain underrepresented in the STEM industry, particularly those from Black, Indigenous, and Hispanic backgrounds. And for people who identify with one or more of these identities and are also disabled, representation is even lower. To combat this widening racial gap in the STEM workforce, STEM education needs to be more accessible to underprivileged minority groups. Aside from grants and scholarships, accessibility through remote learning can encourage racially marginalized individuals to enroll in STEM courses.

In response, many of today’s universities and colleges now offer fully remote STEM programs in fields such as computer science, mathematics, and even data analytics. Indeed, the online data analytics degrees offered today can be completed anywhere, at any time, levelling the playing field for individuals who may not have the time and resources for traditional education. Such programs are just as competitive, and they provide students with a solid foundation in data visualization, forecasting and predictive modeling, and using operational data with analytical tools, skills widely expected to shape the future of business. And with annual worldwide data predicted to reach 180 trillion gigabytes by 2025, it’s important that future AI and STEM professionals are diverse and understand the nuances in data. This is one of the best ways to lessen AI bias, but it can only be achieved with more inclusive education, which such remote learning programs are helping to address.

In addition, sound methodologies for building models can help create truly objective AI systems. This entails setting crystal-clear objectives, assessing data extensively, selecting the most appropriate algorithms, and refining continuously. By listening to feedback and constantly incorporating new information, AI developers stand a much better chance of creating AI systems that are fair and impartial for everyone.
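As a small illustration of what the continuous-refining step might look like in practice, one approach is to re-measure a model's error rate per demographic group whenever new feedback arrives, and raise a flag if the gap between groups widens past a threshold. The function names, sample data, and tolerance below are hypothetical assumptions for the sketch, not a standard API.

```python
# Hypothetical sketch: recheck a model's error rate per group as
# feedback comes in, and flag a widening fairness gap.

def per_group_error(predictions, labels, groups):
    """Fraction of wrong predictions for each group."""
    errors, counts = {}, {}
    for p, y, g in zip(predictions, labels, groups):
        counts[g] = counts.get(g, 0) + 1
        errors[g] = errors.get(g, 0) + (p != y)
    return {g: errors[g] / counts[g] for g in counts}

def fairness_alert(predictions, labels, groups, tolerance=0.1):
    """True if the error-rate gap between groups exceeds the tolerance."""
    rates = per_group_error(predictions, labels, groups)
    return (max(rates.values()) - min(rates.values())) > tolerance

# Hypothetical feedback batch: the model is perfect for group "A"
# and always wrong for group "B".
preds  = [1, 1, 1, 0]
labels = [1, 1, 0, 1]
groups = ["A", "A", "B", "B"]

print(per_group_error(preds, labels, groups))  # {'A': 0.0, 'B': 1.0}
print(fairness_alert(preds, labels, groups))   # True
```

Running a check like this on every new batch of feedback turns "continuous refining" from a slogan into a concrete, measurable gate on model updates.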

Exclusively written for by Rachel Jill.