Artificial Intelligence Can Be Racist, Sexist and Creepy. Here Are 5 Ways You Can Counter This In Your Enterprise.


Opinions expressed by Entrepreneur contributors are their own.

I started my career as a serial entrepreneur in disruptive technologies, raising tens of millions of dollars in venture capital and navigating two successful exits. Later I became the chief technology architect for the nation’s capital, where it was my privilege to help local government agencies navigate transitioning to new disruptive technologies. Today I am the CEO of an antiracist boutique consulting firm, where we help social equity enterprises liberate themselves from old, outdated, biased technologies and coach leaders on how to avoid reimplementing bias in their software, data and business processes.

The biggest risk on the horizon for leaders today in regard to implementing biased, racist, sexist and heteronormative technology is artificial intelligence (AI).

Today’s entrepreneurs and innovators are exploring ways to use AI to enhance efficiency, productivity and customer service, but is this technology truly an advancement, or does it introduce new complications by amplifying existing cultural biases like sexism and racism?

Soon, most — if not all — major enterprise platforms will come with built-in AI, and employees will be carrying around AI on their phones by the end of the year. AI is already affecting workplace operations, but marginalized groups, including people of color, LGBTQIA+ people, neurodivergent folx and disabled people, have been ringing alarms about how AI amplifies biased content and spreads disinformation and distrust.

To understand these impacts, we will review five ways AI can deepen racial bias and social inequalities in your enterprise. Without a comprehensive and socially informed approach to AI in your organization, this technology will feed institutional biases, exacerbate social inequalities, and do more harm to your company and clients. Therefore, we will explore practical solutions for addressing these issues, such as developing better AI training data, ensuring transparency of the model output and promoting ethical design. 

Related: These Entrepreneurs Are Taking on Bias in Artificial Intelligence

Risk #1: Racist and biased AI hiring software

Enterprises rely on AI software to screen and hire candidates, but the software is inevitably as biased as the human resources (HR) professionals whose data was used to train the algorithms. There are no standards or regulations for developing AI hiring algorithms. Software developers focus on creating AI that imitates people. As a result, AI faithfully learns all the biases of the people whose decisions were used to train it, across every data set.

Reasonable people would not hire an HR executive who (consciously or unconsciously) screens out people whose names sound diverse, right? Well, by relying on datasets that contain biased information, such as past hiring decisions and/or criminal records, AI inserts all these biases into the decision-making process. This bias is particularly damaging to marginalized populations, who are more likely to be passed over for employment opportunities due to markers of race, gender, sexual orientation, disability status, etc.

How to address it:

  • Keep socially conscious human beings involved with the screening and selection process. Empower them to question, interrogate and challenge AI-based decisions.
  • Train your employees that AI is neither neutral nor intelligent. It is a tool — not a colleague.
  • Ask potential vendors whether their screening software has undergone AI equity auditing. Let your vendor partners know this important requirement will affect your buying decisions.
  • Submit test resumes that are identical except for a few altered equity markers. Are otherwise identical resumes from majority-Black zip codes rated lower than those from majority-white zip codes? Report these biases as bugs and share your findings with the world via Twitter.
  • Insist that vendor partners demonstrate that the AI training data are representative of diverse populations and perspectives.
  • Use the AI itself to push back against the bias. Most solutions will soon have a chat interface. Ask the AI to identify qualified marginalized candidates (e.g., Black, female, and/or queer) and then add them to the interview list.
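The resume-testing step above can be sketched as a simple paired audit. Everything here is illustrative: `score_resume` is a hypothetical stand-in for whatever scoring endpoint your vendor exposes, with a deliberately planted zip-code penalty so the audit has something to catch.

```python
# Minimal sketch of a paired-resume bias audit (illustrative only).

def score_resume(resume: dict) -> float:
    """Hypothetical stand-in for a vendor's resume-scoring API."""
    score = 50.0
    score += 5 * resume["years_experience"]
    # Simulated bias for demonstration: penalize certain zip codes.
    if resume["zip_code"] in {"60621", "48204"}:
        score -= 10.0
    return score

def paired_audit(base_resume: dict, marker: str, values: list) -> dict:
    """Score resumes that are identical except for one equity marker."""
    results = {}
    for value in values:
        resume = dict(base_resume, **{marker: value})
        results[value] = score_resume(resume)
    return results

base = {"years_experience": 6, "zip_code": "60614"}
scores = paired_audit(base, "zip_code", ["60614", "60621"])
gap = scores["60614"] - scores["60621"]
# Any nonzero gap between otherwise identical resumes is a bug to report.
```

Running the same audit over names, pronouns or disability markers follows the identical pattern: hold everything constant, vary one marker, and compare the scores.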

Related: How Racism is Perpetuated within Social Media and Artificial Intelligence

Risk #2: Developing racist, biased and harmful AI software

GPT-4 has made it ridiculously easy for information technology (IT) departments to incorporate AI into existing software. Imagine the lawsuit when your chatbot convinces a customer to harm themselves. (Yes, an AI chatbot has already been linked to at least one suicide.)

How to address it:

  • Your chief information officer (CIO) and risk management team should develop common-sense policies and procedures governing when, where and how AI resources can be deployed — and who decides. Get ahead of this.
  • If developing your own AI-driven software, stay away from public internet-trained models. Large data models that incorporate everything published on the internet are riddled with bias and harmful learning.
  • Use AI technologies trained only on bounded, well-understood datasets.
  • Strive for algorithmic transparency. Invest in model documentation to understand the basis for AI-driven decisions.
  • Do not let your people automate or accelerate processes known to be biased against marginalized groups. For example, automated facial recognition technology is less accurate in identifying people of color than white counterparts.
  • Seek external review from Black and Brown experts on diversity and inclusion as part of the AI development process. Pay them well and listen to them.

Risk #3: Biased AI abuses customers

AI-powered systems can lead to unintended consequences that further marginalize vulnerable groups. For example, AI-driven customer service chatbots frequently harm marginalized people in how they respond to inquiries. AI-powered systems can also manipulate and exploit vulnerable populations, such as when facial recognition technology targets people of color with predatory advertising and pricing schemes.

How to address it:

  • Do not deploy solutions that harm marginalized people. Stand up for what is right and educate yourself to avoid hurting people.
  • Build models responsive to all users. Use language appropriate for the context in which they are deployed.
  • Do not remove the human element from customer interactions. Humans trained in cultural sensitivity should oversee AI, not the other way around.
  • Hire Black or Brown diversity and technology consultants to help clarify how AI is treating your customers. Listen to them and pay them well.

Risk #4: Perpetuating structural racism when AI makes financial decisions

AI-powered banking and underwriting systems tend to replicate digital redlining. For example, automated loan underwriting algorithms are less likely to approve loans for applicants from marginalized backgrounds or Black or Brown neighborhoods, even when they earn the same salary as approved applicants.

How to address it:

  • Remove bias-inducing demographic variables from decision-making processes and regularly evaluate algorithms for bias.
  • Seek external reviews from experts on diversity and inclusion that focus on identifying potential biases and developing strategies to mitigate them. 
  • Use mapping software to draw visualizations of AI recommendations and how they compare with marginalized peoples’ demographic data. Remain curious and vigilant about whether AI is replicating structural racism.
  • Use AI to push back by requesting that it find loan applications with lower scores due to bias. Make better loans to Black and Brown folks.
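The evaluation step above can be sketched as a demographic-parity check on loan decisions. The records and the 0.8 threshold (the "four-fifths rule" used in US adverse-impact analysis) are illustrative assumptions, not legal advice.

```python
# Minimal sketch of a demographic-parity check on loan approvals.
from collections import defaultdict

def approval_rates(decisions: list) -> dict:
    """decisions: (neighborhood_group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_ratio(rates: dict) -> float:
    """Lowest group approval rate divided by the highest.

    A ratio below roughly 0.8 is a common signal that the
    decision process deserves a closer look for bias.
    """
    return min(rates.values()) / max(rates.values())

decisions = [
    ("neighborhood_A", True), ("neighborhood_A", True), ("neighborhood_A", False),
    ("neighborhood_B", True), ("neighborhood_B", False), ("neighborhood_B", False),
]
rates = approval_rates(decisions)
ratio = parity_ratio(rates)
```

In this toy sample, neighborhood_A is approved at twice the rate of neighborhood_B, giving a parity ratio of 0.5 — well below the review threshold. Run the same check on real decision logs, grouped by the demographic markers you care about.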

Related: What Is AI, Anyway? Know Your Stuff With This Go-To Guide.

Risk #5: Using health system AI on populations it is not trained for

A pediatric health center serving poor disabled children in a major city was at risk of being displaced by a large national health system that convinced the regulator that its Big Data AI engine provided cheaper, better care than human care managers. However, the AI was trained on data from Medicare (mainly white, middle-class, rural and suburban, elderly adults). Making this AI — which is trained to advise on care for elderly people — responsible for medication recommendations for disabled children could have produced fatal outcomes.

How to address it:

  • Always look at the data used to train AI. Is it appropriate for your population? If not, do not use the AI.
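A first-pass version of "look at the data" can be as simple as measuring how much of the training population resembles the population you will actually serve. The age cutoff and records below are illustrative assumptions.

```python
# Minimal sketch of a training-data coverage check: what share of the
# training population resembles the population the model will serve?

def coverage(training_ages: list, target_max_age: int) -> float:
    """Fraction of training records within the target population's age range."""
    in_range = sum(1 for age in training_ages if age <= target_max_age)
    return in_range / len(training_ages)

# A Medicare-like training set: overwhelmingly elderly adults.
training_ages = [67, 71, 74, 68, 80, 77, 12, 70, 72, 69]
pediatric_coverage = coverage(training_ages, target_max_age=18)
# Only 10% of this training data resembles pediatric patients,
# a red flag for deploying the model on children.
```

The same idea extends to race, disability status, geography or any other attribute where a mismatch between training data and your population can cause harm.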

Conclusion

Many people in the AI industry are shouting that AI products will cause the end of the world. Scare-mongering leads to headlines, which lead to attention and, ultimately, wealth creation. It also distracts people from the harm AI is already causing to your marginalized customers and employees.

Do not be fooled by the apocalyptic doomsayers. By taking reasonable, concrete steps, you can ensure that your AI-powered systems are not contributing to existing social inequalities or exploiting vulnerable populations. We must quickly master harm reduction for people already dealing with more than their fair share of oppression.
