Are You Responsible for Your AI-Biased Business Decisions?


Everyone wants to work for your company. You receive hundreds of resumes every day. There are simply too many for humans to read. So, like many companies, you use a service that ingests the resumes and uses AI to score potential candidates against job descriptions. From your perspective, it is the perfect use case. It’s fast. It’s efficient. And the candidates who make it through the system are generally high caliber. This sounds awesome – but what happens to the candidates who don’t make it through the system?

Scoring Isn’t What You Think It Is

When you score a candidate in person, you bring an enormous amount of human context to bear on every criterion. When you train an AI model to score a candidate on a specific attribute, you must create a rule base (or train the model) for that attribute. Machines are not human, and they have no context.
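To make that concrete, here is a deliberately crude sketch (in Python) of what a rule base for a single attribute might look like. The keywords, weights, and caps are invented for illustration (this is not any vendor's actual scoring code), but it shows how the machine reduces "relevant experience" to whatever rules someone wrote down, with no context at all.

```python
# Hypothetical rule base for one attribute: "relevant experience."
# Keywords, weights, and caps are arbitrary choices made by an engineer,
# not contextual human judgment.

RELEVANT_KEYWORDS = {"python": 2.0, "sql": 1.5, "machine learning": 2.5}

def score_experience(resume_text: str, years_experience: float) -> float:
    """Score one attribute from raw resume text; no human context is applied."""
    text = resume_text.lower()
    keyword_score = sum(w for kw, w in RELEVANT_KEYWORDS.items() if kw in text)
    # Cap the keyword score and add a linear bonus for years of experience.
    return min(keyword_score, 10.0) + 0.5 * min(years_experience, 10.0)

print(score_experience("Built SQL pipelines and Python tooling.", 4))  # 5.5
```

A candidate who describes the same experience in different words scores lower, and the rule base has no way to notice.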

AI models learn. That’s the great news. But they need proper feedback loops to continuously improve. At the Shelly Palmer Innovation Series Summit at CES 2020, Barak Turovsky, AI Product Leader at Google, talked about AI bias in the context of Google Translate (the largest AI project in the world). He outlined the size and scope of the problem and offered a series of examples to help us understand the process.

Feedback Loops

In the case of Google Translate, the AI has an instant and almost perfect feedback loop. You enter your words or phrases; it responds with a translation; you tell it if the translation is correct or useful for your purposes. This helps. But as you can see from the case of gender bias that Barak described in his talk, some biases are so baked into our culture and our language that they are incredibly difficult to overcome.

Setting the super-hard baked-in bias issues aside for a moment, Google Translate learns from each of the 140 billion words it processes every day. And it gets better every day. Importantly, everyone in the process has a complete understanding that the quality of the AI’s work is being judged every time it’s used. This is exactly what you want when training an AI model: a clear, transparent process where feedback can be used to recalibrate and adjust the system.
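As a sketch of what that kind of loop involves, here is a minimal example (again in Python, with invented names; this is not how Google Translate is actually built): every use of the model produces a record of whether the human accepted the output, and those records are the raw material for recalibration.

```python
# Hypothetical feedback loop: capture a human judgment at the moment of use,
# so the model can later be recalibrated against real acceptance data.

from dataclasses import dataclass
from typing import List

@dataclass
class FeedbackRecord:
    model_input: str
    model_output: str
    user_accepted: bool  # the explicit human judgment, captured when the output is used

feedback_log: List[FeedbackRecord] = []

def record_feedback(model_input: str, model_output: str, user_accepted: bool) -> None:
    """Store every judgment for later retraining and quality monitoring."""
    feedback_log.append(FeedbackRecord(model_input, model_output, user_accepted))

def acceptance_rate() -> float:
    """A simple, transparent quality signal: the share of outputs users accepted."""
    if not feedback_log:
        return 0.0
    return sum(r.user_accepted for r in feedback_log) / len(feedback_log)
```

Keep that loop in mind as the point of comparison for what follows.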

Back to the Black Box of HR

Back to our hypothetical resume-reading AI. Here we have the perfect example of a black box. Resumes go in. They are scored against criteria you think you fully understand, though you have no way of knowing what proxy data the software engineers used to stand in for data the model needs but doesn’t have. When someone is rejected, what is the feedback mechanism that improves the model?

When a candidate successfully passes through the system, a human will review the resume and then the candidate will complete a traditional interview process. But how does the system tell people whom it rejected why it rejected them? It probably doesn’t. Which may be OK, but…

What happens when someone whom the system rejects gets the “big job” at one of your competitors a week later? How does the system learn from that? Without that feedback, how can it improve?
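No resume-screening product that I know of offers this, but a sketch of the missing piece would look something like the following: every rejection is logged with the criterion that drove it, and real-world outcomes are attached later so false negatives can be found and fed back into training. The fields and names here are purely illustrative.

```python
# Hypothetical outcome tracking for rejected candidates: the feedback the
# black box never gets. Fields and logic are illustrative, not a real product.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RejectionOutcome:
    candidate_id: str
    model_score: float
    rejection_reason: str                    # which criterion drove the low score
    hired_elsewhere: Optional[bool] = None   # later signal, e.g. the "big job" at a competitor

def is_false_negative(outcome: RejectionOutcome) -> bool:
    """A rejection contradicted by the outside world is a training signal, if anyone captures it."""
    return outcome.hired_elsewhere is True
```

Without records like these, "the model gets better over time" is an assertion, not a process.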

This Is Everyone’s Problem

At the enterprise level, you are going to look at the total applicant pool and judge it in the aggregate, not on an individual basis. So, from your perspective, you don’t care that a few good ones got away. You’re saving time and money using AI to pre-select or even select candidates. You can just say you’re “following the data.”

What data?

You don’t own the resume-reading AI system; you’re renting it. But there’s a very good chance that the same system is also being rented by other big corporations. The criteria for entry-level to mid-level jobs at any big corp are about the same. Do you think that a candidate who was rejected by your system did better in another system? No. That hypothetical resume was scored with the digital equivalent of a demerit, and there is a very good chance that it was sent to the bottom of the virtual pile. Heaven help that candidate – there’s no way for him or her to find out why the rejection happened, change what caused it, challenge it, or even know whom to go see.

You may not think this is your problem. You’d be wrong. Multiply this example across the entire spectrum of automated decision-making your organization is working hard to bring online as part of your ongoing digital transformation projects.

This is a time when we all have to get a little smarter about AI bias. We owe it to our organizations and to our posterity.

 

Shelly Palmer is Fox 5 New York's On-air Tech Expert (WNYW-TV) and the host of Fox Television's monthly show Shelly Palmer Digital Living.