
The Human Solution for Artificial Intelligence Bias

Artificial intelligence bias is a danger to many systems we are growing dependent upon. A tool is only as useful as the craftsman wielding it. In modern terms, our algorithms are only as good as the programmers who wrote the code and the data they can pull from. Think about the data that you feed your HR Information System: if bad data goes in, what comes out?
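The "bad data in, bad data out" point can be made concrete with a toy sketch. Everything below is invented for illustration (the schools, the numbers, the scoring rule): a naive screening rule learned from skewed historical hiring data will simply reproduce that skew in its scores.

```python
# Toy sketch of "garbage in, garbage out" in a hiring context.
# All data and names here are hypothetical, for illustration only.
from collections import Counter

# Hypothetical historical hires, heavily skewed toward one school
past_hires = ["State U"] * 90 + ["City College"] * 10

def naive_score(candidate_school, history):
    """Score a candidate by how often their school appears in past hires."""
    counts = Counter(history)
    return counts[candidate_school] / len(history)

# Two equally qualified candidates get very different scores,
# purely because the historical data was skewed.
print(naive_score("State U", past_hires))       # high score
print(naive_score("City College", past_hires))  # low score
```

The rule never looks at qualifications at all; it just echoes the historical pattern. Real machine-learning models are far more sophisticated, but when the training data carries a bias, the same dynamic applies.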

This is critical to consider when evaluating whether or not we can trust an Artificial Intelligence tool. Some systems are better built than others, and some simply aren’t worth your trust.

Measuring Transparency

Transparency is a critical factor to consider, because you should want to know how the software you use does what it does. As a society or as an organization, the hope is that collectively we can come together and decide whether or not something is safe, trustworthy, or at least does what it claims to do. In the software world, software whose code is fully open and available for inspection is referred to as “open source.” Such software is often free to use, or open to the public.

At the other end of the spectrum are highly protected systems sometimes referred to as a “black box”: systems that give you no insight into how they operate or how they deliver on their claims.

Somewhere in the middle are products whose makers share bits and pieces of how they accomplish their feats, while keeping the “secret sauce” as proprietary information.

With that in mind, as vendors of AI services begin to approach you, it is critical to consider exactly what you might be buying. How well you understand how a system works should factor significantly into your assessment of whether it will be a good fit for you and your organization.

“The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.”
Forget Killer Robots—Bias Is the Real AI Danger
Will Knight, MIT Technology Review

Diversity and Inclusion

Organizations have to be particularly mindful of bias in all of their practices and policies. Using an AI system is really no different. There are certainly cases where turning over your organization to a flawed AI based on biased data would be worse than not using AI at all.

One system, called COMPAS, is used by some judges to predict a defendant’s likelihood of re-offending and can influence sentencing and parole decisions. According to a report by ProPublica, there are concerns that the system is biased against minorities. When ProPublica compared COMPAS’s risk assessments for more than 10,000 people arrested in one Florida county with how often those people actually went on to re-offend, it discovered that the algorithm “correctly predicted recidivism for black and white defendants at roughly the same rate.” But when the algorithm was wrong, it was wrong in different ways for different groups: black defendants were far more likely to be falsely flagged as future criminals, while white defendants were more often mislabeled as low risk and then went on to re-offend.
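This finding, that overall accuracy can be equal while the kinds of errors diverge sharply, is easy to demonstrate with made-up numbers. The figures below are hypothetical, not the actual COMPAS data: two groups with identical accuracy but very different false positive rates.

```python
# Illustrative only: hypothetical numbers, NOT the actual COMPAS data.
# Two groups can show the same overall accuracy while the *kinds*
# of mistakes differ sharply between them.

def rates(outcomes):
    """outcomes: list of (predicted_high_risk, actually_reoffended) pairs."""
    accuracy = sum(p == a for p, a in outcomes) / len(outcomes)
    # False positive rate: flagged high risk among those who did NOT re-offend
    non_reoffenders = [p for p, a in outcomes if not a]
    fpr = sum(non_reoffenders) / len(non_reoffenders)
    return accuracy, fpr

# Hypothetical group A: errors skew toward false positives
group_a = ([(True, True)] * 40 + [(False, False)] * 25 +
           [(True, False)] * 25 + [(False, True)] * 10)
# Hypothetical group B: same accuracy, errors skew toward false negatives
group_b = ([(True, True)] * 25 + [(False, False)] * 40 +
           [(True, False)] * 10 + [(False, True)] * 25)

for name, group in [("A", group_a), ("B", group_b)]:
    acc, fpr = rates(group)
    print(f"Group {name}: accuracy {acc:.0%}, false positive rate {fpr:.0%}")
```

Both groups score 65% accuracy, yet group A’s members who never re-offended are flagged as high risk at 50%, versus 20% for group B. A vendor quoting a single accuracy number can hide exactly this kind of disparity, which is why it pays to ask how errors are distributed, not just how often they occur.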

Checks and Balances

These are complex issues that can’t be easily solved. Writing an algorithm to decide something involving fairness always comes back to a human being’s best ability to define what fairness means and to assign metrics to it. In the short term, the best solution is better accountability and oversight. This is where Human Resources and your leadership team come in. Machine learning and AI systems are still in their early days, and the race to get them up and running often moves too quickly, without enough consideration of the quality of the input data.

Is someone there asking the tough questions to make sure there is equitable representation for all? If not, then that needs to change.

What often gets lost in the race towards technological achievement is the reality that these tools are meant to help us… to help our organizations, our employees, our customers, and our communities. Technology doesn’t exist in a vacuum. These are our tools, and we are the craftsmen; however intimidating they may feel at times, we are always in control.

Michael Wilson

About Michael Wilson

Michael Wilson is a Digital Strategist who works with people to build, protect, and elevate their brands online.
