Are machines better at discrimination than human beings?
Human beings are inherently biased. We would like to believe that we are largely rational beings, but as years of research have shown, we aren't. In fact, an entire field of study (behavioral economics, social psychology, call it what you may) is dedicated to studying human irrationality. Organizations have begun playing their part by educating employees on the many pitfalls of unconscious bias in areas like hiring, performance management and day-to-day interactions. Yet leaders have miles to go before solving the problem of discrimination.
We had only just begun to dive into the depths of discrimination and unconscious bias when AI arrived as the shiny new object in town. Suddenly, every startup was selling a revolutionary offering said to leverage the superhuman power of AI. Conferences and marketers worldwide spent millions convincing attendees that HR systems lacking the latest AI feature were grossly outdated. The unanimous claim was that machines can do these tasks better than humans, perhaps even hiring and managing performance without bias.
While recognizing the benefits of AI, we must be just as mindful of its pitfalls. Discrimination, among other things, may turn out to be AI's strongest suit. We take pride in machines making objective decisions, but we forget that those decisions can still be unfair and discriminatory.
Algorithms do a good job of objectively learning biases present in the world today.
Employees, both current and prospective, are protected from discrimination on the basis of certain protected characteristics (such as age, disability, sex, race, sexual orientation, and religion or belief). When we ask machines to make decisions for us, there is a real risk that they will reproduce exactly these forms of discrimination. One study found that Google was more likely to show ads for executive-level, high-salary positions to search engine users it believed to be male. Harvard researchers found that ads referencing arrest records were far more likely to appear in searches for names commonly associated with black people than with white people. Three years ago, Google ran into trouble when its image recognition algorithms began classifying black people as gorillas, a problem the company still hasn't properly solved; for now, it makes do with a workaround.
One doesn't have to look too hard for instances where AI has shown discriminatory tendencies. After all, AI is designed by human beings, it runs on algorithms that aren't always transparent, and it is trained on historical data that is often deeply biased.
Take hiring, for example: the data set available for modeling might lead the machine to conclude that white males living within a 40-kilometer radius of the office are most likely to succeed in a particular role, and to select gender, region and proximity to the office as influencing factors (the sketch below illustrates how this happens). Discrimination has not been studied well enough from a computational perspective, so AI customers who lack full visibility into the modeling data may discriminate without ever knowing it.
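To make that concrete, here is a minimal, hypothetical sketch using synthetic data and scikit-learn. The features (gender, distance to the office, a skill score) and the biased historical hiring labels are entirely invented for illustration; the point is only that a model fitted to biased outcomes will "objectively" assign weight to protected attributes and their proxies.

```python
# Hypothetical sketch: a model trained on biased hiring outcomes learns
# protected attributes and proxies as "predictive" features.
# All data here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicant pool: gender (1 = male), distance to the office
# in km, and a skill score, which is what we actually care about.
gender = rng.integers(0, 2, n)
distance_km = rng.uniform(0, 100, n)
skill = rng.normal(0, 1, n)

# Biased historical labels: past hiring favored nearby male applicants
# far more than skill. This is the bias baked into the training data.
hired = (0.4 * skill + 1.0 * gender - 0.02 * distance_km
         + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([gender, distance_km, skill])
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The model faithfully reproduces the bias: gender gets a large positive
# weight and distance a negative one. It has learned to discriminate.
for name, coef in zip(["gender", "distance_km", "skill"], model.coef_[0]):
    print(f"{name:12s} weight = {coef:+.3f}")
```

Nothing in this toy model is malicious; it simply minimizes error against the history it was given. That is exactly why a customer who cannot inspect the training data has no way of knowing which of these factors the model is quietly relying on.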
There is a second downside: even when the system works well, it will select for people who think and work alike, filtering out any candidate who doesn't fit your company's usual bill. I have always encouraged taking calculated bets on people. Some work out, some don't, but we learn either way. I am not sure we can configure AI to take smart bets just yet.
I love technology, and I am all for leveraging AI to take over tasks and free up our bandwidth for more meaningful work. However, at a time when discrimination and bias pose major challenges, I am equally wary of letting machines do the thinking for me. Human judgement is far from accurate either. So until we've figured it all out, a close partnership with AI offers the right balance: machines take on purely transactional decisions, partner with us on low-stakes choices, and leave the rest to the human brain, which remains one of the best machines we know of.