Rule Of Thumb: The Implications of Using AI For Hiring

Yuichiro Chino

The rapid adoption of artificial intelligence for hiring has brought many benefits to organizations and companies. AI technology promises to find the perfect candidate for a job both efficiently and cost-effectively. However, AI's highly accurate predictive power is not free from bias, which puts minority groups at a disadvantage compared to other candidates. When hiring, humans cannot expect to offload moral and ethical responsibility onto machines.

Initially, AI acted as an assistant to the Human Resources department, pre-scanning resumes for desired qualifications to speed up recruitment and hiring. However, AI and machine learning are rapidly evolving to become better at predicting and inferring people's qualities.

What are the benefits of using artificial intelligence for hiring?

AI aims to fit the perfect candidate into a job position. With machine learning, the software can compile the typical qualities of current employees and use them as a guide for recruiting. AI looks for applicants whose attributes match the workplace culture. Ideally, the qualities displayed by a company's employees are what make it successful, so it makes sense to want someone like-minded who clicks easily with coworkers and their mentality.

For a hiring manager, this is game-changing. It fast-tracks the hiring process by efficiently narrowing down a high volume of potential candidates, so HR can spend less time searching for employees and more time training the ones it hires.

AI also claims to eliminate the unconscious human bias that makes hiring unfair. All humans are subject to bias: a Yale University study found that male and female scientists, trained to be objective when reviewing job applications, were still more likely to hire men, pay them $4,000 more, and rate them as more competent than women. Unconscious bias and discrimination toward women, minorities, and older workers harm both these groups and the company.

Diverse companies also perform better. Research reported by McKinsey & Company shows that gender, ethnic, and cultural diversity within a company correlates with profitability and value creation. Eliminating human bias from hiring is therefore essential to a company's success.

Companies are moving toward letting AI run the hiring process, but is AI entirely free from bias?

Computational systems can infer many qualities about a person from digital crumbs left on the internet, even when none of that information has been explicitly disclosed online. This includes private information typically withheld from employers, such as sexual orientation, personality traits, and political leanings. Zeynep Tufekci's TED Talk on machine intelligence highlights software capable of predicting clinical depression months before an individual shows symptoms, based on their social media data. That is great for early intervention in mental health, but how does such a system apply to hiring?

AI has the potential to weed out candidates at high risk of future depression. It can detect and screen out female candidates who are likely to become pregnant. It could favor candidates with aggressive personalities simply because that matches the workplace culture. All of these decisions can be made without the knowledge of the people doing the hiring. Unlike traditionally coded software, machine learning makes choices without labeling the variables behind each decision, so AI hiring systems become a black box to humans.

Just as humans carry bias, computers can carry algorithmic bias. Worse, algorithms can spread prejudice at massive scale and rapid pace. Machine learning is not free from discriminatory practices. For example, teaching a computer to see for facial recognition involves showing it training sets containing many faces. If the training set lacks diversity, the computer will fail to recognize faces that deviate from the norm.
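The training-set problem can be sketched in a few lines of Python. This is a toy illustration with made-up feature vectors, not any real recognition system: a nearest-neighbor "recognizer" trained only on faces from one group fails on faces from a group it never saw.

```python
# Toy sketch of training-set bias (assumed, made-up data; not a real
# face-recognition model). Each "face" is a 2-number feature vector.
# The training set contains only group A faces, clustered near (0, 0).
train = [(i * 0.01, i * 0.01) for i in range(100)]

def recognized(face, threshold=1.0):
    """A face counts as 'recognized' if it lies close to any training example."""
    dist = min(((face[0] - x) ** 2 + (face[1] - y) ** 2) ** 0.5
               for x, y in train)
    return dist <= threshold

# A group A face near the training cluster is recognized...
print(recognized((0.3, 0.4)))    # True
# ...but a group B face near (10, 10), absent from training, is not.
print(recognized((10.0, 10.0)))  # False
```

The fix mirrors the inclusive-coding practice the article describes: adding group B examples to `train` would let the same unchanged algorithm recognize both groups, because the bias lives in the data, not the code.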

Microsoft's attempt to integrate AI into everyday life through social media backfired when its chatbot started using racist language. The Twitter account TayTweets was created to improve conversational understanding through casual conversation with millennials. Within 24 hours, Twitter users had egged Tay on until it engaged in racist, provocative, and political conversations. The chatbot "disputed the existence of the Holocaust, referred to women and minorities with unpublishable words, advocated genocide," and promoted 9/11 conspiracy theories. Tay proved that AI still has a long way to go before it can recognize human-taught prejudice on its own.

The way to stop algorithmic bias is to use inclusive coding practices when developing AI. Developers have to think conscientiously and train machine learning systems on diverse data to prevent discrimination.

While AI has the potential to make the hiring process more efficient, it cannot entirely replace the human aspect of hiring. Humans cannot be eliminated from the process because computer systems will still need some form of neutral auditing. Without such checks and balances, minority and high-risk groups may be shut out of the job market.

Ultimately, we cannot outsource our moral responsibilities to computers. To hire without discrimination, humans need to maintain some authority in the process. We can and should use computation to help us make better decisions and diminish bias and monetary loss, but it has to be done within our moral judgment framework.
