
Breaking Barriers: Tackling Bias in AI Recruitment for Fair and Inclusive Hiring

Brandice Payne
Senior Manager, Content & Customer Marketing

Posted on January 24, 2024

Artificial intelligence promises to add untold value to our lives by surfacing information in record time, spotting trends, and streamlining time-consuming tasks of all kinds. Recruiters find the technology particularly useful for evaluating resumes, handling candidate communication, and even running video interviews.

Potential issues like ChatGPT’s tendency to produce “hallucinations,” human hiring biases getting baked into the technology, and privacy concerns might give a recruiter pause before adopting AI. Despite these concerns, 67% of hiring managers value the time-saving capabilities that advanced artificial intelligence brings to recruiting teams.

The simple fact is that AI is still in the early stages of development and requires dedicated human oversight to help eliminate bias. In this article, we examine how bias creeps into AI recruiting and share strategies you can use to limit it and create a more equitable hiring experience.

Understanding Bias in Recruitment

AI systems rely on machine learning algorithms to recognize patterns in data sets. Unfortunately, the data sets that power many AI hiring tools contain skewed representations of who makes up the modern workforce.

In fact, the issue first came to light when computer scientists found that facial recognition programs performed far worse on Black faces than on pale complexions. The technology, which at first glance might appear completely unbiased, had in fact inherited human biases, because the people who assembled the machine learning data sets carried unconscious biases of their own.

The problem doesn’t just impact people of color, as many AI recruitment tools have a distinct bias toward male job applicants over female applicants. 

The issue stems from the training data: when AI models learn from input data that lacks diversity, they produce biased outputs. Without human evaluation and intervention, these biases can undo much of the work recruiters have done to dismantle bias in the recruitment process.
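What does that human evaluation look like in practice? One simple check is to compare how often a model advances candidates from each group. The sketch below is a minimal, illustrative Python example; the column names, the toy data, and the four-fifths threshold are assumptions for the sake of the example, not a reference to any particular vendor’s tooling.

```python
import pandas as pd

# Hypothetical screening outcomes: each candidate's group and whether the
# AI screen advanced them to the next stage.
outcomes = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "advanced": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Selection rate per group: the share of candidates the model advanced.
rates = outcomes.groupby("group")["advanced"].mean()
print(rates)

# Adverse-impact ratio: lowest selection rate divided by the highest.
# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
impact_ratio = rates.min() / rates.max()
print(f"Adverse-impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Potential disparate impact - review the model and its training data.")
```

Running a check like this on a regular cadence makes biased outputs visible long before they become a pattern in who actually gets hired.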

Even forward-thinking organizations like Amazon have fallen victim to unconscious bias. In Amazon’s case, the company found that its AI sourcing model preferred male candidates for technical roles because the historical hiring data it was trained on skewed heavily toward male applicants.

The Impact of Bias on Hiring Decisions

So, what was the result of Amazon’s biased model? The company sent the project back to the drawing board, resulting in millions of wasted dollars.

HireVue is another company that ran into trouble with AI, in its case algorithmic bias tied to facial-expression analysis. The concerns led to a complaint filed with the FTC, and the company subsequently removed AI facial analysis from its hiring process.

Microsoft went through a similar situation but faced scrutiny from the Equal Employment Opportunity Commission (EEOC) instead.

These incidents highlight the work still needed to refine artificial intelligence. After all, it is humans who created artificial intelligence, and it will take human ingenuity, consistent effort, and a dedication to equity to eliminate biased hiring decisions.  

The Ethical Imperative: Addressing Bias in AI Recruitment

The growing movement focused on creating diversity, equity, and inclusion in the workplace is a call for an ethical approach to talent acquisition. However, the newness and controversial nature of artificial intelligence make job seekers hesitant to accept the technology.

Critics of AI point to biased input data as an ethical dilemma, arguing that automation overlooks qualified candidates when no human evaluates the algorithms for discrimination. Algorithms that compare applicants against current staff or flawed data sets can unintentionally violate anti-discrimination laws.

Another issue is the collection and use of personal data that ends up in databases. Privacy is a growing concern as digital devices demand that we divulge more of our lives, meaning businesses must be mindful of how and where they use candidate data.

Before implementing any AI tools, whether for candidate assessments or chatbots, businesses should vet the platforms they deploy for sound data privacy practices. Luckily, Gem provides such a vetted platform with its diversity-focused talent acquisition software that protects applicant data and helps human resources professionals reduce bias.

Strategies for Mitigating Bias in AI Recruitment

Several strategies can help mitigate bias in AI recruitment software. Here’s a look at a few ways managers and developers can reduce it:

  • Adversarial training – Adversarial training pits two neural networks against each other. One network makes the hiring predictions, while a second “adversary” network tries to infer a protected attribute, such as gender, from those predictions; the predictor is penalized whenever the adversary succeeds, which pushes bias out of its outputs. This is an advanced strategy that requires real data science work (see the adversarial training sketch after this list).

  • Data augmentation – Data augmentation adds transformed or synthetic examples to the training data, for instance counterfactual copies of resumes with gendered terms swapped, so the model sees a wider range of backgrounds and viewpoints during training.

  • Resampling datasets – Database managers can resample training data to ensure that all groups have equitable representation (see the resampling sketch after this list). Resampling helps balance input data without skewing qualifications.

  • Diversify training data – The issues concerning facial recognition bias are a direct result of biased training data. Image-based AI algorithms depend on diverse training data to create accurate representations. Training your AI models with multiple data sources will also help balance the input data and prevent AI biases. 

  • Remove bias from tagging info – Machine learning also depends on human-applied tags (labels) to organize training data. However, if the people tagging that data carry implicit biases, the AI will learn and reproduce them. You can educate your teams with unconscious bias awareness training to help eliminate biased tagging.
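To make the adversarial training idea more concrete, here is a deliberately simplified sketch on synthetic data, written with PyTorch. It assumes tabular candidate features, a hire/no-hire label, and a binary protected attribute; every name and number in it is illustrative rather than a production recipe.

```python
import torch
import torch.nn as nn

# Synthetic stand-ins: 512 candidates, 16 features, a binary protected
# attribute z, and a hiring label y that is (unfairly) correlated with z.
torch.manual_seed(0)
N = 512
X = torch.randn(N, 16)
z = (torch.rand(N, 1) > 0.5).float()
y = ((X[:, :1] + 0.8 * z + 0.3 * torch.randn(N, 1)) > 0.5).float()

predictor = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_pred = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    score = predictor(X)

    # 1) Train the adversary to recover the protected attribute from the score.
    opt_adv.zero_grad()
    adv_loss = bce(adversary(score.detach()), z)
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the predictor to fit the hiring label while fooling the
    #    adversary: subtracting the adversary's loss penalizes scores that
    #    leak the protected attribute.
    opt_pred.zero_grad()
    pred_loss = bce(score, y) - bce(adversary(score), z)
    pred_loss.backward()
    opt_pred.step()
```

In practice the subtracted term usually carries a tunable weight, and batching, validation, and fairness metrics all matter; the point here is only the structure: two networks with opposing objectives.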
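And here is an equally minimal sketch of resampling, oversampling the underrepresented group until every group is equally represented. The column names and tiny data set are hypothetical; a real pipeline would typically resample within outcome classes as well.

```python
import pandas as pd

# Hypothetical training data in which group B is badly underrepresented.
train = pd.DataFrame({
    "group": ["A"] * 8 + ["B"] * 2,
    "hired": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
})

# Oversample every group (with replacement) up to the size of the largest group.
target = train["group"].value_counts().max()
balanced = (
    train.groupby("group", group_keys=False)
         .apply(lambda g: g.sample(target, replace=True, random_state=0))
         .reset_index(drop=True)
)

print(train["group"].value_counts())     # before: A=8, B=2
print(balanced["group"].value_counts())  # after:  A=8, B=8
```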

Limiting bias in AI, especially in recruiting, will take a combination of education and hands-on data science work. It might not be practical to build and maintain your own adversarial training models, but with the right partner, you won’t have to; you can focus on the human side of hiring instead.

Gem’s software comes pre-built with diversity-protecting measures to bring confidence to users looking for an unbiased hiring platform. Contact us today to see how Gem can fit into your current ATS and learn how we can elevate your hiring with AI. 

Forge a Bias-Free Path with Gem’s Talent Acquisition Software

Artificial intelligence, machine learning, deep learning, and large language models enable software programs to take on human-like capabilities that help recruiters find and hire qualified candidates. But the technology is far from perfect.

Unfortunately, AI is displaying some of the same biases that humans are susceptible to, such as discriminating against people of color and disqualifying female candidates based on gender. These biases harm organizational reputation and invite consequences from entities like the EEOC or the Federal Trade Commission. 

Fortunately, there are strategies programmers and recruiters can deploy to help curb some of these biases and create a more equitable hiring process. These strategies include diversifying training data and training employees in unconscious bias awareness.

Yet, there is no replacement for a quality pre-built platform, and that’s exactly what Gem brings to the hiring table. Gem’s talent acquisition software helps you visualize pipeline diversity through a centralized dashboard that also unlocks more than 20 acquisition channels, including social media sites like LinkedIn. 

Also, Gem’s analytics can help you discover candidate drop-off rates from underrepresented groups to create a more equitable hiring pipeline. Request a demo and explore how Gem can help supercharge your inclusive hiring! 
