10 Ways to Reduce Interviewer Bias
Interviews are the single most decisive form of assessment when it comes to determining a candidate’s fit for an open role. Teams have come to rely on them—in some cases, almost exclusively—in hiring decisions. Yet study after study has shown that, ultimately, interviews don’t effectively predict performance. One study found that only 8% of interview decisions accurately predicted on-the-job performance—worse odds than calling a coin flip. A few years back, Barry Deutch, Master Executive Search Recruiter, wrote on LinkedIn that “In 30 years of executive search, over 1000 search projects, and interviews with over 250,000 candidates, we cannot find a single correlation that links how someone interviews with their on-the-job performance.” More recently, 42% of talent acquisition professionals said that interview bias is the reason interviews fail as an effective method of candidate selection. Others claim that bias is the #1 cause of hiring mistakes.
Taken together, these statistics make sense. Interviews are such a poor predictor of performance because they’re personal exchanges between human beings. And because the equation involves human beings, social factors and assumptions—let’s call them the biases they are—will naturally cloud interviewers’ ability to objectively evaluate candidates and will influence hiring decisions for the worse. These poor hiring decisions lead to high turnover rates. They mean that somewhere in the process, the interviewer passed over a better candidate. And of course they did, if they made their decision based on factors that had nothing to do with the candidate’s ability to do the job.
The human nature of the interview complicates things even more for recruiting teams and hiring managers who are committed to diversifying their teams. We’ve got all sorts of strategies and technologies—from inclusive job descriptions to pre-employment assessments—to minimize bias early in the funnel. But how do you check bias when it’s time to put two people in a room (or these days, on a Zoom call) and have them talk? Bias hasn’t disappeared in our new era of virtual interviews; in fact, a recent Fast Company article suggests it’s gotten worse: “Implicit bias still exists in hiring practices, but job seekers who normally employ a raft of routines designed specifically to overcome those biases are now bereft of them.” (Think about the assumptions you might make about professionalism or interview preparedness if a candidate’s kid interrupts the Zoom interview, for example. Or—as the Fast Company article reminds us—with salons closed, Black women who might otherwise relax their hair before interviews because they know how deep Black hair discrimination runs no longer have access to that option. What’s your internal response to that candidate’s appearance?)
We already know that unconscious bias isn’t something we can wholly eliminate. At best, we can become as aware of it as possible. But now we have a whole new hiring environment to contend with, rife with new opportunities for bias. If we can set ourselves up to judge candidates for their knowledge and skills, rather than on unspoken or unconscious criteria, we have a much better chance of hiring the right person for the job—and of meeting our well-intentioned diversity goals. So how do interviewers reduce bias enough to make fairer—and better—hiring decisions? Here are our top ten tips:
Define the Job, Not the Person
Of course, this begins with your job description; but interviewers need to come to the table clear about the details of the role. Take the term “job description” literally: it describes the job to be done, not the person who will perform the job. Consider the tasks the role entails, not the qualifications the “ideal candidate” has. Focus on action verbs (the candidate will build, oversee, create, define, reinvent, etc.), instead of whether they have a certificate in X or a degree in Y. If the candidate can show how they’ve successfully undertaken similar work, they have “the necessary qualifications.” How has this person performed doing comparable work? should be the question you keep returning to.
It helps to think of your job descriptions as “results-based” or as “impact statements,” long before the interview. Lay out what the new employee would be expected to achieve at certain milestones (3 months, 6 months, 12 months) after being hired. What will they own and be responsible for? What will success look like at these milestones? With the answers to these questions in mind, interviewers can better focus on what matters—whether the candidate can achieve the goals and objectives by those milestones. Even better? Impact statements give hiring managers more clarity in assessing success when it comes time for performance reviews, making unconscious bias less likely to creep into those reviews as well.
Conduct a Phone Screen First
Starting with a phone screen means eliminating the visual cues that so many of our biases thrive on. You’ll miss the non-verbal signals you may have subconsciously come to depend on: whether the candidate crosses their arms when they speak, what they’ve chosen to wear today, what’s hanging on their living room walls, what their smile looks like (or whether they even smile). Whatever first impression the candidate does make will be less impactful without those visual cues. Instead you get to focus on the story they tell about themselves, their experiences, and their accomplishments.
Take notes to get the most out of each call—and so you have the details in front of you when it’s time to review, rather than a memory of the person’s voice, or their laugh, or their stutter. There’s still plenty of room to make assumptions there. Good notes will help you focus on the facts.
Standardize Your Interview Questions
This best practice should be applied to phone screens as well. Standardizing interview questions means asking candidates the same questions, in the same order, in every interview, helping you make an “apples to apples” comparison at the end of the day. Structure reduces the likelihood that the interviewer will veer off-script due to a “gut” or emotional reaction to the candidate. They won’t forget to ask about skill X and then reject a candidate because the candidate failed to mention skill X, which is crucial to the role. They won’t adjust questions based on affinity (or lack thereof) in a way that prevents the interviewer from seeing the whole picture. Structure levels the playing field: each candidate has the opportunity to demonstrate their skills and knowledge within the constructs of those (role-specific!) questions; and you get to compare answers more easily, side-by-side.
While we’re at it, if it’s possible to conduct interviews around the same time each day—and for interviewers to be in the same place, or in similar conditions—for each interview, do that. Your goal should be to keep interviews as comparable to each other as possible.
Reduce the Chit-Chat
This best practice should rather naturally follow from your standardized questions. Of course, there will be answers that candidates offer—pertaining to their background or experience—that will make you want to dig in for more detail; but reducing the small talk between questions ensures that you’ll keep that playing field level.
That’s not to say don’t ask candidates questions like “How are you?” to break the ice! It’s only to say that going down the small-talk road can exacerbate bias, because it reveals personal details that aren’t relevant to the job at hand. Pre-COVID, for example, you didn’t ask questions like “How was your drive into the office?” because the neighborhood the candidate drove in from—or whether or not they took public transportation—might lead you to make assumptions about them. The same goes for our virtual era: don’t ask candidates where they’re calling in from, or where they’ve decided to “ride out” the pandemic. Don’t make small talk about what their work-from-home experience has been like. You’ll essentially be asking for details about their personal lives that can trigger bias, or otherwise cloud your judgment.
Use a Rubric
A rubric should follow from those standardized interview questions we spoke of above. Don’t just prepare the questions themselves. Decide ahead of time what an “excellent answer,” a “good answer,” and a “poor answer” look like for each question, and add these to your candidate scorecard. Each question should align with a particular (and essential!) quality that you’re assessing candidates on—whether that’s a hard skill, a soft skill, values-alignment, or culture-add. Your rubric should measure how well the candidate demonstrates that skill, value, or behavior. It’s tricky to quantify ability; but rubrics at least allow interviewers to systematize their observations. They also help interviewers avoid giving too much credit for one particular skill or quality over others.
There are a few different ways you can express your scores. You might consider a Likert scale, which ranks candidates in words (poor - fair - average - good - exceptional), or a numerical rating scale (1-5 or 1-10), which uses numbers instead. If you decide on the latter, make sure the interviewing team is clear about what a “1” or a “5” answer looks like. What kind of answer to a question constitutes a 4 rather than a 5?
Have interviewers fill out scorecards either during the interview or immediately afterward, while their memories of the conversation are still fresh. Every interviewer should fill out a scorecard before they see their peers’ evaluations, in order to minimize conformity bias.
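For teams that keep scorecards in a spreadsheet or ATS export, the rubric idea can be made concrete with a small script. Everything below—the question wording, the qualities, the 1-5 scale—is a hypothetical illustration, not a prescribed rubric; it simply sketches what “one scorecard per interviewer, scored against a shared rubric” looks like as data.

```python
# Hypothetical rubric: each question maps to the quality it assesses,
# and the whole panel shares one agreed-upon 1-5 scale.
RUBRIC = {
    "Walk us through a comparable project you shipped": "past performance",
    "How would you define success at the 6-month mark?": "goal orientation",
}

SCALE = {1: "poor", 2: "fair", 3: "average", 4: "good", 5: "exceptional"}

def summarize(scorecard: dict) -> dict:
    """Translate one interviewer's numeric scores into the shared labels,
    keyed by the quality each question was designed to assess."""
    return {RUBRIC[q]: SCALE[score] for q, score in scorecard.items()}

# One interviewer's scorecard, filled out right after the interview.
card = {
    "Walk us through a comparable project you shipped": 4,
    "How would you define success at the 6-month mark?": 3,
}
print(summarize(card))  # {'past performance': 'good', 'goal orientation': 'average'}
```

Because every interviewer scores against the same questions and scale, the summaries can be laid side by side for an apples-to-apples comparison before the panel debriefs.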
Take Great Notes
Our memories are notoriously unreliable; and relying on recall will only open you up to unintentional biases. You’ll remember your strongest impressions and quite possibly forget the details that were most pertinent to the role. At best, our memories are only interpretations of what happened in the past; they’re influenced by both past and current emotions. So write it all down—as quickly after the interview as possible, and in as much detail as you can. To the degree that you can record candidates’ exact responses—and not your interpretations of those responses—do so.
The other benefit of taking detailed notes is that it allows you to improve your interviews over time. Store scorecard data so you can compare your hiring team’s predictions to new hire performance. If you have visibility into HR's performance reviews, score new hires at the 6- and 12-month marks on the same categories; and use those scores to identify the gaps between interviewers’ predictions and reality. Maybe you discover that a question you created to assess a certain trait was ultimately not that predictive after all. Interrogating the gaps will help teams improve both their individual and their collective accuracy.
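To illustrate what “interrogating the gaps” might look like with stored scorecard data (all category names and numbers below are invented), you can line up the panel’s predicted scores against the 12-month review scores, category by category, and see where the predictions missed:

```python
# Invented data for illustration: panel predictions at hire time vs.
# 12-month review scores, on the same 1-5 categories.
predictions = {"collaboration": 5, "data analysis": 3, "ownership": 4}
review_12mo = {"collaboration": 3, "data analysis": 3, "ownership": 4}

# Per-category gap between what interviewers predicted and reality.
gaps = {cat: predictions[cat] - review_12mo[cat] for cat in predictions}
print(gaps)  # {'collaboration': 2, 'data analysis': 0, 'ownership': 0}

# A large recurring gap on one category suggests the question(s) meant
# to assess it weren't predictive and are worth reworking.
least_predictive = max(gaps, key=lambda c: abs(gaps[c]))
print(least_predictive)  # collaboration
```

Run across many hires, a pattern like this points you at the specific interview questions to revise, rather than leaving the whole process under vague suspicion.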
Give Sample Work Assignments
This is where you test candidates for what actually matters. Give them an assignment that’s directly related to what the role requires—whether that’s writing a piece of code or a blog post, analyzing a data set, presenting a case for how they’d solve a problem, and so on. Comparing assignments objectively allows you to select candidates based on merit and performance—nothing more. If you need to, anonymize test results or work samples. But we’d caution against anonymizing everything in your hiring process. Doing so gives recruiters, interviewers, and hiring managers a reason not to work on their unconscious biases. And we should all be consistently doing the work—even as we automate and anonymize to make fairer, more equitable hiring decisions.
Use Panel Interviews
To the degree that it’s possible, always have more than one person interview candidates. In fact, the more people involved in the interview process, the better—because the more input you have, the more checks-and-balances you have on biases. Your interview panel should be as diverse as you can make it—include a range of ages, genders, backgrounds, positions, and seniority levels. A diverse interview team is much less likely to make a decision based on bias than a panel of like-minded individuals with shared backgrounds and experiences.
Give each interviewer the questions they’re best equipped to assess candidates on. This is the other benefit of a panel: a single interviewer will have certain strengths, but have difficulty assessing candidates on other traits. Once everyone has evaluated each candidate for themselves, compare evaluations, and check biases.
Know What Kinds of Bias You’re Up Against
There are many kinds of bias: first impression (you prefer the candidate who immediately offered that firm handshake), affinity bias (they’re into acro yoga, too?!), halo and horn effect (if they’re brilliant/inept at one thing, they must be brilliant/inept at everything else they do), and so on. We won’t go through them all here; but they’re worth researching and tracking in your interview process—as well as in your life in general. Observe your “gut instinct” each time you meet a new candidate. Make a mental note of how you feel about them. Eventually, patterns may emerge.
Being curious about your biases as you discover them will keep you from getting uptight or defensive about them. So try some experiments. If you find you’re having first-impression reactions—in which you seek out positive or negative aspects of a candidate to confirm your positive or negative first impressions—try the opposite. Look for positive aspects of the candidates who initially turned you off as you speak with them, to neutralize those first impressions. You can also put yourself in hypothetical situations. Would you make the same decision if the candidate in front of you were Black or disabled? Why or why not? If that other candidate you spoke with had given the same answers as this candidate, would you feel differently about them? If not, why not? Maybe you even include a “likability” section in your rubric—not to count it toward your score, but to recognize and honor the part of you that wants in on the decision. It’s another way of observing patterns in the ways you respond to certain groups of candidates.
However possible, give candidates the benefit of the doubt throughout the entirety of the interview. Remember that they’re subject-matter experts; they’ve made it this far for a reason.
Justify Your Decision
We talked about rubrics above—they’re an important step toward objectivity. But they’re ultimately only useful if you can justify the scores you’ve given to candidates. When you meet with your panel after the interviews are complete, make sure each interviewer gives substantial reasons for why they’ve decided a candidate is—or isn’t—a good fit. Have them use real evidence of competency and motivation from their notes—including direct quotes from the candidate—to demonstrate why a candidate should be hired or rejected.
In these conversations, pay attention to the language you use: sentences that begin “I feel…” or “I get the sense that…” should be observed and avoided. “The candidate wasn’t able to prove that…” or “The candidate gave a great example of a similar project they’d worked on when…” are great sentence-starters. Interviewers should hold each other accountable when something about an argument doesn’t sit quite right for them.
Above all, be willing to be open, curious, and vulnerable with your hiring team about the assumptions you hold that you can’t yet see. This is ultimately the only way to meet the goal you all collectively hold—which is to hire the best candidate possible, every time.