
Does Your Data Promote Fairer Hiring?

Gayathri Kannan · February 28, 2021

Having more information than needed can sometimes do more harm than good. This holds true for the data you use to screen candidates too.

In this blog, we use the example of video interviews to illustrate how the candidate data you gather ties into your hiring decisions.

Before we jump in, let’s note that there are two types of video interviews used in AI recruiting solutions.

The first is one-way or on-demand video interviews, in which candidates record their responses and share them with the recruiter. These interviews are asynchronous and are commonly used when assessing candidates at scale.

Two-way video interviews are back-and-forth interactions that take place in real time. The candidate and interviewer engage in a live discussion facilitated by tools like Zoom or Skype.

However, the essence of all video interviews is the same: an audio-visual means of communication in which visual and verbal cues together provide insights into candidates.

The way these insights are collected and used can impact outcomes for better or worse. 

The good

When hiring for customer-facing roles, the combined use of image and sound can be advantageous.

Video interviews allow recruiters to assess verbal and non-verbal behaviour together.

This can be useful if company values and needs are tied to the way candidates present themselves. In customer-facing roles especially, assessing body language and presentation can be important to employers.

Traditionally, face-to-face interviews were used to gauge a candidate’s fit when recruiting for such roles. With the switch to remote hiring, video interviews are a useful alternative for conveying rich, non-verbal information.

But what happens when you’re hiring for positions that don’t involve in-person customer interactions, like contact centre jobs? Or when you want to advance diversity and inclusion initiatives and give everyone a fairer chance? Is it still helpful to consider a candidate’s appearance, gestures or behaviour?

The bad

Video interviewing tools today analyse facial expressions, body language, micro-movements such as eye twitches, and many other non-verbal elements.

These data points feed into candidate scores and, in some tools, shortlisting suggestions.

The problem arises when these scores are used to screen applicants for roles where factors like those listed above are not linked to the candidate’s ability to do the job. This makes the screening process unfair. 

Plus, it damages your chances of making the right hire. You could easily miss out on the best-suited talent with the right skill set simply because a brief facial expression or movement negatively affected their interview scores.

Relying blindly on AI-generated shortlisting suggestions can also disadvantage applicants who don't fit the AI's ideal mould of a suitable candidate. Algorithmic bias is certainly something to watch out for.

But having the recruiter assess video interviews manually and shortlist candidates isn’t an unbiased method either.

As humans, we have unconscious biases. We may think we can disregard certain information, but it almost always influences our decisions without our noticing. Something as immaterial as the interviewee's choice of outfit or perceived age could end up impacting who you progress to the next round.

To truly address bias, we must closely examine the human and technological data we use when hiring and question whether it's serving our purpose.

The solution

1. Only collect and use information relevant to the role.

If you can screen candidates without seeing them, do it. If assessing candidate appearance isn’t essential during the screening process, remove visual cues entirely where possible.

But even then, elements of voice such as accent, tone or speech impediments can introduce bias.

This is where tools like Curious Thing's AI voice interviews are advantageous. By assessing only what the candidate says, not how they say it, they give everyone a fairer chance and allow for objective evaluation.
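As a rough illustration, here is a minimal Python sketch of content-only screening. Everything in it is hypothetical, not Curious Thing's actual implementation: it scores a transcript purely on keyword coverage against a role rubric, so accent, tone and pacing never enter the calculation.

```python
# Hypothetical sketch of content-only screening: the audio is assumed to
# have been transcribed upstream and then discarded, so scoring can only
# ever see the words themselves.

def score_transcript(transcript: str, rubric_keywords: set[str]) -> float:
    """Score a response by how many rubric topics it covers."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return len(rubric_keywords & words) / len(rubric_keywords)

# Two candidates giving similar answers in very different voices receive
# identical scores, because only the text is evaluated.
rubric = {"customer", "listen", "resolve"}
print(score_transcript("I listen first, then resolve the customer issue.", rubric))  # 1.0
```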

2. Hide identifying data.

Name, gender and other identity-related information is often required for documentation purposes. But it shouldn’t play a role in deciding which candidate should be shortlisted.

Using tools that let you hide this data from candidate profiles and results can reduce the chances of unconscious bias creeping in.
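As a sketch of what this can look like in practice, the snippet below masks identity fields before a profile reaches a reviewer. The field names are illustrative, not any particular ATS schema: the full record stays intact for documentation, and only the shortlisting view is anonymised.

```python
# Illustrative only: identity fields are kept on record but hidden in the
# view a reviewer uses for shortlisting.

IDENTIFYING_FIELDS = {"name", "gender", "age", "photo_url"}

def redacted_view(candidate: dict) -> dict:
    """Return a copy of the profile with identity fields masked."""
    return {k: "[hidden]" if k in IDENTIFYING_FIELDS else v
            for k, v in candidate.items()}

profile = {"name": "A. Candidate", "gender": "F", "age": 42,
           "photo_url": "https://example.com/photo.jpg",
           "interview_score": 0.87, "years_experience": 6}
print(redacted_view(profile))
# {'name': '[hidden]', 'gender': '[hidden]', 'age': '[hidden]',
#  'photo_url': '[hidden]', 'interview_score': 0.87, 'years_experience': 6}
```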

3. Let AI support decision-making, not drive it.

AI uses the information it is fed to recognise patterns in data points and arrive at an outcome, in this case shortlisting suggestions.

But unlike humans, it lacks a sense of judgement. 

When AI is allowed to call the shots, systematic errors in its algorithms, known as algorithmic bias, can result in unfair outcomes.

A viable alternative is to have AI analyse candidate performance equitably and use this data to guide hiring decisions. Curious Thing’s AI voice interviewer does just this, keeping the process human-led but making decisions easier and fairer with AI.
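To make the division of labour concrete, here is a minimal, hypothetical sketch (not Curious Thing's actual system) of a human-led workflow: the AI score orders the review queue, but nothing is rejected automatically and every candidate reaches a human.

```python
# Illustrative human-in-the-loop flow: the AI ranks, the human shortlists.

from dataclasses import dataclass

@dataclass
class Candidate:
    candidate_id: str
    ai_score: float  # e.g. rubric coverage from a voice interview

def prepare_review_queue(candidates: list[Candidate]) -> list[Candidate]:
    """Order candidates by AI score for the recruiter's convenience.

    Crucially, no one is filtered out here: every profile is reviewed.
    """
    return sorted(candidates, key=lambda c: c.ai_score, reverse=True)

queue = prepare_review_queue([
    Candidate("c-102", 0.64),
    Candidate("c-117", 0.91),
    Candidate("c-095", 0.78),
])
for c in queue:
    print(c.candidate_id, c.ai_score)  # the recruiter reviews all three
```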

Advancements in technology have made it possible to give every candidate a fairer chance and reduce bias at different stages of recruitment. The key is to examine your data source thoroughly: what information it captures, where you use it and how it impacts hiring. Just because something works doesn't mean it's the best fit, and the same holds true for data.