Not all AI recruiting technology is foolproof. While some AI tools help prevent human bias from creeping into recruitment processes, others can perpetuate it. For example, Amazon used a CV matching tool that was later discovered to favour male candidates. This does not mean all AI recruiting tools are inherently faulty; it simply means faults can exist, and we need to look out for them and assess the tools we might use accordingly. That means asking potential vendors what steps they take to ensure their AI recruiting technology is fair and free from bias. In this blog, we discuss how an AI recruiting tool might replicate human bias and explain what we do at Curious Thing to help prevent this.
AI technology can discriminate against candidates because of biased data. This happens when the data fed to an algorithm reflects prejudices that already exist. For example, Amazon used past applicant data to inform its CV matching algorithm. Because Amazon had hired more men than women in the past, the AI was trained to reflect this (not on purpose, of course!).
Another problem is that the data collected might not represent reality, meaning the AI algorithm is working with limited information. For example, an algorithm might look for desirable employee attributes by modelling the attributes of star employees at certain companies. If those companies lack diversity, however, the model employees sampled will reflect only a narrow slice of the population. This limits the opportunity for different but equally suitable candidates to be recognised. In short, AI will learn the patterns shown in its data, including any embedded biases, so we must be vigilant about the quality of the data we use to make decisions.
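To see how a model can inherit bias from history, here is a deliberately tiny sketch in Python. The records and the `attended_club_x` attribute are entirely hypothetical; this is not any vendor's real algorithm, just an illustration of how a pattern baked into past decisions shows up when a system learns from them.

```python
# Toy illustration: historical hiring data can encode bias that any
# system trained on it will faithfully reproduce.

# Hypothetical past-hire records: (years_experience, attended_club_x, hired).
# "attended_club_x" stands in for any attribute that happens to correlate
# with gender or background in the historical data.
past_hires = [
    (5, 1, 1), (6, 1, 1), (4, 1, 1),
    (5, 0, 0), (6, 0, 0), (7, 0, 0),
]

def hire_rate(records, club_value):
    # Fraction of candidates with the given attribute who were hired.
    matching = [hired for (_, club, hired) in records if club == club_value]
    return sum(matching) / len(matching)

# Despite similar experience levels, the history hires club members
# every time and non-members never. A model fitted to this data would
# learn the club attribute as a strong predictor of "hireable".
print(hire_rate(past_hires, 1))  # 1.0
print(hire_rate(past_hires, 0))  # 0.0
```

The point of the sketch is that nothing malicious is required: the model simply learns whatever pattern the data contains, which is why auditing training data matters.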
Similarly, the way we train an AI can create biased technology. For instance, if an AI rates people's voices and appearance and uses this information to create candidate scores, it is because people trained it to look for and score those attributes. It is therefore important to ask the companies you are considering working with how their algorithms work and what inputs are used to make decisions.
At Curious Thing, we do not want our AI to draw conclusions beyond what it is designed to determine. That's why we do not use data from past hiring decisions in our algorithms. All the data we collect is standalone and related to a specific interview round. A candidate may complete multiple interviews with Curious Thing, and our AI will analyse each interview individually; it does not link data across interviews to form patterns that might introduce bias.
Furthermore, our digital interviewer is designed to analyse what candidates say, not how they say it. We use a natural language processing approach, which means our AI analyses candidate responses by assessing sentences and phrases, not just keywords. In short, we aim to interview candidates blindly and analyse only their responses. Our AI is not trained to identify gender, ethnicity or age. Candidates complete a 10-15 minute interview, and our AI judges how well they performed in that round only. We present human recruiters with quantitative and qualitative information that allows them to make shortlisting decisions more efficiently. It is recruiters who decide which metrics to prioritise, not our AI. For example, recruiters can choose to look at how all candidates scored on motivation and business acumen.
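To make the sentences-versus-keywords distinction concrete, here is a minimal sketch in plain Python. It is purely illustrative: the sample response, phrase list, and function names are hypothetical, and real NLP pipelines are far more sophisticated than string matching.

```python
# Illustrative contrast between naive keyword spotting and a simple
# phrase-aware check. Not a real interview-scoring algorithm.

response = "I would not avoid difficult conversations with my team."

def keyword_hits(text, keywords):
    # Naive approach: count keyword occurrences, ignoring context.
    words = [w.strip(".,") for w in text.lower().split()]
    return sum(1 for w in words if w in keywords)

def phrase_hits(text, phrases):
    # Phrase-level approach: match whole phrases, so negation
    # ("would not avoid") is not mistaken for avoidance.
    lowered = text.lower()
    return sum(1 for p in phrases if p in lowered)

# Keyword spotting sees "avoid" and would flag the response...
print(keyword_hits(response, {"avoid"}))  # 1
# ...while the phrase-level check finds no statement of avoidance.
print(phrase_hits(response, {"i avoid difficult conversations"}))  # 0
```

Even this toy example shows why assessing phrases in context gives a fairer reading of a candidate's answer than counting isolated words.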
Recruiters also have the option to listen to entire interviews or certain sections if they wish to do so. Our process is all about combining AI capability with human intelligence.
We design our product to keep humans in control of the process, using AI to make decisions easier, not make decisions for you.
“At Curious Thing we don’t want AI to draw conclusions beyond what it is designed for. The best decisions come from humans using intelligent data to draw conclusions. Our process is simple: we collect the data and our recruiters make the decisions.” - Sam Zheng, CEO of Curious Thing
Overall, we consistently review our practices and algorithms with a view to preventing bias from creeping into the process. We’re very proud that our product helps recruiters remove bias from the recruitment process, and we work hard to ensure it continues to help solve problems like this. If you have any more questions about how we deal with bias, please get in touch with our team; we are more than happy to discuss this important issue with you.