Can algorithms inject objectivity into the hiring process?


Managers tend to hire people in their own image, but are computer algorithms more objective?

According to a 2015 report from the Chartered Institute of Personnel and Development, recruiters have unconscious bias towards hiring candidates they like and want to spend time with outside of work.

This doesn’t mean that the best candidate gets chosen, so can the latest hiring software do any better?

Automated computer processes, or algorithms, are now commonly used in the initial screening of candidates, zipping through volumes of CVs and application forms, and tirelessly scoring psychometric tests.

Such automated systems are consistent, quicker, more efficient and more cost-effective than humans, says Neill Thompson, senior lecturer in occupational and organisational psychology at Northumbria University. “For example, an algorithm-based system used for a graduate scheme in the finance sector where hundreds of applicants apply will immediately sift out those not meeting a given threshold.”
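
To make that concrete, a sift of this kind can be sketched in a few lines of code. The fields, weights and pass mark below are invented for illustration rather than drawn from any real system:

```python
# Hypothetical illustration of an automated CV sift: score each
# application and keep only those meeting a minimum threshold.
# The fields, weights and pass mark are invented for this sketch.

MIN_SCORE = 70  # illustrative pass mark

def score_application(app: dict) -> int:
    """Combine a psychometric test result with simple CV checks."""
    score = app["psychometric_score"]  # e.g. 0-100 from an online test
    if app["degree_class"] in ("first", "2:1"):
        score += 10
    if app["relevant_experience_years"] >= 1:
        score += 5
    return score

def sift(applications: list) -> list:
    """Return only the applications that meet the threshold."""
    return [a for a in applications if score_application(a) >= MIN_SCORE]

applicants = [
    {"name": "A", "psychometric_score": 68, "degree_class": "2:1",
     "relevant_experience_years": 2},
    {"name": "B", "psychometric_score": 55, "degree_class": "2:2",
     "relevant_experience_years": 0},
]
print([a["name"] for a in sift(applicants)])  # -> ['A']
```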

Best for the job?

Algorithms based on artificial intelligence (AI) can also analyse facial expressions, choice of words and speech patterns in video interviews, attempting to measure engagement, motivation and other indicators of high-performance behaviour, says Thompson.

Theoretically, they should therefore predict which candidates will perform well in the future, but Thompson says this is questionable, at least for now. “It will be another five years or so before we can compare candidates selected through algorithms against equivalent candidates selected through traditional methods to be sure that those predictions are accurate.”

Chris Rosebert is head of Data Science & AI at technology recruiter Networkers. He says: “Computer algorithms have the capability to select the best candidates for the job from a technical perspective, but they are notoriously challenged when it comes to accurately gauging communication and teamwork skills.”

Martin Ewings, director of Specialist Markets at recruitment company Experis, points out that recruitment is just as much about finding someone who is the right cultural fit as it is about finding someone with the right skills and experience. “The ‘right fit’ is difficult to incorporate into an algorithm, so the human element of hiring is, and will continue to be, vital.”

Humans build algorithms in the first place, so what happens when they get it wrong? Stephen Cuppello, psychology data analyst at psychometric testing company Thomas International, says that the data input and coding need to be correct from the start for algorithms to work properly. “It is people who write the job description. Ill-defined roles and unsupported assertions as to what’s important mean that algorithms will easily find the wrong candidate. The old adage ‘garbage in, garbage out’ is key to keep in mind.”

Unconscious bias

There is some evidence that when properly designed and applied, algorithms can inject objectivity into the hiring process.

“For example, scoring of performance in assessment centre activities has been shown to be more accurate when performed by computers rather than by humans,” says Thompson.

But unconscious bias is still an issue. Rosebert says: “Unconscious bias of human recruiters gets replaced with conscious machine bias. We see algorithms falling into analytical patterns as a result of their programming – they can end up stereotyping just as humans do. We are miles away from a truly cognitive machine that is fool-proof.”

Algorithms also tend to mimic human bias when they analyse and reproduce previous screening and hiring patterns, says Kate Allen, managing director at financial recruiter Allen Associates.

“This was shown in a Vodafone pilot. They wanted to employ more women so they removed gender from their selection criteria. The pilot data showed that, even though plenty of women were applying, they just weren’t getting shortlisted for interview. The algorithms used people analytics based on previous hiring behaviours to predict that other (presumably male) candidates were a better fit for the organisation,” Allen says.
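
The mechanism Allen describes is easy to reproduce. In the hypothetical sketch below, a model is trained on past hiring decisions with gender removed, yet a correlated feature acts as a proxy and the old pattern survives; all data and feature names are fabricated for illustration:

```python
# Hypothetical illustration: a model trained on past hiring decisions
# reproduces historical bias even with gender removed, because a
# correlated feature (here, time at a male-dominated employer) acts
# as a proxy. All data below is fabricated.
from sklearn.linear_model import LogisticRegression

# Features: [test_score, worked_at_incumbent_firm]; gender is NOT a feature.
X_train = [
    [70, 1], [80, 1], [85, 1], [90, 1],  # historically hired
    [70, 0], [80, 0], [85, 0], [90, 0],  # historically rejected
]
y_train = [1, 1, 1, 1, 0, 0, 0, 0]  # past decisions track the proxy exactly

model = LogisticRegression().fit(X_train, y_train)

# An excellent candidate without the proxy attribute is ranked below
# a weaker candidate who has it.
print(model.predict_proba([[95, 0]])[0][1])  # low "hire" probability
print(model.predict_proba([[65, 1]])[0][1])  # higher, despite the lower score
```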

The technology has its blind spots in the finance sector, too, where the gender imbalance at the top is still heavily weighted in favour of men.

Allen says: “The technology relies heavily on applicants using certain keywords to describe their personality traits, for example ‘competitive’, but research shows that women are less inclined to use them. This means that the technology may well restrict their chances.”
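
As a sketch of how that keyword reliance plays out, consider a screener that simply counts “high-performer” words in a personal statement; the keyword list here is invented:

```python
# Hypothetical sketch of keyword reliance: a screener scores a personal
# statement by counting "high-performer" keywords. A candidate who
# describes the same traits in different words scores lower.
# The keyword list is invented for illustration.

KEYWORDS = {"competitive", "ambitious", "driven", "leader"}

def keyword_score(statement: str) -> int:
    """Count how many screening keywords appear in the statement."""
    words = {w.strip(".,").lower() for w in statement.split()}
    return len(KEYWORDS & words)

print(keyword_score("A competitive, driven leader."))             # 3
print(keyword_score("I thrive on challenge and enjoy winning."))  # 0, same traits
```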

Recruiting for diversity

It does, however, appear that removing conscious human bias from algorithms’ selection criteria is a step in the right direction towards greater workplace diversity.

“Algorithms do have the potential to help employers find candidates that might go unrecognised by current hiring processes,” says Rosebert.

Deloitte were the first large British business to use algorithms to access a more diverse talent pool. Two years ago they adopted a contextual recruitment system for their graduate and school-leaver intake, having recognised that the best people don’t necessarily come from the best universities or from certain socio-economic backgrounds.

The data on candidates’ economic background and personal circumstances allows the firm to make more informed choices about candidates by considering the context in which they gained their academic achievements. The system also flags disadvantaged candidates who nevertheless performed exceptionally well at school or university.
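
A contextual adjustment of this kind can be illustrated with a toy calculation; the figures and threshold below are invented, and Deloitte’s actual system is considerably more sophisticated:

```python
# Hypothetical sketch of contextual recruitment: compare a candidate's
# grades with the average at their school, so strong performance at a
# low-performing school is flagged. All figures are invented.

FLAG_THRESHOLD = 50  # illustrative margin above the school average

def contextual_score(candidate_points: int, school_avg_points: int) -> int:
    """Points above (or below) what the candidate's school typically achieves."""
    return candidate_points - school_avg_points

candidates = [
    {"name": "A", "points": 320, "school_avg": 340},  # good school, average result
    {"name": "B", "points": 310, "school_avg": 250},  # outperformed their school
]

for c in candidates:
    delta = contextual_score(c["points"], c["school_avg"])
    flag = " <- flag: exceptional in context" if delta >= FLAG_THRESHOLD else ""
    print(f'{c["name"]}: raw {c["points"]}, context-adjusted {delta:+}{flag}')
```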

Algorithms can certainly play an important role in the hiring process. “But the challenge is to ensure they enhance rather than hinder the search, selection and appointment stages,” concludes Allen.

Iwona Tokc-Wilde is a business journalist.
