Recruiting with AI could lead to algorithmic bias

Author: Leon

A 2022 study from Cambridge University found that analysing candidates' voice patterns and physical characteristics, techniques the researchers likened to phrenology, is an unsuitable way of selecting applicants because it is too biased.

This calls into question AI's ability to perform important tasks that would otherwise be handled by humans, even accounting for how biased or subjective those humans can be.

AI has no agenda of its own when used as a hiring tool, but it tends to replicate discrimination, whether based on age, gender, disability or race, undermining efforts to hire fairly and equally.

The well-known Amazon case from a few years ago revealed how training data can produce bias: because the historical CVs the recruiting tool learned from came mostly from men, the system favoured male candidates and marked female candidates down as unacceptable.

Data is not the only place where bias can creep in; the algorithms themselves can be biased too.

Part of the problem is the trust managers place in AI without understanding that it is not, and may never be, perfect.

They are keen to introduce it to cut the cost of hiring managers and teams, believing it will also make the process more efficient.

Governments are starting to regulate against the bias coming from AI hiring systems, through measures such as New York City's Local Law 144 and the proposed EU AI Act. Even without regulation, a person discriminated against by such a system would have an equal-opportunity case. Yet 79 per cent of organisations employed automation, AI or a mixture of the two when choosing who to hire last year, without understanding the biases involved.
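To illustrate what such a bias audit involves, Local Law 144 requires comparing selection rates across demographic groups. The sketch below, using entirely hypothetical candidate data, applies the "four-fifths rule" commonly used in such audits: if one group's selection rate falls below 80 per cent of the highest group's rate, the tool may be having an adverse impact.

```python
# Minimal sketch of a selection-rate bias check (the "four-fifths rule").
# All candidate data below is hypothetical, for illustration only.
from collections import defaultdict

candidates = [
    # (group, was_selected)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = defaultdict(int)
total = defaultdict(int)
for group, was_selected in candidates:
    total[group] += 1
    selected[group] += was_selected  # True counts as 1, False as 0

# Selection rate per group, then each rate relative to the best-performing group.
rates = {group: selected[group] / total[group] for group in total}
best = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / best
    flag = "POTENTIAL ADVERSE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} ({flag})")
```

On this toy data, group_a is selected 75 per cent of the time and group_b only 25 per cent, giving group_b an impact ratio of 0.33, well below the 0.8 threshold, so the check flags it. A real audit would use actual hiring outcomes and legally defined demographic categories, but the underlying arithmetic is this simple, which is precisely why there is little excuse for not running it.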

Companies really need to educate themselves on the risks involved with AI, instead of blindly trusting that it will all work out for the best.

AI is no joke. It can do a great many things well, but removing humans from the business of hiring humans would be a mistake.
