The Great Resignation: can AI help businesses with recruitment?

by Joseph George, MD, Dufrain

Artificial Intelligence (AI) was a term coined in 1950s America, but it wasn’t until the 1980s that we saw the first real “boom” of discussions on the subject globally. Hollywood and literature exploded on the topic, dreaming up the likes of Star Wars’ C-3PO and 2001: A Space Odyssey’s HAL 9000, which fed into the psyches of employees who feared they would soon be replaced by robots.

In more recent years, however, the conversation has turned. AI is now seen, and used, as a tool to enhance a business’s day-to-day operations: automating tasks for efficiency and removing the risk of human error where appropriate. AI is simply an evolution of traditional technology – if trained with accurate data, it helps businesses streamline everyday processes at lower cost.

One area where AI has taken off is recruitment. Using technology to increase the efficiency, reliability and accuracy of recruitment processes has become even more important in recent years, as Brexit and the Coronavirus pandemic sparked a mass resignation of professionals across all sectors. Financial and professional services in particular are feeling the pinch.

Potential bias

As a basic example, AI can be trained to sift through applications for a given role, pulling out candidates it deems to hold certain qualifications by identifying key words. The intention is that this removes the potential subconscious bias of a human-led process.
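
To make the mechanism concrete, the sketch below is a deliberately minimal, hypothetical illustration of keyword-based sifting. The role, keywords, threshold and applications are invented for the example; any real screening system would be considerably more sophisticated.

```python
# Illustrative sketch only: a minimal keyword-based CV screen.
# The keywords, threshold and applications below are hypothetical.

REQUIRED_KEYWORDS = {"python", "sql", "data modelling"}
MIN_MATCHES = 2  # assumed threshold for shortlisting

def shortlist(applications):
    """Return applicants whose CV text mentions enough of the required keywords."""
    selected = []
    for applicant, cv_text in applications.items():
        text = cv_text.lower()
        matches = sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)
        if matches >= MIN_MATCHES:
            selected.append((applicant, matches))
    # Rank by number of keyword matches, best first
    return sorted(selected, key=lambda pair: pair[1], reverse=True)

applications = {
    "candidate_a": "Five years of Python and SQL experience in data modelling.",
    "candidate_b": "Background in marketing and events management.",
}
print(shortlist(applications))  # -> [('candidate_a', 3)]
```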

However, it is important to note that AI-driven algorithms are only as good as their training. They learn from the data fed into them, including personal information and past examples to be replicated (or not, as the case may be).

Advanced artificial intelligence uses machine learning to mimic human behaviour. Achieving this effectively takes large volumes of well-structured data; without it, there is a risk that the system unintentionally learns a human’s unconscious bias by following the same trends present in the data. Well-publicised cases have shown recruitment algorithms favouring candidates based on gender, background or upbringing simply by mimicking previous hiring trends. It is vitally important to get this right if we are going to deploy it.
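
As a toy illustration of how this happens, the sketch below trains a simple model on fabricated historical hiring decisions in which graduates of one university were almost always favoured. The data and field names are invented, but the point stands: the model reproduces the pattern it was given rather than assessing merit.

```python
# Toy illustration with fabricated data: a model trained on biased
# historical hiring decisions simply reproduces that bias.
from sklearn.tree import DecisionTreeClassifier

# Historical records: [years_experience, attended_university_x]
# In this invented history, university-X graduates were nearly always hired,
# regardless of experience - a proxy for background or upbringing.
X_train = [[2, 1], [1, 1], [3, 1], [8, 0], [9, 0], [7, 0]]
y_train = [1, 1, 1, 0, 0, 0]  # 1 = hired, 0 = rejected

model = DecisionTreeClassifier().fit(X_train, y_train)

# A highly experienced applicant from a different background is rejected,
# while a less experienced university-X graduate is accepted.
print(model.predict([[10, 0], [1, 1]]))  # -> [0 1]
```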

The role of data

Establishing sound data processes is crucial to reducing the risk of discrimination. That means ensuring only the correct type of data is used in AI systems. Data should be collected only from authoritative sources, in compliance with GDPR and other privacy laws, and only the highest-quality data relevant to the task should be used. It should also be anonymised where possible, with identifiers that could perpetuate bias kept to a minimum. It is down to the user, in this case the employer, to make sure their data is in a fit state for the technology to work ethically and effectively. The reporting and transparency requirements driven by regulators across different industries play a big part here.
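
A minimal sketch of what this can look like in practice is below. The field names are assumptions, but the principle is to strip direct identifiers and bias-prone attributes from applicant records before they ever reach an algorithm.

```python
# Minimal sketch (assumed field names): strip direct identifiers and
# bias-prone attributes from applicant records before they reach the model.

IDENTIFYING_FIELDS = {"name", "email", "date_of_birth", "address"}
BIAS_PRONE_FIELDS = {"gender", "nationality", "photo_url"}

def anonymise(record: dict) -> dict:
    """Keep only fields relevant to the role; drop identifiers and sensitive attributes."""
    excluded = IDENTIFYING_FIELDS | BIAS_PRONE_FIELDS
    return {key: value for key, value in record.items() if key not in excluded}

applicant = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "gender": "F",
    "skills": ["python", "sql"],
    "years_experience": 6,
}
print(anonymise(applicant))  # -> {'skills': ['python', 'sql'], 'years_experience': 6}
```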

Financial institutions should not be shy in outlining what data algorithms are processing, and how they are doing so. That way, all parties can be confident that ethical standards are being met.

Less time, more results

The reward of doing all of this properly is tangible. Not only can it make the process more efficient, it also frees up time for recruiters to engage with candidates, build relationships and improve the overall experience, helping the business stand out.

In the environment of ‘The Great Resignation’ and high competition, AI can also help businesses find higher-quality candidates more quickly, and from a larger pool. By leveraging machine learning, substantial data pools can be searched in seconds to identify candidates who meet the search criteria. This technology is especially helpful in the recruitment of what are known as “passive candidates” – people who are not actively looking for a job but may fit the bill.
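
As a rough, hypothetical sketch of this kind of matching, the example below ranks a small pool of invented candidate profiles against a role description using simple text similarity. A production system would work over far larger pools and much richer data.

```python
# Illustrative sketch with invented profiles: rank a pool of candidates,
# including passive candidates, against a role description by text similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

role = "Data engineer with Python, SQL and cloud data pipeline experience"
profiles = {
    "candidate_a": "Python and SQL developer building cloud data pipelines",
    "candidate_b": "Retail manager with strong customer service background",
    "candidate_c": "Data analyst, SQL reporting, some Python scripting",
}

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform([role, *profiles.values()])
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Best matches first
ranked = sorted(zip(profiles, scores), key=lambda pair: pair[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.2f}")
```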

Once potential candidates have been sourced, humans can step in to begin the vetting process and ensure that no bias has crept in. The more the technology is used, refined and corrected, the better the results will be each time.

It is vitally important that the data is good, but while we are still refining AI processes, we must also ensure that a diverse workforce is in place to decide what good looks like, what data is being used and, ultimately, when to override a decision made by an algorithm. Used this way, the way in which AI is trained will get steadily better over time.

Clearly, artificial intelligence can improve recruiting efficiency and effectiveness. It can scan for key words and eliminate people who may not fit specific criteria, but the human element cannot be replaced, and AI is not the sole solution to increased proficiency.

Used alone, it may surface data that proves unhelpful, it will miss the human aspects of cultural fit, and it may overlook qualified candidates whose CVs undersell them. Technology must be used alongside human intervention to ensure the best recruitment process across the board.
