Can UK-based employers safely use AI tools?
5 May 2023 | Applicable law: England and Wales | 4 minute read
Employers and recruiters have a wealth of new tools at their disposal, powered by AI.
These include software that scans a candidate's CV, enriches it by trawling the web and social media for additional information on the candidate, and then scores the resulting profile to produce a recommendation for the recruiter.
Similarly, other companies develop and market tools that screen and score candidates during an automated video interview. Interviewees are typically awarded points for their eye contact, body language and attire and, more controversially, their facial expressions.
Finally, particularly over the course of the pandemic, we have seen a rise in tools aimed at monitoring employee engagement.
While tools like these have the potential to save substantial time, improve the recruitment and retention of valuable candidates and counter human bias, employers should exercise caution – not least because of incoming regulation on AI.
Lack of evidence
In the past few years, we have seen a rise in companies claiming that their AI algorithms can analyse and detect a person's emotions based on body language and facial expressions. In October 2022, the Information Commissioner's Office (ICO) warned companies to assess the risks of using "emotional analysis technologies" before adopting them.
The reasons for the ICO's warning are two-fold:
- emotion analysis inevitably involves the collection, storage and processing of a vast range of personal data which could include special category data; and
- as yet, 'emotion AI' technology is not fully backed by scientific evidence.
Emotion AI technology is also prone to bias and inaccuracy and, perhaps of greater concern, to discrimination and the potential claims that come with it. The ICO has noted that algorithms are not yet known to be capable of detecting subtler emotional cues. Depending on the data set used to train the algorithm in question, the software may miss the cultural context of tone and body language and fall back on stereotypical expressions of emotion.
Any AI interview screening tools should therefore be closely vetted by employers and recruiters before they are implemented.
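Vetting need not be purely contractual. For employers with in-house technical teams, one simple first step is to compare a tool's historical selection rates across demographic groups. The sketch below is a minimal, illustrative example of such a check: the column names and the 0.8 flagging threshold are hypothetical assumptions, and the check is a screening heuristic, not a legal test under the Equality Act 2010.

```python
# Illustrative sketch only: a first-pass disparity check on a screening
# tool's historical outcomes. The column names ("group", "shortlisted")
# and the 0.8 threshold are assumptions for this example, not a legal
# standard -- specialist advice should always be sought.
import pandas as pd

def selection_rate_ratios(df: pd.DataFrame,
                          group_col: str = "group",
                          outcome_col: str = "shortlisted") -> pd.Series:
    """Return each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Example: flag any group whose relative selection rate falls below 0.8.
outcomes = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "shortlisted": [1, 1, 0, 1, 0, 0],
})
ratios = selection_rate_ratios(outcomes)
print(ratios[ratios < 0.8])  # groups warranting closer scrutiny
```

A disparity flagged by a check like this does not itself establish discrimination, but it tells an employer where to ask the vendor harder questions before deployment.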
Issues relating to explainability and profiling
On a related note, companies developing and selling AI tools will often concede that they cannot explain why their AI system has recommended a particular candidate or course of action. This is known as the 'black box' problem.
This potential lack of explainability can conflict with requirements under the General Data Protection Regulation (GDPR) and the UK's Data Protection Act 2018, and can leave an organisation vulnerable to accusations of discrimination.
In particular, under Article 22 of the GDPR, individuals have the right not to be subject to a solely automated decision that produces legal or similarly significant effects – save for a few exceptions, where safeguards must be in place (e.g. the right to obtain human intervention).
If, for example, candidates applying for a job are subject to solely automated decision-making in order to narrow the pool for a second (human) interview, employers and recruiters must make this clear to candidates in order to comply with their obligations under the GDPR and, in any event, provide the relevant safeguards.
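For employers whose technical teams build or configure such screening workflows, one practical safeguard is to route the model's output to a human reviewer rather than letting it finalise decisions on its own. The sketch below is a hypothetical illustration of that design pattern, not a statement of what compliance requires in any given case; the class, score threshold and review queue are all assumed names.

```python
# Hypothetical sketch: routing model outputs to a human reviewer so that
# no candidate is rejected by a solely automated decision. The dataclass,
# the 0.5 threshold and the review queue are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ScreeningDecision:
    candidate_id: str
    model_score: float          # e.g. a 0.0-1.0 suitability score from the tool
    recommendation: str = ""    # the model's suggestion, never the final word
    final_decision: str = ""    # set only by a human reviewer

review_queue: list[ScreeningDecision] = []

def screen_candidate(candidate_id: str, model_score: float) -> ScreeningDecision:
    """Record the model's recommendation but defer the decision to a human."""
    decision = ScreeningDecision(
        candidate_id=candidate_id,
        model_score=model_score,
        recommendation="advance" if model_score >= 0.5 else "do not advance",
    )
    review_queue.append(decision)  # every case is queued for human review
    return decision

def human_review(decision: ScreeningDecision, reviewer_verdict: str) -> None:
    """A named human reviewer makes and owns the final call."""
    decision.final_decision = reviewer_verdict
```

Note that the human involvement must be meaningful: the ICO has indicated that a token, rubber-stamp review is unlikely to take a decision outside the 'solely automated' category.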
AI regulation – a tale of two incoming approaches
The EU AI Act is expected to be finalised shortly and will have extra-territorial effect: it is concerned with whether the outputs produced by an AI system are used in the EU or affect individuals within the EU, regardless of where the company developing and selling the tool is based.
Where employers and recruiters are screening candidates based in the EU, they should be mindful that they may fall within the scope of regulations that are more stringent than the UK's proposed pro-innovation framework.
As for the UK, in late March 2023 the Government published its pro-innovation AI framework. The Government proposes a principles-based approach, leaving individual regulators to issue non-binding best-practice guidance. Although less prescriptive than the upcoming EU AI Act, the UK framework is likewise aimed at regulating the use and deployment of AI rather than the technology itself.
Employers will be interested to note that the Government used the example of a fictional company providing recruitment services to illustrate how different regulators will come together to address cross-cutting issues (such as explainability and fairness) arising from the use of AI in recruitment and employment.
For the fictional company (and for the very real employers and recruiters falling within the scope of the framework), following all applicable guidance and regulation should mean they can deploy AI tools and services responsibly.
Best practice going forwards
Employers should vet closely any AI tools used for candidate recruitment and retention, and take a holistic approach to compliance.
In particular, it will be important for employers to ensure they comply with applicable data protection regulations, privacy and human rights law – together with their obligations under the Equality Act 2010.
In the UK, individual regulators – like the ICO – are expected to issue further detailed guidance to help businesses navigate the responsible use of AI.
Whilst the principles underpinning the UK's AI framework will be non-binding initially, regulators may in the near future use powers within their remit to monitor compliance more closely – in order to build public trust in AI, one of the UK Government's key objectives.
Employers should therefore make sure they are up to date on guidance issued by relevant regulators such as the Equality and Human Rights Commission, the ICO and the Employment Agency Standards Inspectorate.
Regulators across the world are trying to keep up with the pace of technology; employers must ensure they keep up with regulators too.