Everywhere you look, there’s AI! Adopting these tools in HR and recruitment promises to automate and simplify processes, bringing greater efficiency, scalability, and less room for human error.
Nevertheless, such technology also poses serious risks, including security vulnerabilities, the entrenchment of existing biases, and discriminatory job advertisements.
On 25 March 2024, the Department for Science, Innovation & Technology issued new guidance on ‘Responsible AI in Recruitment’.
It helps employers mitigate the ethical risks of using AI in recruitment and hiring processes, comply with relevant legal requirements, and align with the government’s AI regulatory principles.
The guidance focuses on ‘assurance mechanisms’ for organisations to use when procuring AI systems from suppliers and when deploying AI in the organisation.
These mechanisms include completing impact assessments, risk assessments and bias audits, creating an AI governance framework within the organisation, and carrying out different types of testing of the AI systems.
The guidance also breaks down these assurance mechanisms by reference to the different stages of procuring and deploying AI, providing suggested questions for the organisation to ask itself at each stage.
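The guidance does not prescribe a specific method for the bias audits it mentions. As a minimal, purely illustrative sketch of the kind of check such an audit might run (hypothetical data and thresholds, not part of the DSIT guidance), the Python snippet below computes AI shortlisting rates per candidate group and flags any group whose rate falls below four-fifths of the highest rate, a commonly used adverse-impact benchmark.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (candidate_group, shortlisted_by_ai)
outcomes = [
    ("group_a", True), ("group_a", False), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count applicants and shortlistings per group
totals, shortlisted = defaultdict(int), defaultdict(int)
for group, passed in outcomes:
    totals[group] += 1
    if passed:
        shortlisted[group] += 1

# Selection rate per group and adverse-impact ratio against the best-performing group
rates = {g: shortlisted[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold (illustrative only)
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```

In practice, a bias audit under the guidance would draw on the organisation’s own recruitment data and chosen fairness criteria; the point of the sketch is simply that such audits turn on measurable differences in outcomes between groups.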
Recent publications indicate that organisations taking the radical approach of banning AI outright have not eradicated its use. Instead, many employees work around the ban by using their own systems and transferring AI-generated work onto their work computers, potentially introducing significant security risks.