But many worry about ethical use, and there are real risks, says a lawyer offering tips for HR
Singapore HR leaders are embracing artificial intelligence (AI), judging by an Employment Hero report which found that 98% of HR practitioners already use AI tools within their HR software, and 80% of HR leaders predict greater AI use in the coming years.
While tasks such as identifying and reporting on employee data trends, and creating HR content such as job adverts, have Singaporean HR leaders excited about the future of AI, 65% are concerned about its ethical use within the HR industry.
Employee privacy (37%), lack of trust and transparency (29%), and lack of AI governance principles (27%) are the top three concerns selected by HR professionals.
Darren Grayson Chng, a Singaporean data and tech lawyer, told HRD that many challenges and risks apply to AI adoption generally, but a few are particularly relevant to the HR industry.
“There are not just risks to privacy, but also the risks of bias, unfairness, and discrimination, and the severity of the consequences can be significant where we are dealing with a person’s career and livelihood,” said Chng.
“If these risks eventuate, will the affected person even know that they have been subject to a decision made by AI that perhaps discriminates against them? If they do know, can they seek an explanation on how the AI works, and how the AI reached its decision? Can they contest the decision? Who is accountable for the AI’s decision?”
There are currently no clear legal standards or guidelines on the use of AI within the HR industry, a gap cited in the survey as one of the top three concerns. However, in July, Singapore’s Personal Data Protection Commission launched a public consultation on proposed advisory guidelines (AG) on the use of personal data in AI recommendation and decision systems.
“The AG aims to clarify how the personal data protection law applies to organisations’ collection and use of personal data to develop and deploy systems that are used to make decisions autonomously, or to assist a human decision-maker through recommendations and predictions,” said Chng.
While the government assesses AI-driven decision-making processes, Chng says there are concrete steps HR leaders can take to mitigate these concerns, such as understanding how the algorithms within their HR software work and ensuring they have been rigorously tested for issues like bias and discrimination. They should also audit AI systems regularly to confirm that their recommendations are fair and equitable; a simple example of what such an audit can check is sketched below.
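One widely used starting point for such an audit is comparing an AI tool's selection rates across applicant groups. The Python sketch below is a minimal illustration under assumptions, not any vendor's actual method: the group labels and outcome data are hypothetical, and the 0.8 threshold is the conventional "four-fifths rule" drawn from US employment practice.

```python
# Minimal sketch of one common fairness check: the "four-fifths rule"
# (disparate impact ratio) applied to hypothetical screening outcomes.
# Group labels and data here are illustrative, not from any real system.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs; returns selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 are a conventional red flag for adverse impact."""
    return min(rates.values()) / max(rates.values())

# Hypothetical output of an AI screening tool on 10 applicants.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_b", True), ("group_b", False),
]

rates = selection_rates(outcomes)
print(rates)                          # {'group_a': 0.8, 'group_b': 0.4}
print(disparate_impact_ratio(rates))  # 0.5 -> below 0.8, warrants human review
```

A check like this is only a first pass: passing the four-fifths rule does not prove a system is fair, and a failing ratio is a prompt for the kind of human review and contestability Chng describes, not an automatic verdict.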
But one of the most important factors in preventing bias, unfairness, and discrimination is the company’s culture.
“An organisation's culture and values are a blueprint as to how the organisation will handle AI, whether it will prioritise profits and monetary ROI over things like transparency, ‘explainability,’ fairness, security, and privacy, and whether AI is used safely and responsibly,” said Chng.
While new technology is exciting, Chng cautions against relying solely on AI to make decisions that affect human rights.
“For now, I would recommend limiting the use of AI, especially in cases where human rights are affected, to supporting human decision making; for example, in the form of providing a second opinion, rather than allowing AI to be the sole decision maker,” he added.