Vigilance urged for HR leaders using generative AI tools

'They must ensure they're enforcing policies, educating employees on data security, and monitoring compliance,' says expert

HR leaders are being urged to exercise vigilance when using generative AI tools, given the risk of leaking confidential or sensitive information to the technology.

Jesse Todd, CEO of EncompaaS, said all HR-related data interactions with generative AI tools should be secure and follow established information governance guidelines to prevent leaks.

"Given the sensitive nature of data that HR leaders manage, they must ensure that they're adhering to data protection by enforcing policies, educating employees on data security, and monitoring compliance," Todd told HRD.

Previous research has shown that a growing number of businesses, including their HR departments, are integrating generative AI tools into their systems in a bid to save time and free up capacity for other tasks.

HR leaders using generative AI tools should work with their Risk, Compliance, and AI teams to ensure their use of the technology is in line with company guidelines and best practice, according to Todd.

"Regular audits and compliance checks, along with employee training on the secure use of these tools, can help ensure sensitive data is protected," he said.

Leaking sensitive information

The call for vigilance comes amid the risk of inadvertently feeding generative AI tools potentially sensitive or confidential data.

Last year, an employee at Samsung was reported to have accidentally leaked sensitive information while using ChatGPT.

Todd said cases of sensitive data leaks happen often due to "copying and pasting confidential information into AI prompts and exposing documents for analysis."

"Even seemingly harmless data uploads can contain metadata or context that, when combined with other data, reveals sensitive information. This can lead to unintentional exposure of confidential details," he warned.

To mitigate these risks, Todd advised that all data should be discovered, normalised, and de-risked before getting uploaded to generative AI models.

"Companies need to have policies and clear guidelines for using gen AI and adopt intelligent information management solutions that automatically find, organise, enrich, and de-risk organisational data so that it's ready for responsible use in gen AI tools," he said.
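The de-risking step Todd describes can be illustrated with a minimal sketch: masking recognisable personal data in text before it reaches a gen AI prompt. The patterns below (including the `EMP-` employee ID format) are hypothetical examples for illustration only; a real deployment would rely on a dedicated PII-detection and information-management solution rather than ad-hoc regexes.

```python
import re

# Hypothetical PII patterns -- illustrative only, not production-grade detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "EMPLOYEE_ID": re.compile(r"\bEMP-\d{4,}\b"),  # assumed internal ID format
}

def derisk_prompt(text: str) -> str:
    """Mask recognisable PII with labelled placeholders before the text
    is copied into a generative AI prompt."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise the grievance filed by EMP-10234 (jane.doe@example.com, +61 400 123 456)."
print(derisk_prompt(prompt))
# Summarise the grievance filed by [EMPLOYEE_ID] ([EMAIL], [PHONE]).
```

In practice this masking would sit inside the policy-driven tooling Todd mentions, applied automatically rather than left to individual employees copying and pasting.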

Information management policies should also cover "anonymization, encryption standards, regular security audits, and ethical use with clear consequences for policy violations."

"HR leaders can foster a data security culture by promoting awareness through regular training, leading by example, and integrating data protection practices into daily workflows. Recognition programs for adherence to security practices can also reinforce the importance of safeguarding information," he said.
