What role will these tools play in HR's future?
In part five of HRD’s ChatGPT series, we look at the importance of policies in AI and debate the role of HR in AI’s future.
Recently, a slew of companies have set limitations on ChatGPT – including Amazon, Verizon, Citigroup and Goldman Sachs. According to reports, some of these limitations come down to compliance concerns around new and largely untested AI.
But can employers really police these kinds of bans? And, more importantly, are they realistic in the long run?
Marlina Kinnersley, CEO of organizational success platform Fortay.ai, tells HRD that in order to remain legally and ethically compliant with new technology, leaders need to look at drafting AI policies.
“With the speed of information and ever-growing uses for how ChatGPT can improve our productivity and tackle tedious tasks, it's dangerous for employers not to have a generative AI policy,” she says. “In your ChatGPT or generative AI policy, provide guidance around permitted and prohibited technology uses, like idea generation and reviewing confidential client documentation or highly sensitive code for mistakes.”
These mistakes can be costly. How, where and why you store employee data is under scrutiny in Canada, especially since new electronic monitoring rules came into force late last year. As such, HR leaders are having to become experts in policy drafting – with a recent survey from Gartner finding that 48% of practitioners are currently drafting ChatGPT rule books.
“You'll want to identify the use cases that require an approval process by a designated expert and clearly outline the steps involved,” adds Kinnersley. “Periodic training, internal and external transparency to indicate content that ChatGPT generated, and a dedicated committee with an equity and inclusive lens will be vital to reducing risks, including enabling inequities, bias, and other forms of harm. A clear and comprehensive policy ensures generative AI technology's responsible and effective use.”
Are employers still wary of AI?
When you think of AI, you’re likely conjuring up images of Minority Report, I, Robot, or The Terminator – what seemed like sci-fi is now eerily close to reality. And while we’re not quite at Deep Space Nine level just yet, for employers it’s about weighing the possibilities against the potential risks.
“Generative AI is in its infancy, with many risks that need to be carefully thought through, with clear policies to mitigate adverse impacts to any person, brand, customer, or community,” warns Kinnersley. “Companies are concerned about the legal and reputational risks that ChatGPT can bring regarding data privacy and security, potential data breaches, ensuring compliance with data protection regulations, content inaccuracy, and perpetuating inequities and bias, leading to unethical outcomes.”
Having said that, employers can reduce these risks and potential adverse impacts of generative AI with periodic training, monitoring, and clear, robust policies and guidelines, she says. These policies are the groundwork of your future AI plans. Without them, leaders risk running afoul of privacy laws as well as irking employees.
AI as an arm of HR
The question on every HR leader’s mind right now is “What role will AI and ChatGPT play in HR’s future?” As with any new workplace tool, it’s important to walk before you run. However, what’s becoming increasingly clear is that ChatGPT, and by extension all workplace AI, is moving firmly into HR’s purview.
In 2017, research from CareerBuilder predicted that within five years, AI would be a regular component of HR. And while only seven percent of respondents said they believed AI could do the job of an HR leader, six years on it’s clear that segments of HR are already being automated by bots.
But what does this mean for already overstressed HR practitioners? Will HR departments have to take on yet more responsibilities – or will this partnership result in the formation of new, as yet unseen, roles?
“ChatGPT is already proving valuable to HR by improving efficiency by drafting or improving HR policies, procedures, job descriptions, and strategy plans,” says Kinnersley. “Automating the repetitive, tedious work will create more time to focus on human-centered work like compassion for sensitive people matters, leadership development, maintaining culture health, and creating inclusive and equitable environments for employees.
“The use of generative AI will continue to evolve, providing greater value to HR in the future as long as these tools are responsibly and ethically implemented.”