'Just because the chatbot says it, doesn't make it so'
In part three of HRD’s ChatGPT series, we look at the legal challenges circling AI in the workplace.
Following multiple media interviews with leading robotics and AI experts, there’s been more conjecture around AI’s place in society. In the business world, CEOs and HR leaders are still wondering what role AI will ultimately play in their organizations – is it a case of if or when ChatGPT will come a-knocking?
HRD spoke with Serena Huang, Founder of Data with Serena and Chief Data Officer at ABE.work (formerly global head of people analytics, visualization and HR technology at PayPal), about AI’s rising prominence in corporate strategy – and why we need to be wary of Shiny Object Syndrome.
“There’s a lot of potential benefits from ChatGPT – especially around automating administrative HR functions, getting rid of the tedious tasks, and freeing up practitioners’ time,” she tells HRD.
“But just remember, not everything online is validated. Who’s to say it’s 100% accurate? Or it’s 80% accurate? Or it’s a complete lie? I think we need to be careful when taking answers from AI – which a lot of people aren’t doing right now.”
Huang, like many HR leaders, is quick to warn organizations against rushing in blindly. As with most new tech, the boardroom tends to get excited. And while there are many questions still surrounding AI’s place in HR, there’s no denying the possibilities seem endless.
“I think one of the biggest areas ChatGPT will help with is job applications – HR leaders could use the tech to write job adverts as well as helping candidates craft their resume. I wouldn't say it’ll replace resume or career coaches, but for people who either cannot afford to or who don't know how to get some professional help, this could be useful,” she says.
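To picture what that looks like in practice, here’s a rough sketch of the kind of prompt a recruiter might assemble before handing the drafting over to ChatGPT – the role details below are invented purely for illustration, not drawn from Huang’s examples:

```python
# Hypothetical prompt template for drafting a job advert.
# The role details are invented for illustration only.
role = {
    "title": "Payroll Specialist",
    "location": "Toronto (hybrid)",
    "must_haves": ["3+ years payroll experience", "familiarity with ADP"],
}

prompt = (
    f"Write an inclusive, plain-language job advert for a {role['title']} "
    f"based in {role['location']}. "
    f"Requirements: {', '.join(role['must_haves'])}. "
    "Avoid jargon and gendered wording, and keep it under 200 words."
)

# The prompt would then go to ChatGPT; a recruiter still reviews
# the draft before it is published anywhere.
print(prompt)
```

The value is in the division of labour: the human supplies the verified facts, and the model only handles the wording.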
However, this too comes with its own set of unique challenges.
“When we adopt new AI, there's a lack of knowledge – and when it's a shiny object, we don't always consider whether or not what the tech is telling us is true,” says Huang.
Am I legally responsible for my AI’s mistakes?
This is where critical thinking comes in – as well as issues around compliance and deception. Huang brought up the example of a candidate lying on their CV. If they’ve used ChatGPT for that resume – and have been found out – who’s to blame? Is it the candidate or the bot?
Similarly in HR, if practitioners start using AI internally and something goes wrong, will the HR leader be held legally accountable? Or is it the robot’s fault?
“Employers need to take responsibility for the entire organization,” says Mike MacLellan, partner at Ontario-based law firm CCPartners. “If you’re putting any kind of faith into a computer program, you’re ultimately responsible for the output. It’s no different from putting your employee on a forklift in the warehouse – that piece of machinery needs to be in working order.
“And if something goes wrong, the employer is liable.”
In this new, burgeoning, AI-driven world, there are no clear policies on how to proceed. New York City is leading the way with new legislation to regulate AI’s role in hiring. Local Law 144 will require HR departments to properly vet their AI hiring tools, inform jobseekers that an automated tool is being used, and tell candidates which characteristics the AI will be analyzing.
Local Law 144 is the first of its kind – but as AI’s place in organizational strategy continues to grow, it surely won’t be the last.
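The audits Local Law 144 calls for hinge on impact ratios – each group’s selection rate compared against the most-selected group’s. As a minimal sketch of the kind of check an HR team might run on a screening tool’s output (the data and field names here are hypothetical, and the 0.8 threshold comes from the US EEOC’s “four-fifths” guidance rather than the law’s own text):

```python
# Minimal sketch of an adverse-impact check on an AI screening tool's
# output. Assumes (group, selected) records; the data is hypothetical.
from collections import defaultdict

def impact_ratios(records):
    """Selection rate per group, divided by the highest group's rate."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += int(was_selected)
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Example: screening outcomes from a hypothetical AI tool.
outcomes = [("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False)]

for group, ratio in impact_ratios(outcomes).items():
    # The "four-fifths rule" (EEOC guidance) flags ratios below 0.8
    # as potential evidence of adverse impact.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```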
“I would take that a step further,” adds MacLellan. “For example, if an employer is using AI to screen candidates, and the tech is routinely screening out ‘non-Canadian’ sounding names, then I could see the employer being responsible and liable for discrimination.
“It goes back to my original point – that the employer has the ultimate responsibility here. You can’t turn a blind eye to the equipment or processes you’re using. Any business that’s relying on AI has to understand it's a piece of equipment that they use for the benefit of the business. However, as with any other piece of equipment or process or software, they have to ensure it's working properly.”
Will HR become a co-worker of ChatGPT?
With job losses abundant in the tech sector, news of an intelligent AI is bound to raise suspicions. Employees are understandably anxious about losing their jobs, with a report from Insight Global claiming that three in four employees are worried about being made redundant.
But it’s not a case of people versus robots – it’s more that employers need to upskill their people in AI in order to make the best of it.
“I do think that a lot of HR departments might be getting too excited,” adds Huang. “They’re thinking, ‘Oh, I have all these budget issues, I can’t hire as many people, I’ll just deploy some bots and get everything done.’ That’s not the case.”
A real human needs to be present in the process before any major decisions are made, which could, down the line, result in new, specialist roles appearing within HR departments. “We might need new roles such as Prompt Engineer, who helps make best use of the generative AI by providing the right prompts. ChatGPT is more likely to answer prompts correctly if it has high-quality sources of data but will answer incorrectly on topics with lots of misinformation,” says Huang regarding the future of work.
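To make the Prompt Engineer idea concrete, here is a minimal sketch of structured prompting that grounds the model in a vetted source, in line with Huang’s point about data quality. It uses the openai Python client for illustration; the model name, the policy document and the wording are assumptions, not anything Huang prescribes:

```python
# Minimal sketch of prompt engineering: grounding the model in vetted
# source text so answers draw on known-good data rather than whatever
# the model absorbed from the open web. Model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_from_policy(question: str, policy_text: str) -> str:
    """Ask a question, constrained to a vetted HR policy document."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any chat model works here
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the policy text provided. "
                        "If the policy does not cover the question, "
                        "say so instead of guessing."},
            {"role": "user",
             "content": f"Policy:\n{policy_text}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# A human still reviews the output before it reaches an employee.
print(answer_from_policy("How many vacation days do new hires get?",
                         "New hires accrue 15 vacation days per year."))
```

Constraining the model to a known-good document – and telling it to admit when that document doesn’t answer the question – is one way to blunt the confident-but-wrong answers Huang warns about.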
“In my role leading people analytics & HR tech, I was also the advisor on AI applications in talent - and that means ensuring there’s always a human involved in any talent decision that’s made. I'm watching that closely. Because I know when things are convenient, people tend to forget. Just because the chatbot says it, doesn't make it so.”