With AI still in the ‘honeymoon phase,’ how can training programs be effective?

'It's starting to threaten the work that we consider intrinsically human': academic on the importance of addressing employee fear around AI training programs
As generative AI development moves through what the World Economic Forum calls its “honeymoon phase,” employers are regularly given the urgent message that they must train their workforces now to be nimble adopters of the technology, or risk being left behind in the “new age of AI.”

A February 2024 Angus Reid poll revealed that only 21 percent of IT decision-makers in Canada are confident in their company’s ability to implement AI efficiently, while 70 percent of organizations are worried about the ethics involved.  

With so many questions around AI implementation, and Canadian AI legislation still being debated in the Senate, how exactly should employers train people on the new tech?

HRD spoke with Kevin Lee, assistant professor of organizational behaviour and human resources at U.B.C.’s Sauder School of Business, to find out.

Addressing employee fear is first step to AI training

Generative AI is not the first technology disruption the workforce has experienced, and feelings of fear around automation and job security are not new, he says. But gen AI in particular is bringing that fear into “stark relief” because it’s moving into new territory.

“It's starting to threaten the work that we consider intrinsically human,” says Lee.

“A lot of the work that we're seeing is starting to be automated or that we're fearing is going to be automated, has to do with things like cognitive work, or creative expressive work. People like lawyers or doctors, engineers even, are finding themselves a little bit threatened by some of these technologies.”

When implementing new technologies into people’s work, it’s important for employers to be aware of how closely tied individuals are to their roles, Lee says; for most people, work is bound up not only with their identity but with their sense of value in society as well.

“If we believe that work is the most important thing that we do, or at least it's the thing that we spend most of our time doing in our lives, it really does become sort of this question of ‘If AI takes over my work, if I let it in, what's going to happen to my relationship with the society around me?’ It's not just our work, but rather our whole relevance to the social order, our whole identities, essentially.”

Avoid resistance to AI training with justification 

A question that often gets neglected when organizations implement AI training programs is why the training is necessary in the first place, Lee says.

Without a clear justification, employees come to their own conclusions about why AI is being introduced into their workspace – and that can lead to resistance.

The only way to be able to justify AI implementation in ways that make sense is by first ascertaining exactly what challenges employees are experiencing, and how AI can be used to meet those challenges, he says.

“Do we actually know what our employees are facing, do we actually know what they're experiencing at work? Have we actually talked to them extensively about their fears with regard to these new technologies and how they might be able to actually use, rather than be used by, these technologies?”

AI used in unexpected ways, so AI training should be flexible

AI is being adopted in workplaces in ways that weren’t originally anticipated, says Lee. Rather than taking over people’s jobs, it is being adapted by employees to suit their specific needs and to fill gaps in efficiency – for this reason, AI training programs can only be successful if they are designed from the ground up, not “top-down”.

He calls this training approach a “dual imperative”: not only justifying the need for AI in an individual’s workflow, but also demonstrating that the employer cares about improving their experience rather than automating it.

“[It’s about] this caring idea, of trying to figure out actually how it can improve people's lives,” Lee says.

“And that really can only come from either firsthand experience of the work, and/or being in constant communication with the people that the technology is actually being implemented for.”

This process can involve informal focus groups where employees share how they are using AI in their day-to-day work, with management then taking those cues to adjust their training and expectations.

Some employers bring in third-party consultants to help with this process, Lee says – to assess the work, determine how AI would be useful, and identify the value and intent of the technology in their particular organizations.

Communication key for AI training success

Lee points out that genuine and open communication about AI in the workplace is crucial – especially for Gen Z and younger workers who have unprecedentedly high expectations of their employers around values and honesty.

“If the messaging is about things that people don't actually care about, it's not going to be received or thought to be resonant. It's not going to particularly help, overall,” says Lee.

“These justifications, and also the understanding of how to actually implement these things, really need to come from the ground up, and opening up those channels of communication as much as possible, to really get in touch with what is happening within your organization, is really important.”