Employees confess to putting sensitive data on publicly available GenAI assistants
A new report is warning employers about the emerging risk of "shadow AI" in workplaces as employees admit to the unsupervised use of generative AI tools to improve performance.
Findings from TELUS Digital Experience revealed that 68% of employees who use GenAI at work are accessing publicly available GenAI assistants, such as ChatGPT, Microsoft Copilot, or Google Gemini, through their personal accounts.
More than one in five employees (22%) who already have access to a company-provided GenAI assistant admitted that they are still using their personal GenAI accounts.
The use of these GenAI tools comes amid the significant gains that employees say they get from the technologies.
The findings echo research from Microsoft and LinkedIn last year, which found that 78% of AI users bring their own AI tools to work to take advantage of their benefits.
"Generative AI is proving to be a productivity superpower for hundreds of business tasks," said Bret Kinsella, General Manager, Fuel iX™ at TELUS Digital, in a statement. "Employees know this. If their company doesn't provide AI tools, they'll bring their own, which is problematic."
The widespread use of public GenAI tools is fuelling the rise of shadow AI, according to the report. This refers to the use of AI tools without the approval of the company's IT department.
"Organisations are blind to the risks of shadow AI, even while they are secretly benefitting from productivity gains," Kinsella warned.
In fact, 57% of employees admitted to entering sensitive information into publicly available GenAI assistants.
This practice comes as 44% of employees said their company does not have AI guidelines or policies in place, or that they do not know whether it does.
Another 42% said they believe there are no repercussions for not following their company's AI guidelines.
Given that some employees with company-provided AI tools are still using their personal accounts, Kinsella said providing AI tools will not be enough to mitigate risks.
"A key to harnessing AI's potential while mitigating security risks is to provide employees with GenAI capabilities that include robust security and compliance and are also easily updated with the latest AI model improvements," Kinsella said.