'AI Barbie' trend raises concerns about 'shadow AI'

Organizations can't recover data lost to AI-driven privacy breaches, say experts, offering tips for HR

If you've scrolled through LinkedIn recently, you might have come across the “AI Barbie”. The trend has taken the internet by storm, with people using generative artificial intelligence (AI) tools like ChatGPT to create miniature dolls or action figures of themselves.

 Although it may seem like innocent fun, experts are raising concerns.  

According to CBC, trends like this raise privacy and data-use concerns, specifically around how users' personal data is handled by these tools and how “bad actors” could scrape that data to target people.

To participate in the trend, users typically have to enter personal information, such as personal attributes and even job titles, to generate the action figures they want.

With many professionals participating on LinkedIn, the trend points to a broader concern: how comfortable employees have become entering personal, and sometimes work-related, information into generative AI tools.

“I have been seeing in many organizations that even employers are not aware that sharing these things with a third-party AI is a risk,” says Ali Dehghantanh, a professor at the University of Guelph and Canada Research Chair in Cybersecurity and Threat Intelligence.

This year, a study published by TELUS Digital found that 57 per cent of enterprise employees entered sensitive information into publicly available generative AI assistants. Some of that information included personal data, customer information, and project details. 

How do privacy leaks happen? 

Much of the time, Dehghantanh explains, employees assume the private information they share with AI tools will stay confidential. However, once these AI systems are exposed to sensitive information, they may train on it and later reproduce it, resulting in a privacy leak.

These leaks and breaches can also occur when attackers use an AI system's network connections to transfer sensitive data to external locations.

Dehghantanh says when a privacy risk or breach occurs because of these tools, the AI aspect brings new challenges for organizations. 

“In traditional systems, like the web systems, even if the information is leaked, you can remove the server [breached by] the attacker, then the information will be recovered and there's no leakage anymore,” he explains. “But with AI, the chances that you can make an AI system to unlearn what it has already learned is low.” 

Organizations consider breaches like these a permanent loss of privacy, Dehghantanh says, with a significant impact on the business.

Additionally, employers do not have the option to work with vendors or AI tools to recover and restore information. 

“If it is gone to AI, that's it. [AI] already learned that, and it cannot unlearn that learning,” he says. 

Use private AI tools 

So, what proactive steps can organizations and employers take to stop employees from risking privacy and data breaches through AI tools? 

The first step, Dehghantanh says, is to make sure employers provide a “company-owned AI system for employees.” 

Companies and organizations, he says, should subscribe to or work with private AI vendors that don’t expose shared information to public access. 

Additionally, he notes, employers can add privacy measures and “technical safeguards” that filter private information out of data before it is shared with AI tools.
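As a rough illustration only (not drawn from Dehghantanh's comments), the short Python sketch below shows one way such a safeguard might work: simple pattern matching that redacts obvious identifiers, such as email addresses, phone numbers and ID-style numbers, from a prompt before it is sent to an external AI tool. The patterns and names here are hypothetical; real deployments would typically rely on dedicated data-loss-prevention or PII-detection software rather than hand-rolled rules.

import re

# Illustrative only: redact obvious personal identifiers from text before it
# is sent to an external AI tool. These patterns are simplistic examples and
# will miss many kinds of sensitive data; real deployments would use
# dedicated data-loss-prevention or PII-detection tooling.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "ID_NUMBER": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),  # e.g. SIN-style numbers
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known pattern with a placeholder tag."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarize this note: contact Jane at jane.doe@example.com "
              "or 416-555-0199 about file 123-456-789.")
    # Only the redacted version would ever be forwarded to the AI tool.
    print(redact_pii(prompt))
    # -> Summarize this note: contact Jane at [EMAIL] or [PHONE] about file [ID_NUMBER].

In practice, a filter like this would sit between employees and any approved AI tool, rather than being left to individual users to apply.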

“I recommend organizations conduct regular red teaming and adversarial testing exercises, to make sure that the AI systems are secure and are in good shape,” Dehghantanh says. 

Third-party AI tools, according to the Massachusetts Institute of Technology (MIT), are tools or AI solutions built by external vendors and typically made available to the public or to multiple clients. ChatGPT is an example of such a third-party tool.

Private AI vendors, on the other hand, are created or deployed for a single organization and are highly restricted to ensure that data remains under the organization’s control. 

A past study from MIT found that more than half of AI failures come from organizations outsourcing to these third-party AI tools. These tools expose organizations to a range of risks, such as reputational damage, financial losses, and regulatory penalties. 

Provide clear guidance  

Carole Piovesan, managing partner at INQ Law and a lawyer specializing in cybersecurity and AI, says employers must provide clear policies to prevent issues like these. 

“[Employers] have to be clear that using third-party systems without those systems having been vetted is inappropriate,” she says. 

Many workplace issues involving AI stem from the lack of an acceptable-use AI policy and clear guidance that lets employees know specifically which AI tools are off limits, Piovesan says.

AI policies, she explains, should outline in detail how these tools should be used to avoid privacy leaks or the sharing of confidential information. Policies should also include an incident response plan in case of a breach. 

Privacy and policies for AI 

Organizations, she adds, must also make sure that policies are compliant with privacy laws. 

For instance, she says public sector organizations need to ensure they comply with emerging obligations under Ontario's Bill 194, which introduces new amendments to the Freedom of Information and Protection of Privacy Act (FIPPA).

According to the government of Ontario, the amendments introduce additional privacy and cybersecurity obligations for organizations using AI, such as requiring privacy assessments before personal information is collected.

“[Organizations] should look to the regulator to see if the regulator has come up with any new guidance or consultation,” Piovesan says. 

Dehghantanh says that along with policies and procedures, there should be an AI governance approach where “everyone is responsible.” 

“We used to focus cybersecurity only on the IT side of the business. Nowadays, with the introduction of AI, cybersecurity should be considered everywhere. Even if you are typing a document, [employers] need to make sure it's not exposed to AI systems,” he says. 

Disciplining employees  

Last year, CybSafe and the National Cybersecurity Alliance (NCA) surveyed 7,000 individuals across seven countries and found that 38 per cent of workers admitted to sharing sensitive work information with AI tools without their employer's knowledge. In the same study, 23 per cent of respondents admitted to skipping awareness training, believing they already “knew enough.”

When it comes to disciplining employees who don’t comply, Piovesan says employers can follow the standard protocol. 

She adds that many organizations already have policies and plans in place to mitigate AI-related risks, but employers need to understand the challenges that AI presents. 

“My point about both the privacy and the monitoring is that AI doesn't change what you're doing. So, whatever your disciplinary actions are with respect to privacy breaches, or whatever your monitoring practices are for employees today, AI doesn't change that,” she says. 

“Apply those same processes to the use of those technologies and to your concerns about them.”