1 in 4 workers use ChatGPT at work, survey finds
Using ChatGPT in the workplace could put proprietary information at risk, according to one expert.
Human reviewers from other companies may read any of the generated chats, and researchers have found that similar artificial intelligence (AI) tools can reproduce data they absorbed during training, says Ben King, VP of customer trust at corporate security firm Okta, in a Reuters report.
"People do not understand how the data is used when they use generative AI services," he says.
"For businesses this is critical, because users don't have a contract with many AIs - because they are a free service - so corporates won't have run the risk through their usual assessment process.”
This comes at a time when many workers are using ChatGPT at work behind their employer’s back.
Specifically, some 28 per cent of workers regularly use ChatGPT at work, even though only 22 per cent say their employers explicitly allow such external tools, according to a Reuters/Ipsos poll of 2,625 adults conducted between July 11 and 17, 2023.
Also, some 10 per cent of surveyed workers say their bosses explicitly banned external AI tools, while about 25 per cent do not know whether their company permits use of the technology.
The use of ChatGPT soared earlier this year. Between January and February, use of OpenAI's generative AI technology grew by 120 per cent globally, according to a report from DeskTime, a provider of workforce management solutions.
Some companies that have embraced ChatGPT and other AI tools are taking steps to ensure they are used safely.
"We've started testing and learning about how AI can enhance operational effectiveness," says a Coca-Cola spokesperson in Atlanta, Georgia, in the Reuters report, adding that data stays within its firewall.
"Internally, we recently launched our enterprise version of Coca-Cola ChatGPT for productivity." Coca-Cola also plans to use AI to improve the effectiveness and productivity of its teams, notes the spokesperson in the report.
Dawn Allen, CFO at food and beverage supplier Tate & Lyle, is meanwhile testing ChatGPT, having "found a way to use it in a safe way".
"We've got different teams deciding how they want to use it through a series of experiments,” Allen says in the report. “Should we use it in investor relations? Should we use it in knowledge management? How can we use it to carry out tasks more efficiently?"
Geoffrey Hinton, the former Google executive dubbed "the godfather of AI", previously said that advancements in AI technology are pushing the world into "a period of huge uncertainty".
Here are some tactical tips for safely integrating generative AI into business applications to drive results, as detailed in a Harvard Business Review article by Kathy Baxter, principal architect of Ethical AI Practice, and Yoav Schlesinger, architect of Ethical AI Practice, both at Salesforce:
“Generative AI is evolving quickly, so the concrete steps businesses need to take will evolve over time. But sticking to a firm ethical framework can help organizations navigate this period of rapid transformation,” they say.