Lawyers cite risks of generative AI use, share tips for company policies on the technology
US employers need to catch up to ensure they have the necessary policies in place to cover the use of generative artificial intelligence (AI) at work, according to a recent report.
Currently, only 26% of employees say that their employer has a policy related to the use of generative AI, and 23% say that such a policy is under development. Over a third (34%) say their organization does not have an AI policy, and 17% don't know, reports The Conference Board.
However, over half (56%) of employees are already using the technology at work.
“The urgency for establishing clear AI usage guidelines will only rise as the technology continues to accelerate in capability and scope,” says The Conference Board.
Already, 55% of respondents say the current output of the generative AI tools they use matches the quality of an experienced or expert human worker. Broken down by perceived quality:
- 45% say quality is equal to an experienced worker.
- 31% say quality is equal to a novice worker.
- 10% say quality is equal to an expert worker.
According to 72% of respondents to a previous study, different departments in their organizations are already taking advantage of generative AI without a company-wide strategy.
Use of generative AI for various tasks
More than three in 10 (31%) workers report using generative AI on a frequent, regular basis – including daily (9%), weekly (17%), or monthly (5%), according to The Conference Board’s report based on a survey of 1,100 US employees.
A quarter (25%) say they are using generative AI occasionally, while 44% have never used generative AI. With the use of the technology, workers do the following:
- draft written content (68%)
- brainstorm ideas (60%)
- conduct background research (50%)
- analyze data and make forecasts (19%)
- generate/check computer code (11%)
- do image recognition and generation (7%)
Fewer than half (46%) of those who use generative AI say management is fully aware of their use, while 25% say management is partially aware and 13% say their managers are not aware at all.
Whether or not a policy is in place, not all workers who use generative AI fully disclose their use to their managers, according to the report.
Among organizations that lack an AI policy, 40% of employees report their managers are fully aware that they're using AI tools at work.
In organizations with AI policies under development, 53% of workers say their managers are fully aware of their AI use. The number is 56% among those in companies that have an established, finalized AI policy.
Employees who use generative AI in the workplace are saving an average of 1.75 hours a day, according to a previous report.
Generative AI risks and policy essentials
Employers must establish a policy on the use of generative AI in the workplace because employees' use of this technology carries significant risks, according to lawyers at Davis Wright Tremaine. These risks concern:
- inaccuracy/bias
- ethical/moral hazards
- privacy
- trade secret security/protection of confidential business information
- copyright/contract claims
- copyright enforcement/IP ownership
- consumer protection and regulatory compliance
- defamation
The lawyers claim that policies around generative AI should include terms to ensure the organization takes the following steps:
- Identify and inventory all current and potential uses of generative AI (GAI) tools in the organization.
- Assess the risks of the current and planned uses of GAI tools.
- Clearly identify permissible and impermissible applications and use cases.
- Adopt transparency protocols to ensure that employees and external recipients of GAI outputs understand what content was created with GAI tools.
- Train managers and employees on the risks of GAI tools, and the organization's internal policy parameters around the use of such tools.
- Continually monitor emerging applications/use cases and compliance with the policy.
- Continually assess (and re-assess) what laws or regulations might apply to employees' use of GAI tools and how their policy could shape compliance.