Employees of OpenAI, Google DeepMind air open letter urging employers not to retaliate against workers who voice concerns
Some current and former employees of artificial intelligence firms are calling on their employers to allow staff to air concerns about AI without facing retaliation.
In an open letter, employees of OpenAI, Google DeepMind, and Anthropic said the workers of AI firms are among the few people who can hold these companies accountable to the public.
"Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues," the letter reads.
Even then, the employees say they fear retaliation for speaking out about their worries over the technology.
"Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated," they said.
"Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues."
To address these concerns, the employees urged AI firms to commit to four principles to protect their workforces from retaliation.
This includes a commitment that employers "will not enter into or enforce any agreement that prohibits 'disparagement' or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit."
Organisations should also commit to establishing an anonymous process through which current and former staff can raise risk-related concerns to the organisation.
Employers should also commit to a culture of open criticism, allowing current and former employees to raise risk-related concerns about their technologies to the public, as long as trade secrets and other intellectual property are protected.
Lastly, employers should not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.
The signatories believe that risk-related concerns should always be raised through an adequate, anonymous process.
"However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public," they said.
The signatories made the call as they pointed out the risks posed by AI.
"These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," they said.
AI companies, however, have "strong financial incentives to avoid effective oversight."
"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily," the signatories added.