Industry experts call for pause in AI experiments with group letter

'Should we let machines flood our information channels with propaganda and untruth?'

Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, along with some 2,200 other people, are calling for a six-month pause in developing AI systems more powerful than GPT-4, citing risks to society and humanity.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” they say in an open letter.

Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources, the signatories of the open letter say, citing the widely endorsed Asilomar AI Principles.

“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” they say.

As contemporary AI systems are now becoming “human-competitive at general tasks,” the signatories of the open letter say that we should be asking the following questions:

“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

The call follows OpenAI's launch of GPT-4, the fourth iteration of the technology behind ChatGPT. Among companies already using the tool, 48 per cent say they’ve replaced workers since ChatGPT became available in November 2022.

‘Risks will be manageable’

During the six-month break, AI labs and independent experts should come together to jointly develop and implement “a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” say the signatories to the open letter.

“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt,” they say.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

However, the pause is not a pause on AI development in general, but simply “a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” they say.

Some experts believe that employers should develop policies around ChatGPT, and with the tool’s growing popularity, nearly half of HR leaders are drawing up guidelines to regulate its use in the workplace.

‘Human rights’

However, six months is not enough time for what the signatories want to happen, Wendy Wong, a political scientist at the University of British Columbia’s Okanagan campus, tells Chris Walker on CBC's Daybreak South.

“Some of the things they want us to do in six months is to develop regulatory authorities to govern AI, which if we could do something like that in six months, I don't think we'd be here,” she says in the interview, a transcript of which was published by CBC.

“I'm also thinking if they are giving us such short timelines to develop auditing and certification, or to create well-resourced institutions to cope with economic and political disruption, what can we do right in response?”

We must acknowledge that AI is changing the human experience in fundamental ways, she says.

“We've done that a little bit here and there, but we can't really move forward on thinking about how to govern emerging technologies like AI without thinking about the values embedded in human rights.”