'It's not just science fiction — it is a real risk and we need to figure out in advance how to deal with it'
One artificial intelligence expert continues to raise concerns about the dangers of AI, saying that advancements in the technology are pushing the world into “a period of huge uncertainty”.
It’s even possible for the technology to develop a desire to control humans, said AI pioneer Geoffrey Hinton, the former Google executive dubbed “the godfather of AI”, at a tech conference in Toronto on June 28.
“We have to take seriously the possibility that if they get to be smarter than us, which seems quite likely, and they have goals of their own, which seems quite likely, they may well develop the goal of taking control,” Hinton said, according to a CTV News report. “If they do that, we’re in trouble.”
6 key dangers AI may pose to humans
Hinton said the following are the key dangers AI poses to humans:
Bias and discrimination: AI systems and large language models trained on biased data sets can produce equally biased responses. However, it’s relatively easy for employers to limit the potential for bias and discrimination by freezing the technology’s behaviour, analyzing it and adjusting parameters to correct it, he said in the CTV News report.
Joblessness: While AI may help boost productivity for some workers, it may lead to joblessness for others, Hinton said.
"My worry is that those huge increases in productivity are going to go to putting people out of work and making the rich richer and poor poorer,” Hinton previously said. “The technology is being developed in a society that is not designed to use it for everybody's good."
Fake news: AI also has the ability to disseminate fake news, he said. Just as some governments have made it a criminal offence to knowingly use or keep counterfeit money, something similar should be done with AI-generated content that is deliberately misleading. However, Hinton is unsure whether such an approach is feasible.
Echo chambers: These are environments where users encounter only beliefs or ideas similar to their own, so those perspectives are reinforced while other opinions go unconsidered. The use of large language models could accelerate the growth of these echo chambers, said Hinton.
Battle robots: AI could also lead to the creation of battle robots, Hinton said, with armed forces around the world likely to produce such lethal autonomous weapons.
“Defence departments are going to build them and I don’t see how you can stop them doing it.”
This calls for the development of a treaty similar to the Geneva Conventions to establish international legal standards prohibiting this kind of technology, he said.
Existential risk: AI could even threaten the very existence of humans, CTV News reported, citing Hinton. Humans have a strong, built-in urge to gain control, and AI will be able to develop that urge too.
“The more control you get, the easier it is to achieve things. I think AI will be able to derive that, too. It’s good to get control so you can achieve other goals,” he said.
“It’s not just science fiction. It is a real risk that we need to think about and we need to figure out in advance how to deal with it.”
Previously, Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, along with some 2,200 other signatories, called for a six-month pause in developing systems more powerful than GPT-4, citing risks to society and humanity.
Hinton said he has no idea how to make AI more likely to be a force for good than for bad. However, before the technology becomes incredibly intelligent, developers should work on understanding how AI might go wrong or try to overpower humans, not just on improving its capabilities.
“We seriously ought to worry about mitigating all the bad side-effects of [AI],” he said.