The annual World Economic Forum discusses the impact of this ‘young’ technology
Without proper safeguards and policies, artificial intelligence may cause more harm than good, according to a report launched at this year’s World Economic Forum (WEF).
This may be why many discussions around AI at Davos focused on the challenges of AI governance.
Instead of simply diving into the impact of AI, sessions focused on how businesses and governments can better guide the technology to ensure that its benefits truly outweigh the risks.
This is in line with INSEAD’s annual report, launched at the United Nations’ Sustainable Development Goals (SDGs) Tent, which found that while the age of AI presents tremendous benefits for humanity, AI development and the resources required for it are unevenly distributed.
One solution to level the playing field across countries and industries is to bank on the right policies and approaches.
This year’s report explored how the development of AI is not only changing the nature of work, but also forcing a re-evaluation of workplace practices, corporate structures and innovation ecosystems.
As machines and algorithms continue to advance and take on a growing set of tasks and responsibilities, jobs will be affected and in some cases reinvented.
READ MORE: Google CEO thinks AI will be a more profound change than fire
Before leaders jump on the bandwagon and adopt AI technologies, there are some factors worth considering. At a panel discussion on implementing ‘responsible AI’, leaders debated whether it’s possible to build trust and ‘govern’ AI.
“Right governance in many ways also means taking the right responsibility,” said panellist Diana Paredes, CEO & co-founder at Suade Labs.
Paredes shared with the audience her experience in managing AI at her business, which offers a software platform for the financial industry.
She encountered four key challenges.
Quality of data
When talking about quality, she zeroed in on why it’s crucial to manage any inherent biases in data. She observed that failing to address and revise biases in AI technologies from the beginning, and on a regular basis, “could have repercussions that are quite dire to society in general”.
Applied to the world of HR, biases in data can kill aspirations for AI-based tools. One well-known example is Amazon’s failed recruitment software, which was scrapped after it was found to disadvantage female candidates.
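As a purely illustrative sketch, not from the article or from Suade Labs, the kind of regular check Paredes describes might be as simple as comparing a hiring model’s selection rates across candidate groups and flagging disparities on every audit cycle. The group labels, sample data and threshold below are all hypothetical.

```python
# Illustrative only: a periodic bias audit of the kind Paredes describes.
# Group labels, data, and the 0.8 threshold are hypothetical placeholders.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in decisions:
        total[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / total[group] for group in total}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest (the '80% rule' heuristic)."""
    return min(rates.values()) / max(rates.values())

# Example audit over one batch of (group, hired?) outcomes.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
if disparate_impact_ratio(rates) < 0.8:
    print("Review needed - selection rates differ across groups:", rates)
```

Run on the sample batch above, the ratio comes out at 0.5, so the audit would flag the model for review; the point is that the check is cheap enough to repeat every cycle, as Paredes advises.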
Human involvement
“Do you actually have a human in the loop, out of the loop, over the loop?” Paredes asked. When it comes to AI, the complexities of a single technology can look very different depending on the country a user operates in, the system it runs on, and even the industry it serves.
“A [governance] framework that works for the industry has to be flexible and really allow the right level of detail and scenarios that you can encounter,” she said.
Human oversight is thus crucial to better manage tech.
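To make the “in the loop” distinction concrete, here is a minimal sketch, our illustration rather than the panel’s design, of one common oversight pattern: the model acts automatically only above a confidence threshold, and everything below it is handed to a human reviewer. The model, threshold and review hand-off are hypothetical.

```python
# Illustrative only: a "human in the loop" gate. The model, the 0.9
# threshold, and the review hand-off are hypothetical placeholders.
def model_predict(case):
    """Stand-in for a real model; returns (decision, confidence)."""
    return ("approve", 0.62)

def human_review(case, suggestion):
    """Hand-off point: in practice this would queue the case for a person."""
    print(f"Escalating {case!r} (model suggested {suggestion!r})")
    return "pending human decision"

def decide(case, confidence_threshold=0.9):
    decision, confidence = model_predict(case)
    if confidence >= confidence_threshold:
        return decision, "automated"
    # Below the threshold, a person makes the call instead of the model.
    return human_review(case, suggestion=decision), "human-reviewed"

print(decide({"id": 42}))
```

Moving the threshold is one crude way a framework can stay flexible across countries, systems and industries, as Paredes suggests: the stricter the setting, the more decisions go to a person.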
Ethics and trust
“We always speak about ethics around AI and how to do it properly,” Paredes said. “But ethics also [covers] taking along in the journey a certain amount of layman language and explainability to consumers to really understand what this AI is going to do in their life.
“Addressing those issues in the right way, fundamentally means that AI is going to be adopted at a much faster pace and embraced instead of resisted.”
She added that it is crucial to focus on enabling the acceptance of AI tech and enhancing users’ confidence, which means upskilling and bringing people along on your transformation journey.
Liability
Despite AI becoming more commonly used, there are no clear guidelines on the liabilities and limitations of the technology. Paredes said this all comes down to the ‘explainability’ of the tech. However, some things are easily explained, while other aspects may be a bit “more subjective and difficult to capture”.
One solution is to rely on an AI governance framework that helps you assess your current level of governance and structure to fully understand “what liability you’re taking as a company when you’re adopting AI”.
Fellow panellist Bradford L. Smith, president of Microsoft, summed up the need for safeguards and policies simply by saying it’s better to take the opportunity now than later, when it may be too late.
“There is no single answer for all time with technology that is this young,” Smith said. “But we should not wait for technology to mature before we start to put principles and ethics and even rules in place to govern AI.”