
Making Artificial Intelligence Applications Safe For Humans

Updated: Jun 27, 2019

Ovum view


Summary


The realization that artificial intelligence (AI) systems may have to be “sold” to an unwilling public may have prompted a recent initiative, the “Partnership on AI to Benefit People and Society”, supported by large high-tech companies with a leading presence in the AI arena, including Amazon, Google DeepMind, Facebook, IBM, and Microsoft. In addition, two other organizations, OpenAI and GoodAI, have recently been set up to promote and further research into AI systems, the former to build safe AI and the latter to build artificial general intelligence (AGI) systems. Public concerns about AI tend to center on “Terminator”-style scenarios, named after the famous film, in which machines hunt humans, and on the singularity, or transcendence, in which AI systems become smarter than humans and build next-generation AI systems that are more intelligent still, ad infinitum.


The AI systems that can make a useful impact today, largely based on recent advances in deep learning, are not a danger to humanity, but they do pose challenges. The debate therefore needs to move away from the Hollywood image of AI and toward an assessment of the impact of AI on jobs, which is probably the number-one issue, as well as the question of how to give AI systems a sense of morality. We are still far enough from AGI that society can learn from the current state of AI and put in place the right regulatory framework to ensure that AGI, when it arrives, is safe.


AI systems will cause turmoil in the jobs market


The jobs market is already under pressure from current economic forces, including globalization, Brexit, the slow recovery of the financial system since 2008, increased automation, and digitalization. The introduction of AI systems will accelerate the trend of job displacement due to automation. This disruption is inevitable, and it will affect not just repetitive labor but also higher-skilled jobs once deemed safe from automation. The key question is whether AI technology is a zero-sum game, in which a job taken over by AI is not replaced by a new job for a human. Ovum believes that AI will cause short-term disruption in the job market, but that in the long term new jobs for humans will arise, and economies infused with AI systems will adjust and take off, creating new opportunities for people displaced by AI.


AI systems that interact with humans must have built-in moral guidance


The trolley problem in philosophy illustrates the dilemma of making difficult but rational choices to minimize fatalities in the face of a calamity. Autonomous vehicles on the road will have to deal with such scenarios (see, for example, the work on robot ethics at the Georgia Institute of Technology), and AI systems will need to understand moral choices and make the right decisions. Related to this is the question of how safe AI systems should be: regulators have to balance the need for safety against the risk of strangling an industry with impossible demands that make manufacturing prohibitively expensive. In the automotive industry, the evidence to date suggests that autonomous vehicles are likely to reduce road accidents by around 90%.
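
To make the dilemma concrete, the toy sketch below frames a trolley-problem-style choice as picking the maneuver with the lowest expected harm. It is purely illustrative and not drawn from any production system; the maneuvers, probabilities, and harm figures are invented for the example, and a real system would also have to encode harder questions, such as whose harm counts and how it is weighed.

```python
# Toy illustration only: a trolley-problem-style choice reduced to selecting
# the candidate maneuver with the lowest expected harm. All names and numbers
# below are invented for illustration, not taken from any real vehicle system.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    collision_probability: float  # estimated chance the maneuver ends in a collision
    fatalities_if_collision: float  # estimated fatalities if that collision occurs


def expected_harm(m: Maneuver) -> float:
    """Expected fatalities = P(collision) * fatalities given a collision."""
    return m.collision_probability * m.fatalities_if_collision


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver that minimizes expected harm."""
    return min(options, key=expected_harm)


if __name__ == "__main__":
    options = [
        Maneuver("brake in lane", collision_probability=0.9, fatalities_if_collision=1.0),
        Maneuver("swerve onto shoulder", collision_probability=0.3, fatalities_if_collision=2.0),
    ]
    best = choose_maneuver(options)
    print(f"Chosen maneuver: {best.name} (expected harm {expected_harm(best):.2f})")
```

Even this minimal framing shows why regulation matters: the choice of what to count as “harm”, and how to weigh one group against another, is a moral judgment that someone must encode before the vehicle ever faces the situation.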
