TOPIC GUIDE: Artificial intelligence (2018)
"Humanity should fear advances in artificial intelligence"
PUBLISHED: 05 Mar 2018
AUTHOR: Rob Lyons
The increasing ability of machines in recent years to match or even surpass human performance in complex tasks has been impressive. Already, artificial intelligence (AI) techniques have been used to allow machines to beat the best players in the world at both chess [Ref: Time] and the Chinese board game Go [Ref: Guardian]. IBM’s Watson system has beaten the best human players on the long-running US quiz show, Jeopardy! [Ref: TechRepublic]. While such demonstrations of the potential of AI are intriguing, there are now more and more real-world applications for AI systems. For example, voice-recognition systems like Amazon’s Alexa and Apple’s Siri use AI techniques to learn what the most suitable answer to our questions might be. Driverless cars cannot be pre-programmed for every eventuality, but need to ‘learn’ through experience using AI. Combined with huge datasets, AI-enabled machines could learn to interpret x-rays and other scans, making diagnosis quicker and more accurate [Ref: Technology Review].
But the implications for society are only just becoming apparent. In 2015, the Bank of England’s chief economist, Andrew Haldane, suggested that as many as 15 million jobs in the UK – almost half the total – could be lost to automation [Ref: Guardian]. Examples could include drivers replaced by autonomous vehicles and administrative staff replaced by intelligent assistants like Alexa. Moreover, it is not just low-skill jobs that are now under threat: in the future, smarter machines and artificial intelligence could affect a much broader range of jobs, including many high-paid, high-skilled positions. Facial recognition systems, combined with ubiquitous CCTV, could call into question our privacy. The world-famous physicist Stephen Hawking has even claimed that ‘AI may replace humans altogether’ as a ‘new form of life’ that can rapidly learn and improve, making people obsolete [Ref: Independent]. So should we welcome AI’s potential, or are the perceived threats too great?
DEBATE IN CONTEXT
This section provides a summary of the key issues in the debate, set in the context of recent discussions and the competing positions that have been adopted.
What is AI?
The term ‘artificial intelligence’ was coined in 1956, but “AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage” [Ref: SAS]. In essence, AI is “a collection of technologies that can be used to imitate or even to outperform tasks performed by humans using machines” [Ref: The Conversation]. AI is being used in a wide range of applications, from internet search engines to self-teaching programs that can learn from experience, such as Google’s DeepMind technology [Ref: Financial Times]. Now, it seems, “Machines are rapidly taking on ever more challenging cognitive tasks, encroaching on the fundamental ability that sets humans apart as a species: to make complex decisions, to solve problems – and, most importantly, to learn” [Ref: Financial Times].
The ethics of AI
AI poses some fundamental ethical questions for society. For example, how should we view the potential for AI to be used in the military arena? Although there is currently a consensus that “giving robots the agency to kill humans would trample over a red line that should never be crossed” [Ref: Financial Times], it should be noted that robots are already present in bomb disposal, mine clearance and anti-missile systems. Some, such as roboticist Ronald Arkin, think that developing ‘ethical robots’ programmed with strict ethical codes could be beneficial in the military, if they are programmed never to break rules of combat that humans might flout [Ref: Nature]. Similarly, the increased autonomy and decision-making that AI embodies opens up a moral vacuum that some suggest needs to be addressed by society, governments and legislators [Ref: The Times], while others argue that a code of ethics for robotics is urgently needed [Ref: The Times]. After all, who would be responsible for a decision badly made by a machine? The programmer, the engineer, the owner or the robot itself? Furthermore, critics note that driverless cars may face situations where there is a split-second decision either to swerve, possibly killing the passengers, or not to swerve, possibly killing another road user. How should a machine decide? To what extent should we even allow machines to decide? [Ref: Aeon] Others argue that technology is fundamentally ‘morally neutral’: “The same technology that launched deadly missiles in WWII brought Neil Armstrong and Buzz Aldrin to the surface of the moon. The harnessing of nuclear power laid waste to Hiroshima and Nagasaki but it also provides power to billions without burning fossil fuels.” In this sense: “AI is another tool and we can use it to make the world a better place, if we wish.” [Ref: Gadgette]
A threat to humanity?
For some critics, advances in AI pose very real existential problems for humanity in the future. Oxford professor Nick Bostrom, for instance, has voiced concerns about what might happen if the ability of machines to learn for themselves accelerates very rapidly – what he calls an ‘intelligence explosion’. Bostrom believes “at some point we will create machines that are superintelligent, and that the first machine to attain superintelligence may become extremely powerful to the point of being able to shape the future according to its preferences” [Ref: Vox]. Technology entrepreneurs Bill Gates and Elon Musk have also publicly stated fears about the dangers of artificial intelligence, and caution that there are very real risks associated with the march of the technology if left unchecked [Ref: Guardian], although Gates has sounded a more optimistic note recently [Ref: CNBC]. Autonomy is a key issue that some critics are especially concerned about, with technologist Thomas Dietterich warning that despite proposals for driverless cars, autonomous weapons and automated surgical assistants, AI systems should never be fully autonomous, because: “By definition a fully autonomous system is one that we have no control over, and I don’t think we ever want to be in that situation.” [Ref: Business Insider] There are also practical issues critics are keen to explore, such as the future of work, with many suggesting that advances in automation will make certain jobs obsolete. Commentator Clare Foges reflects on these developments, and draws parallels with the Luddites 200 years ago, who resisted the increasing automation of their trades during the onset of the industrial revolution [Ref: History.com].
She notes that amid recent forecasts that up to five million people could lose their jobs because of automation [Ref: The Times]: “Two hundred years on, a braver newer world is arriving at astonishing speed, and threatens to make luddites out of us all. The robots are coming, they are here; creeping stealthily into factory, office and shop.” [Ref: The Times]
A brave new world?
For advocates, the advance of AI has the potential to change the world in unimaginable ways, and they largely dismiss warnings about the dangers it may pose. As Adam Jezard observes: “Such concerns are not new…From the weaving machines of the industrial revolution to the bicycle, mechanisation has prompted concerns that technology will make people redundant or alter society in unsettling ways.” [Ref: Financial Times] Moreover, supporters ask us to consider the benefits that AI has already brought us, such as speedier fraud detection [Ref: The Banker], which will continue to develop and revolutionise the way we live our lives. In the field of medicine, one commentator posits the increasingly plausible idea of a program that may in future be able to distinguish cancer tumours from healthy tissue far better than humans can, which would revolutionise healthcare [Ref: The Times]. Others criticise arguments that advances in AI signal the end of humanity, and point out that: “After so much talking about the risks of super intelligent machines, it’s time to turn on the light, stop worrying about sci-fi scenarios, and start focusing on AI’s actual challenges.” [Ref: Aeon] Perhaps more profoundly, others question why we are so quick to underestimate our abilities as humans and to fear AI. Author Nicholas Carr observes that although “Every day we are reminded of the superiority of computers…What we forget is that our machines are built by our own hands”, and in actual fact: “If computers had the ability to be amazed, they’d be amazed by us.” [Ref: New York Times] In addition, fundamental to the pro-AI argument is the idea that technological progress is a good thing in and of itself. Futurist Dominic Basulto summarises this point when he speaks of ‘existential reward’, arguing that “humanity has an imperative to consider dystopian predictions of the future. But it also has an imperative to push on, to reach its full potential.” [Ref: Washington Post]

From the industrial revolution onwards we have gradually made our everyday lives easier and safer through innovation, automation and technology. For instance, the onset of driverless vehicles is predicted to reduce drastically the number of road traffic incidents in the future, and: “Machines known as automobiles long ago made horses redundant in the developed world – except riding for a pure leisure pursuit or in sport” [Ref: The Times]. So with all of these arguments in mind, are critics right to be wary of the proliferation of AI in our lives, and the ethical and practical problems it may present humanity with in the future? Or should we embrace the technological progress that AI represents, and all of the potential it has to change our lives for the better?
ESSENTIAL READING
It is crucial for debaters to have read the articles in this section, which provide essential information and arguments for and against the debate motion. Students will be expected to have additional evidence and examples derived from independent research, but they can expect to be criticised if they lack a basic familiarity with the issues raised in the essential reading.
John Thornhill Financial Times 20 January 2016
Moshe Y Vardi Guardian 7 April 2016
Ben Macintyre The Times 11 March 2016
Dylan Matthews Vox 19 August 2014
Tom Chatfield Aeon 31 March 2014
Dominic Lawson The Times 20 March 2016
Timothy B. Lee Vox 29 July 2015
Nicholas Carr New York Times 20 May 2015
Dominic Basulto Washington Post 20 January 2015
Brian Patrick Green Markkula Center for Applied Ethics 21 November 2017
Julia Bossman World Economic Forum 21 October 2016
Raffi Khatchadourian New Yorker 25 November 2015
Ross Anderson Aeon 25 February 2013
DEFINITIONS
Definitions of key concepts that are crucial for understanding the topic. Students should be familiar with these terms and the different ways in which they are used and interpreted and should be prepared to explain their significance.
Useful websites and materials that provide a good starting point for research.
Kevin McCullagh Icon 26 January 2018
James O'Malley TechRadar 10 January 2018
João Duarte Hackernoon 13 November 2017
Christopher Fitzgerald and Fernando Florez ACCA 1 May 2017
Luciano Floridi Aeon 9 May 2016
Danushka Bollegala The Conversation 5 May 2016
John Thornhill Financial Times 25 April 2016
Clare Foges The Times 25 April 2016
Jennifer Harrison Gadgette 6 April 2016
Tom Chatfield Guardian 18 March 2016
Michael Brooks New Statesman 18 March 2016
Boer Deng Nature 1 July 2015
Economist 9 May 2015
Andrew Keen The Times 22 February 2015
Paul Ford MIT Technology Review 11 February 2015
ORGANISATIONS AND LINKS
Links to organisations, campaign groups and official bodies who are referenced within the Topic Guide or which will be of use in providing additional research information.
IN THE NEWS
Relevant recent news stories from a variety of sources, which ensure students have an up to date awareness of the state of the debate.
Jonathan Bush Harvard Business Review 5 March 2018
Jason Hiner ZDnet 2 March 2018
Tom Simonite Wired 28 February 2018
Dave Gershgorn Quartz 22 February 2018
Jonathan Vanian Fortune 21 February 2018
Economic Times 15 February 2018
Will Knight Metro 3 October 2017
Steven Finlay Fortune 18 August 2017