
By Teresa Heffernan, Series Editor of Social and Cultural Studies of Robots and AI (Palgrave Macmillan)*

As science was emerging as a discrete and soon-to-be dominant way of knowing and as the industrial revolution was transforming the English countryside, Thomas Love Peacock in his “Four Ages of Poetry” (1820) argued that poetry was increasingly useless and retrograde in the age of scientific invention: “A poet in our times is a semi-barbarian in a civilized community. He lives in the days that are past. His ideas, thoughts, feelings, associations, are all with barbarous manners, obsolete customs, and exploded superstitions. The march of his intellect is like that of a crab, backward.”


Staged production of R.U.R. (source)

In the age of robotics and artificial intelligence this dismissal of fiction, and the humanities more generally, has only escalated: literature departments, often treated as relics of the past, exist on life support while think tanks like the well-funded Singularity University, founded by Peter Diamandis and Ray Kurzweil and located in Silicon Valley, thrive. This for-profit, unaccredited institution, sponsored by companies such as Google, Deloitte, and Genentech, says its mission is to teach people “to utilize accelerating technologies to address humanity’s grand challenges.” Despite its declared interest in “humanity,” Singularity University offers no courses in the humanities and culture—nothing, for instance, on literature, linguistics, history, art, classics, gender studies, music, cultural studies, postcolonialism or philosophy. Promising to catapult us into a shiny future full of instant fixes, it casts aside the complicated terrain of thousands of years of culture in favour of the truth and practicality of technoscience harnessed to corporate interests. Humanity’s “hardest problems”—social inequity, colonialism, war, genocide, climate change, pollution, water scarcity, dying oceans, mental health, superbugs, and disappearing species—can all, this “university” promises, be solved by “exponential” technology. These problems are never, it seems, about the paucity of the ethical imagination.

C-3PO 1977 (Star Wars)

Roboworld, Pittsburgh (photo k.e.asp)

While fiction is often credited with inspiring or predicting technological inventions, when it comes to “serious” discussions about the future of robots and AI, fiction is reduced to cheerleading. The “truth” of technoscience, steered by corporate and military interests, takes over as AI and robotics engineers, computer scientists, and CEOs mine the rich array of “humanized” machines and artificial people that have populated literature. For instance, Amit Singhal, a software engineer and former vice-president at Google, wrote: “My dream Star Trek computer is becoming a reality, and it is far better than what I ever imagined.” So too, Cynthia Breazeal, director of the Personal Robots Group at the MIT Media Lab, was inspired by R2-D2 and C-3PO from Star Wars, concluding: “While emotional robots have been a thing of science fiction for decades, we are now finally getting to a point where these kinds of social robots will enter our households.” Two of the wealthiest and most powerful men in the world—Elon Musk and Jeff Bezos—also credit Star Trek for their companies, SpaceX and Blue Origin. Bezos announced at the 2016 Code Conference: “It has been a dream from the early days of sci-fi to have a computer to talk to, and that’s coming true.” The firm SciFutures hires fiction writers to use storytelling, defined as “data with soul,” as a way of accelerating and advertising “preferred” futures; its corporate clients include, among others, Ford, Visa, and Colgate. Yet this utilitarian and overly literal approach—the claim that fiction is coming true—shuts down the ethical potential of fiction.

Ursula K. Le Guin at the lectern at the National Book Awards (source: The Guardian, 2014)

Ursula K. Le Guin, in her powerful speech at the National Book Awards (2014) that went viral, argued that what we need are people who can imagine “alternatives to how we live now, and can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine some real grounds for hope.” She died in January 2018, but her words about needing to get over our obsession with the latest technology grow more relevant by the day as we confront a host of new problems that have emerged from the blind investment in technoscience: from autonomous weapons and a new arms race, to the erosion of democracy through the mining and selling of data, to the built-in prejudice of proprietary black-box systems marketed as objective, to name a few. As a literary critic, I want to retain the critical edge that fiction has to offer. Robots were born in fiction: Karel Čapek’s 1920 play R.U.R. first used the term, derived from the Czech word robota (forced labour), to explore the mechanization of humans under factory capitalism with its drive for efficiency. Fictional robots or talking computers are no more “real” than talking lions, clever rabbits, witches, demons or Captain Picard. From Greek mythology to Aesop’s Fables to Star Trek, literature has always been about exploring and negotiating what it means to be human, about who falls inside and outside that category, and about what sort of world we want to inhabit. The very nature of fiction calls for interpretation: it traffics in metaphor and metonymy, and it refuses to be rendered literal or forced into a singular future.

Percy Bysshe Shelley, responding to Peacock with his spirited “A Defence of Poetry” in 1821, wrote: “The cultivation of those sciences which have enlarged the limits of the empire of man over the external world, has, for want of the poetical faculty, proportionally circumscribed those of the internal world; and man, having enslaved the elements, remains himself a slave.” Shelley’s “Defence” might serve as a useful reminder of the limits of the reductive approach to fiction that seems to dominate. In the periods of history when calculation trumped imagination, Shelley argued, there was the greatest social inequality: the rich got richer and the poor got poorer as society was torn between “anarchy and despotism.”

As we witness the rise of global despots, the displacement of humans by wars and climate change, the increasing concentration of wealth in the hands of a few, and the disregard for the planet and fellow species in a world motivated by profit, we cannot look to new technologies alone to solve these problems. The cultivation of an ethical imagination that Shelley promoted at the outset of the industrial revolution seems newly urgent. Machine learning and robotics have much to offer, but because these technologies affect all humans, other animals, and the planet, they cannot continue to operate in a silo. For the record, crabs don’t march backward; they move sideways.

*This blog was originally posted on the host site for Robotics & AI: The Future of Humanism, a Palgrave Macmillan book series on the social and cultural impacts of AI and robotics.

By Bryn Shaffer*

CBC The National April 2018

CBC news video on lethal autonomous weapons. Includes a clip from the debate between Noel Sharkey and Duncan MacIntosh at Saint Mary’s University, March 2018.

This March, Saint Mary’s University played host to a lively and heated debate on the use of autonomous weapons of war, as part of the Automaton! From Ovid to AI public lecture series. The debate was between acclaimed BBC commentator and activist Dr. Noel Sharkey and Dalhousie philosophy professor Dr. Duncan MacIntosh, and was moderated by Dalhousie professor of philosophy Dr. Letitia Meynell. You could tell before the debate began, by the way the crowd was cramming itself into every inch of standing space in the large auditorium, that we were in for an engaging and thought-provoking discussion.

After introductions by Dr. Sarty, dean of graduate studies at Saint Mary’s University, our two debaters took their positions: Dr. MacIntosh would be arguing in favour of autonomous weapons of war, and Dr. Sharkey against them. Dr. Meynell stepped up to the mic to lay out the ground rules, rang the proverbial bell, and we were off.

Dr. MacIntosh started off by asserting that, regardless of the points he would raise in favour of autonomous weapons of war, he did not want to be misconstrued as in favour of war or violence itself. His position, at its core, was that sometimes peaceful means of engagement are not possible, and in those cases autonomous weapons are the superior choice.

He went on to argue that one of the major advantages of autonomous weapons is that they spare more human lives in the long run and prevent soldiers from experiencing the horrors of war. Further, he posited that the use of autonomous weapons would be comparable to the use of soldiers, as both operate on the basis of commands. The advantage of autonomous weapons over soldiers, however, is that they can be sent on hazardous missions humans cannot.

Another point Dr. MacIntosh raised in favour of autonomous weapons is that they would carry out fewer prejudice-informed killings, resulting in fewer deaths. However, because robots are unable to make informed judgment calls, he stated that they should not be sent on missions requiring human discretion. As an example of such a mission, he described a situation in which a child soldier is looking to surrender to opposing forces.

Then Dr. MacIntosh’s time ran out, and it was Dr. Sharkey’s turn to take the floor. Dr. Sharkey began by asserting that militaries have a science-fiction perspective on the abilities of robots. To explain, he presented slides demonstrating how autonomous weapons actually function and gave examples of current weapons in use and in development. He explained that, in simplified terms, an autonomous weapon receives a sensory input (such as a reading from a heat sensor), which triggers a motor output (such as firing a gun).
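Dr. Sharkey’s simplified sense-act description can be sketched as a toy loop; the names and threshold below are invented for illustration, and no real weapon system is this simple, which was precisely his point:

```python
# Toy sketch of the sense-act loop Dr. Sharkey described: a raw
# sensory input is mapped directly to a motor output, with no human
# judgment in between. The threshold and labels are hypothetical.

def sense_act(heat_reading: float, trigger_threshold: float = 37.0) -> str:
    """Map a sensor input straight to a motor output."""
    if heat_reading >= trigger_threshold:
        return "fire"  # motor output triggered by the sensor alone
    return "hold"

# Anything that crosses the threshold produces the same output: such a
# loop has no way to distinguish a civilian from an armed combatant.
```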

Dr. Sharkey then explained that autonomous weapons, once activated, are completely under computer control. This is a problem because, as he put it, “robots are easy to fool,” and because current technology is not reliable enough at discerning civilians from armed combatants. Further, because we cannot predict how the algorithms operating the weapons will behave in every circumstance, especially when autonomous weapons come into contact with each other, unpredictable and dangerous outcomes could occur on the battlefield. These algorithms can also encode bias, which could lead to indiscriminate targeting practices.

Dr. Sharkey also argued that the use of these weapons will accelerate and amplify the pace of battle. And while autonomous weapons can be more accurate in targeting, the problem rests with what they are choosing to target. Dr. Sharkey was clear in reminding us that these machines are weapons, which are ultimately in the hands of humans: while the robots themselves may not be prejudiced, as Dr. MacIntosh suggested, they are the tools of those who are. Further, these weapons won’t be confined to the military. Here, Dr. Sharkey offered the example of drone swarms operated by police forces using pepper spray on crowds.

After both parties presented their views, each was given a brief opportunity to respond and defend his points. Dr. MacIntosh asserted that autonomous weapons should be used as threats to deter violence rather than actually put into use. To this, Dr. Sharkey responded that while we often hope weapons will never be used, this is not the reality, as they are always implemented in the execution of mass murder.

The floor was then opened up for questions from the audience, which sparked interesting responses from both speakers. Questions ranged from legality and science fiction to morality and implementation. By the end of the event, the audience was left with much to think about concerning the future of robotics and warfare, and the role we all will play in that future.

Throughout the debate I found myself thoroughly engaged, on the edge of my seat, biting my tongue. Not only did the debate’s topic engage me, but so did the often heated exchanges between the speakers. When Dr. MacIntosh proposed the need for autonomous weapons as a means of intervention in what he termed dictator and third-world states, Dr. Sharkey was quick to call out the neo-colonial narrative being put forward. On another point, Dr. Sharkey called Dr. MacIntosh naive for proposing that it is currently possible to create unbiased programming. At these points I found myself aligning with Dr. Sharkey, wondering to what lengths nations such as the USA might go to justify and reframe their use and creation of autonomous weapons as ethical and unbiased.

So what can we take away from this battle of wits on autonomous weapons? Are killer robots truly an inevitable future of warfare? Is there hope for a more peaceful alternative? As someone who gets chills at the prospect of an Amazon drone landing in my yard, I greatly hope for the latter. As Dr. Sharkey argued, these weapons of destruction are currently being thought of in science fiction terms, but are being used and created in the real world, where real consequences exist. While the notion of human lives being spared is an alluring prospect of autonomous war, we must realize that those who build and advocate for killer robots understand this fantasy as truly nothing more than a carrot driving their horses into battle. War without death, under the implementation of autonomous weapons, is an oxymoron at best. At worst, killer robots are a near future that will affect all of us, both on and off the battlefield.

If you are interested in learning more about the speakers or about how you can join the campaign against killer robots check out the links below.

CBC’s The National coverage of the event can be viewed in their video “Stopping Killer Robots Before They Get To Us First.”

Dr. Noel Sharkey’s Twitter for the campaign against killer robots can be accessed here.

Dr. Duncan MacIntosh’s profile can be read here.

* Bryn Shaffer is an MA student in the Women and Gender Studies Program at Saint Mary’s University where she is exploring the topics of gendered robot design in both science fiction and reality.