Just out in the Palgrave book series called Social and Cultural Studies of Robots and AI, which I co-edit with Kathleen Richardson and Cathrine Hasse:

Enchanting Robots: Intimacy, Magic, and Technology by Maciej Musiał

“This book argues that robots are enchanting humans (as potential intimate partners), because humans are enchanting robots (by performing magical thinking), and that these processes are a part of a significant re-enchantment of the “modern” world. As a foundation, the author examines arguments for and against intimate relationships with robots, particularly sex robots and care robots. Moreover, the book provides a consideration of human-robot interactions and philosophical reflections about robots through the lens of magic and magical thinking as well as theoretical and practical re-evaluations of their status and presence. Furthermore, the author discusses the abovementioned issues in the context of disenchantment and re-enchantment of the world, characterizing modernity as a coexistence of these two processes. The book closes with a consideration of future scenarios regarding the meaning of life in the age of rampant automation and the possibility that designing robots becomes a sort of new eugenics as a consequence of recognizing robots as persons.”

The Big Thinking public panel on the social implications of AI, held at the Halifax Public Library on November 14, was a lively and well-attended event. The discussants included Teresa Heffernan, Ian Kerr, Fuyuki Kurasawa and Duncan MacIntosh, with Howard Ramos (Dalhousie) as moderator. Courtney Law provides a sense of the event here, noting for example that Teresa Heffernan “reminded the audience that AI and humans are inextricably linked because AI is built on data created by humans” and that “we sometimes assume ‘fauxtonomy’ when it comes to AI, attributing more complexity to machinery than it is due because we are influenced by fictional representations.” The event was sponsored by Dalhousie University, the Federation for the Humanities and Social Sciences, and the Halifax Public Library.

SSHRC has published a video recording of the entire event that you can view below or access here.

You are invited to join Teresa Heffernan (Saint Mary’s University), Ian Kerr (U of Ottawa), Fuyuki Kurasawa (York U) and Duncan MacIntosh (Dalhousie) for a panel discussion on ‘the potential social impacts of artificial intelligence and the role humanities and social sciences will play in identifying the legal, ethical and policy issues we should start considering today.’

Where: Paul O’Regan Hall, Halifax Public Library Central

When: Wednesday, November 14, 2018. 6:30 PM – 8:00 PM

Moderated by Gabriel Miller, Executive Director, Federation for the Humanities and Social Sciences

The event is sponsored by Dalhousie University (Offices of the President and the Vice-President Research, plus the Faculties of Arts and Social Sciences, Computer Science, Law, and Management), the Federation for the Humanities and Social Sciences and the Halifax Public Libraries.

The event will be free and open to the public with a reception to follow.

Questions? Contact fassalum@dal.ca

Join the Facebook event.

Teresa Heffernan, professor of English at Saint Mary’s University, will give a talk at the upcoming HAL-CON science fiction, fantasy and gaming convention, a massive multi-format event attended by some 9,200 people in 2017.

Dr. Heffernan’s talk, “Fiction Meets Science: Ex Machina, Artificial Intelligence and the Robotics Industry,” is scheduled for 6:15 pm, Friday, October 26, 2018.  Location: Room 502, Ballroom level 5 at the Halifax Convention Centre — 1650 Argyle Street, Halifax, NS. For information and tickets go to HAL-CON.com.

ABSTRACT: The conflation of AI and fiction in the cultural imaginary helps to drive the fantasy aspect of the robotics/AI industry that encourages the view that there is no difference between a computing machine and a human. If fiction offers an exploration and interrogation of the shifting terrain of what it means to be human, the industry’s overly literal readings of fiction fetishize the technology, strip it of its cultural and historical context, and claim it for the here and now. While the industry exploits fiction to help animate machines and bring them to “life” in the name of a certain technological future, it erases the “fictiveness” of the fiction that keeps open the question of the future and what it means to be human.

My talk, “Fiction Meets Science: Ex Machina, Artificial Intelligence and the Robotics Industry,” will argue that we need to restore the gap between the literary and scientific imaginings of AI and robots. Resisting literal readings of fiction, it considers the ways in which metaphors shape our reading of humans and other animals. For instance, in the field of AI, rather than the computer serving as a metaphor for the brain, the brain has come to serve as a metaphor for the computer. The film Ex Machina, as a modern-day Frankenstein story, exposes the consequences of this metaphor, which reduces humans to computing machines and in turn entraps them in an algorithmic logic under corporate control. In this film, it is not Ava, the programmed machine, that is the subject of the experiment, but rather Caleb, who finds himself locked in the robot lab by the end of the story.

By Bryn Shaffer*

CBC The National April 2018

CBC news video on lethal autonomous weapons. Includes a clip from the debate between Noel Sharkey and Duncan MacIntosh at Saint Mary’s University, March 2018.

This March, Saint Mary’s University played host to an engaging and heated debate on the use of autonomous weapons of war, as part of the Automaton! From Ovid to AI public lecture series. The debate was between acclaimed BBC commentator and activist Dr. Noel Sharkey and Dalhousie philosophy professor Dr. Duncan MacIntosh. It was moderated by Dalhousie professor of philosophy Dr. Letitia Meynell. You could tell before the debate began, as the crowd crammed itself into every inch of standing space in the already large auditorium, that we were in for an engaging and thought-provoking discussion.

After introductions by Dr. Sarty, dean of graduate studies at Saint Mary’s University, our two contestants positioned themselves: Dr. MacIntosh would be arguing in favour of autonomous weapons of war, and Dr. Sharkey would be arguing against them. Dr. Meynell stood up to the mic to lay out the ground rules, then she rang the proverbial bell and we were off.

Dr. MacIntosh started off by asserting that regardless of the points he would raise in favour of autonomous weapons of war, he did not want to be misconstrued as in favour of war or violence itself. His position at its core was that sometimes peaceful means of engagement are not possible, and in those cases, autonomous weapons are the superior choice.

He went on to argue that one of the major advantages of autonomous weapons is that they spare more human lives in the long run, and that they prevent soldiers from experiencing the horrors of war. Further, he posited that the use of autonomous weapons would be comparable to the use of soldiers, as they are much the same in how they operate based on commands. However, the advantage to using autonomous weapons compared to soldiers is that they are able to carry out hazardous missions humans cannot be sent on.

Another point Dr. MacIntosh raised in favour of autonomous weapons is that they would carry out fewer prejudice-driven killings, thus resulting in fewer deaths. However, due to the inability of robots to make informed judgment calls, he stated that they should not be sent on missions that require human discretion. As an example of such a mission, he described a situation where a child soldier is looking to surrender to opposing forces.

Then Dr. MacIntosh’s time ran out, and it was Dr. Sharkey’s turn to take the floor. Dr. Sharkey began by asserting that militaries have a science fiction perspective on the abilities of robots. To explain further, he presented slides demonstrating how autonomous weapons actually function and provided some examples of current weapons in use and in development. He articulated that a simplified way of understanding how an autonomous weapon functions is that it receives a sensory input (such as from a heat sensor), which triggers a motor output (such as firing a gun).

Dr. Sharkey then explained that autonomous weapons, once activated, are completely under computer control. This is a problem because, as he said, “robots are easy to fool,” and because current technology is not reliable enough at discerning civilians from armed combatants. Further, because we cannot predict how the algorithms operating the weapons will behave in every circumstance, especially when autonomous weapons come into contact with each other, unpredictable and dangerous outcomes could occur on the battlefield. These algorithms are also biased in nature, which could lead to indiscriminate targeting practices.

Dr. Sharkey also argued that the use of these weapons will accelerate and amplify the pace of battle. And while autonomous weapons can be more accurate in targeting, the problem rests with what they are choosing to target. Dr. Sharkey was clear in reminding us that these machines are weapons, which are ultimately in the hands of humans: while the robots themselves may not be prejudiced, as Dr. MacIntosh said, they are the tools of those who are. Further, these weapons won’t be reserved to the military. Here, Dr. Sharkey offered the example of drone swarms operated by police forces using pepper spray on crowds.

After both parties provided their views, they were each given a brief opportunity to respond and defend their points. Dr. MacIntosh started by asserting his position that autonomous weapons should be used as threats to deter violence, rather than actually put into use. To this, Dr. Sharkey responded that while we often hope for weapons never to be used, this is not the reality, as they are always implemented in the execution of mass murder.

The floor was then opened up for questions from the audience, which sparked interesting responses from both speakers. Questions ranged from issues of legality and science fiction to questions of morality and implementation. By the end of the event, the audience was left with much to think on concerning the future of robotics and warfare, and what role we all will play in that future.

Throughout the debate I found myself thoroughly engaged, on the edge of my seat, biting my tongue. Not only did the debate’s topic engage me, but so did the often heated interactions between the speakers. When Dr. MacIntosh proposed the need for autonomous weapons as a means of intervention in what he termed dictator and third world states, Dr. Sharkey was quick to respond on the neo-colonial narrative being put forward. On another point, Dr. Sharkey called out Dr. MacIntosh as naive in his proposal that it is currently possible to create unbiased programming. At these points I found myself really aligning with Dr. Sharkey, wondering to what lengths nations such as the USA might go to justify and reframe their use and creation of autonomous weapons as ethical and unbiased.

So what can we take away from this battle of wits on autonomous weapons? Are killer robots truly an inevitable future of warfare? Is there hope for a more peaceful alternative? As someone who gets chills at the prospect of an Amazon drone landing in my yard, I greatly hope for the latter. As Dr. Sharkey argued, these weapons of destruction are currently being thought of in science fiction terms, but are being used and created in the real world, where real consequences exist. While the notion of human lives being spared is an alluring prospect of autonomous war, we must realize that those who build and advocate for killer robots understand this fantasy as truly nothing more than a carrot driving their horses into battle. War without death, under the implementation of autonomous weapons, is an oxymoron at best. At worst, killer robots are a near future that will affect all of us, both on and off the battlefield.

If you are interested in learning more about the speakers, or about how you can join the campaign against killer robots, check out the links below.

CBC’s The National coverage of the event can be viewed in their video “Stopping Killer Robots Before They Get To Us First.”

Dr. Noel Sharkey’s Twitter for the campaign against killer robots can be accessed here.

Dr. Duncan MacIntosh’s profile can be read here.

* Bryn Shaffer is an MA student in the Women and Gender Studies Program at Saint Mary’s University where she is exploring the topics of gendered robot design in both science fiction and reality.


Source: US D.O.D. Illustration by Staff Sgt. Alexandre Montes.

On March 6th news broke that Google was participating in a pilot project with the US military, supplying artificial intelligence capabilities to automate the analysis of drone surveillance footage (see Gizmodo and New York Times). Since then, Google employees have signed a petition opposing their company’s involvement, twelve employees are “resigning in protest,” and “tech workers” in the broader industry have circulated their own petition.

Now, a coalition of academics and researchers has released an “Open Letter” in support of the Google employees and tech workers. The Open Letter, co-authored by professors Peter Asaro (The New School, New York), Lilly Irani (University of California, San Diego) and Lucy Suchman (Lancaster University, UK), begins with the following statement:

As scholars, academics, and researchers who study, teach about, and develop information technology, we write in solidarity with the 3100+ Google employees, joined by other technology workers, who oppose Google’s participation in Project Maven. We wholeheartedly support their demand that Google terminate its contract with the DoD, and that Google and its parent company Alphabet commit not to develop military technologies and not to use the personal data that they collect for military purposes. The extent to which military funding has been a driver of research and development in computing historically should not determine the field’s path going forward. We also urge Google and Alphabet’s executives to join other AI and robotics researchers and technology executives in calling for an international treaty to prohibit autonomous weapon systems.

You can read the entire letter/petition here: Open Letter in Support of Google Employees and Tech Workers.


Feature image source: US D.O.D. 2017. “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End.




By Teresa Heffernan, Series Editor of Social and Cultural Studies of Robots and AI (Palgrave Macmillan)*

As science was emerging as a discrete and soon to be dominant way of knowing, and as the industrial revolution was transforming the English countryside, Thomas Love Peacock in his “Four Ages of Poetry” (1820) argued that poetry was increasingly useless and retrograde in the age of scientific invention: “A poet in our times is a semi-barbarian in a civilized community. He lives in the days that are past. His ideas, thoughts, feelings, associations, are all with barbarous manners, obsolete customs, and exploded superstitions. The march of his intellect is like that of a crab, backward.”


Staged production of R.U.R. (source Smithsonian.com)

In the age of robotics and artificial intelligence this dismissal of fiction, and the humanities more generally, has only escalated: literature departments, often treated as relics of the past, exist on life support while think tanks like the well-funded Singularity University, founded by Peter Diamandis and Ray Kurzweil and located in Silicon Valley, thrive. This for-profit, unaccredited institution, sponsored by companies such as Google, Deloitte, and Genentech, says its mission is to teach people “to utilize accelerating technologies to address humanity’s grand challenges.” Despite its declared interest in “humanity,” Singularity University offers no courses in the humanities and culture: nothing, for instance, on literature, linguistics, history, art, classics, gender studies, music, cultural studies, postcolonialism or philosophy. Promising to catapult us into a shiny future full of instant fixes, it casts aside the complicated terrain of thousands of years of culture in favour of the truth and practicality of technoscience harnessed to corporate interests. Humanity’s “hardest problems” (social inequity, colonialism, war, genocide, climate change, pollution, water scarcity, dying oceans, mental health, superbugs, and disappearing species) can all, this “university” promises, be solved by “exponential” technology. These problems are never, it seems, about the paucity of the ethical imagination.

C-3PO 1977 (Star Wars)

Roboworld, Pittsburgh (photo k.e.asp)

While fiction is often credited with inspiring or predicting technological inventions, when it comes to “serious” discussions about the future of robots and AI, fiction is reduced to cheerleading. The “truth” of technoscience, steered by corporate and military interests, takes over as AI and robotics engineers, computer scientists, and CEOs mine the rich array of “humanized” machines and artificial people that have populated literature. For instance, Amit Singhal, a software engineer and former vice-president at Google, wrote: “My dream Star Trek computer is becoming a reality, and it is far better than what I ever imagined.” So too, Cynthia Breazeal, director of the Personal Robots Group at the MIT Media Laboratory, was inspired by R2-D2 and C-3PO from Star Wars, concluding: “While emotional robots have been a thing of science fiction for decades, we are now finally getting to a point where these kinds of social robots will enter our households.” Two of the wealthiest and most powerful men in the world, Elon Musk and Jeff Bezos, also credit Star Trek for their companies, SpaceX and Blue Origin. Bezos announced at the 2016 Code Conference: “It has been a dream from the early days of sci-fi to have a computer to talk to, and that’s coming true.” The firm SciFutures hires fiction writers to use storytelling, defined as “data with soul,” as a way of accelerating and advertising “preferred” futures; its corporate clients include, among others, Ford, Visa, and Colgate. Yet this utilitarian and overly literal approach, the claim that fiction is coming true, shuts down the ethical potential of fiction.

Ursula K Le Guin at the lectern at the National Book Awards.

Ursula K. Le Guin (source The Guardian 2014)

Ursula K. Le Guin, in her powerful speech at the National Book Awards (2014) that went viral, argued that what we need are people who can imagine “alternatives to how we live now, and can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine some real grounds for hope.” She died in January, but her words about needing to get over our obsession with the latest technology grow more relevant by the day as we confront a host of new problems that have emerged from the blind investment in technoscience: from autonomous weapons and a new arms race, to the erosion of democracy with the mining and selling of data, to the built-in prejudice of proprietary black-box solutions that are marketed as objective, to name a few. As a literary critic, I want to retain the critical edge that fiction has to offer. Robots were born in fiction: the 1920 play R.U.R. by Karel Čapek first used the term, derived from the Slavic robota (forced labourer), to discuss the mechanization of humans under factory capitalism with its drive for efficiency. Fictional robots or talking computers are no more “real” than talking lions, clever rabbits, witches, demons or Captain Picard. From Greek mythology to Aesop’s Fables to Star Trek, literature has always been about exploring and negotiating what it means to be human, about who falls inside and outside that category, and about what sort of world we want to inhabit. The very nature of fiction calls for interpretation: it traffics in metaphor and metonymy, and it refuses to be rendered literal or forced into a singular future.

Percy Bysshe Shelley, responding to Peacock with his spirited “A Defence of Poetry” in 1821, wrote: “The cultivation of those sciences which have enlarged the limits of the empire of man over the external world, has, for want of the poetical faculty, proportionally circumscribed those of the internal world; and man, having enslaved the elements, remains himself a slave.” Shelley’s “Defence” might serve as a useful reminder of the limits of the reductive approach to fiction that seems to dominate. In the periods in history when calculation trumped imagination, Shelley argued, there was the greatest social inequality: the rich got richer and the poor got poorer as society was torn between “anarchy and despotism.”

As we witness the rise of global despots, the displacement of humans by wars and climate change, the increasing concentration of wealth in the hands of a few, and the disregard for the planet and fellow species in a world motivated by profit, we cannot look to new technologies alone to solve these problems. The cultivation of an ethical imagination that Shelley promoted at the outset of the industrial revolution seems newly urgent. Machine learning and robotics have much to offer, but as these technologies impact all humans, other animals and the planet, they cannot continue to operate in a silo. For the record, crabs don’t march backward; they move sideways.

*This blog was originally posted on the host site for Robotics & AI: The Future of Humanism, a Palgrave Macmillan book series on the social and cultural impacts of AI and robotics.