Archive for the ‘Ethics’ Category

Public Debate: AI/Robots and Our Future

Posted: October 1, 2017 by keasp1 in AI, Ethics, Events, Robots

The aim of this public debate is to foster a broad and inclusive discussion which informs our understanding of the dynamics and consequences of the rise of AI and robotics and how to govern their impact on humanity and our world.

Time/Date: 7:00–8:30 pm, October 19, 2017 @ Acadia University, Wolfville, Nova Scotia. Room BAC 241.

Panel Members: Paul Abela, Acadia University; Teresa Heffernan, Saint Mary’s University; Stan Matwin, Dalhousie University; Danny Silver, Acadia University.

Moderator: Ian Wilks, Acadia University

A policy debate format will be used. Members of the Pro and Con teams will center their presentations on the following topics:
• AI/Robots and the impact on civil society (jobs and economic sustainability, governance)
• AI/Robots and our conception of what it is to be human (transhumanism, mortality, dominance/subservience/equality with the machine?)
• AI/Robots and our safety and security (social, political and military notions of responsibility and authority: where does the buck stop?)
• AI/Robots and human flourishing (privacy, a literate culture, an open and vibrant democracy)

For more information see: Panel Debate: AI/Robots and our Future


by Teresa Heffernan


We had a wonderful group of international and interdisciplinary speakers at Saint Mary’s University on March 31/April 1, 2017. They all took time out from their very busy schedules to come to Halifax to discuss robots and artificial intelligence at the Cyborg Futures Workshop. Academics from literary theory, digital culture, anthropology, sociology, environmental studies, robotics, and evolutionary biology, along with students and the public, convened for a lively discussion about technologies that are impacting us all. This workshop is part of a larger SSHRC-funded project–Where Science Meets Fiction: Social Robots and the Ethical Imagination–that aims to shift the conversation about robots and AI, a conversation that has been animated by fiction but dominated in the real world by the military and industry. Opening the discussion up to wider social and cultural contexts–from the impact of technology on human relations; to non-human animals, the environment and trash; to racism, imperialism and misogyny; to automation, labour and capitalism; to killer robots and the military; to the problematic collapse of science and fiction–this workshop considered both the infrastructure currently being laid that is forcing us down a troubling path and imaginative alternatives to it. What follows cannot possibly do justice to the richness and complexity of the talks, so please click on the hyperlinks to listen to them.

1. Born in Fiction: Robots, Artificial People, and Animate Machines

Mr. Rastus Robot (Paleofuture)

The term robot was popularized in the 1920 play by the Czech writer Karel Čapek, R.U.R. (Rossum’s Universal Robots). The mass-produced and servile humanoid machines in the play rise up against their masters and overthrow them. The play is about the anxieties of industrialization, technological change, and the mechanization and exploitation of human labour. Yet as the literary theorist Louis Chude-Sokei argues, the anxieties expressed in these fictional works about robots were framed by nineteenth-century discourses about race that linked blacks to machines. Questions about whether machines could think, whether they could feel, whether they had souls, whether they were worthy of rights, whether they would revolt–all were thoroughly steeped in the legacy of colonialism, racial coding and slavery. From Mr. Rastus Robot, “the most lifelike of mechanical men,” built by Westinghouse in the 1930s, to Norbert Wiener’s work in cybernetics, the industry also explicitly borrowed from the material histories of blacks, who were positioned as the prosthetic extensions of white masters. The history of technology and industrialization, Chude-Sokei contends, is haunted by colonialism and racism.


Despina Kakoudaki examines the ways in which robots, androids, cyborgs, and automata are constructed in relation to people in literature, film, and popular culture. Attentive to the gap between current technological innovations and the fantasies and desires that give rise to “unreal” artificial people, she suggests that even the most contemporary versions of them are informed by ancient tropes that perform the cultural work of elucidating and negotiating what it is to be human. Rather than understanding artificial people (whether real or fictional: Frankenstein, Elektro, ASIMO, etc.) only in terms of an impending robot future, she argues that they have been the constant and long-standing companions of humans. In other words, her poetics of robots counters the prevalent readings of artificial people as continuous with developments in science and instead offers innovative readings of them as always having been part of an imaginative landscape. Her encyclopedic review of artificial people–from ancient stories to origin myths to Aristotle’s theories of animation to Frankenstein to Star Trek to Battlestar Galactica to Ex Machina–exposes the plot lines and tropes that persist in the depictions of artificial births, mechanical and enslaved bodies, and angst about authenticity. Yet whether the artificial person is imagined as other or as passing, as subjugated or rebellious, as having some level of consciousness or agency or not, the representations also speak to the particular cultural moment in which these fictions are conceived.

Kismet (MIT News 2007)

Lucy Suchman argues that we should be wary of the fiction of “autonomous” robotics and artificial intelligence that replicates the myth of the liberal human subject. So often robots are presented as spectacular and “life-like,” a technology that seems to operate miraculously and seamlessly on its own. Yet what gets erased from the picture in promotional videos and media clips of celebrity machines–from Deep Blue to Cog to Kismet–is the enormous infrastructure that enables these systems to function, which includes the many human “appendages” needed for them to operate. The oft-cited Turing Test is typically described as the point at which a machine’s ability to exhibit intelligence is deemed equivalent to or indistinguishable from that of a human. The evaluator in the test is aware that one of the two “invisible” partners in this conversation is a machine, and thus asks questions in order to distinguish the human from the machine. But what is so often left out of this description, Suchman reminds us, is the initial design of this test, which involved a man, a woman, and an evaluator, with the man trying to confuse or trick the evaluator into thinking he is a woman and the woman supporting the evaluator by insisting she is the woman. The bodies are hidden from the evaluator in the course of this performance, but gendered assumptions (such as questions about hair length) all point to embodiment. When the machine takes the part of the man, however, the material body is abstracted and rendered invisible. As various narrations about AI and robotics encourage slippages between humans and machines and between the environment and the lab, Suchman argues we need to be attentive to this magic act that hides not only the enabling props but also the cultural and historical specificities of the technology.

Teresa Heffernan’s complete summary of the workshop can be found here. It includes:

1. Born in Fiction: Robots, Artificial People, and Animate Machines

2. Sex, War and Work: Machine-Human Relationships in the Twenty-First Century

3. The Singularity: Capitalism, Ancient Cultures, and Evolution

4. Some Concluding Remarks

What is the K5 “autonomous data robot”? This seems to be the underlying question in various news reports covering the recent commercial launch of Silicon Valley-based Knightscope’s new security technology. And it is quickly followed by more questions that in essence ask, what does it mean for us? Does it mean that security guards will no longer be required to do boring and dangerous patrol work, or does it imply job losses due to automation? Should we celebrate the opportunity for improved surveillance of private property, or worry about further diminishment of privacy rights? These questions are being framed through analogical references to movies and movie characters, but the references by no means settle them. In this respect, it is not surprising that the cartoonish Star Wars character R2-D2 is frequently evoked to describe the K5, given the latter’s unequivocally non-humanoid design, its capacity for autonomous movement, and its data collection and social interaction features (Markoff 2013, McDuffee 2014, Vazquez 2014). Its seemingly benign alien appearance has even evoked feelings of endearment on the part of some who have encountered it, according to Knightscope. People have referred to it as “cute” and have tried hugging it. But just how like R2-D2 is the K5?


The Atlantic ran a headline stating that the K5 is “less RoboCop and more R2-D2” because the robot is not “weaponized” (McDuffee 2014). Like RoboCop, it is on the side of the good in terms of protecting people and property; nonetheless, it is more of a scout than a warrior. Yet the K5 is equipped with 360-degree surveillance sensors, live video tracking, predictive analytic software and an optical character recognition feature that enables it to read license plates. Given these attributes and capacities, a privacy rights organization representative has said that the K5 “is like R2D2’s evil twin” (Markoff 2013). This account suggests that while the K5 may look like R2-D2, the similarity is an illusion because the K5 is designed to perform inherently invasive tasks, tasks that can facilitate even more illegitimate assaults on individual rights and freedoms.

Perhaps despite themselves, the K5’s developers further explicate these potentially negative implications by defining the K5 through references to movies: “We don’t want to think about ‘RoboCop’ or ‘Terminator,’ we prefer to think of a mash-up of ‘Batman,’ ‘Minority Report’ and R2D2” (Markoff 2013). In this constellation of references, the ethical and political implications of the “pre-cog” surveillance system, as portrayed in Minority Report, readily negate R2-D2’s seemingly benevolent aspect. Here, the K5 both is, and is not, like R2-D2. Indeed, one might say that the K5 is like R2-D2 with respect to the functional attributes of an apparently ethically neutral “autonomous data machine” – a self-piloting, socially and environmentally interactive computer on wheels. And it is unlike R2-D2 insofar as it is haunted by an ambiguous ethical purpose articulated in a “mash-up” of “pre-cog” technology and that very same, cartoonish figure of R2-D2.

In describing her encounter with the K5, MIT Technology Review writer Rachel Mech (2014) observed that, “The robots managed to appear both cute and intimidating. This friendly-but-not-too-friendly presence is meant to serve them well in jobs like monitoring corporate and college campuses, shopping malls, and schools.” On this account, the robots are intended to induce mixed feelings. Yet in what seems to be a casual introductory reference, Mech invokes “Daleks” rather than R2-D2 to aid in describing the K5’s appearance, ostensibly because the former are tall in stature, like the K5, which is 5 feet (about 1.5 meters) in height. R2-D2, on the other hand, is a mere 3 feet 7 inches (about 1.09 meters) tall. But Daleks, which featured in the Doctor Who TV series, were intimidating for many reasons; they were, after all, a non-empathic race of robot-looking cyborgs bent on universal domination. Mech is not alone in referring to Daleks rather than R2-D2 to characterize the K5, and some, such as Sebastian Anthony (2014), suggest certain more sinister implications of that reference. Among other things, Daleks certainly would not invite hugs. Fiction may be helping commentators frame their questions concerning the new K5 security robot, but it is not providing them with neatly delineated boundaries or easy answers.

By Karen Asp


Following the media sensation around Jibo, the “world’s first family robot,” in an article published in WIRED magazine (Sept 5, 2014) philosopher Evan Selinger reflects on the impacts that robot servants may have on our lives. While there may be advantages to living with domestic robots, notably having to do less manual work ourselves and having more leisure time, the irony, according to Selinger, is that we may also experience a decline in quality of life. This possibility arises with the development of robots, like Jibo, designed to take on personal and intimate functions. One of the risks of such outsourcing is the impairment of our capacity for practical reasoning, an impairment that Selinger associates with the automation of “predictive” thinking and its transfer to domestic robots and personal devices like Jibo, Siri and Google Now. Referring to the work of philosopher of technology Albert Borgmann, he asks, “Will we be as inclined to ask ourselves questions like: What do I really want, and why should I want it? And what will happen to our inclination to develop virtues associated with willpower when technology increasingly does our thinking for us and preemptively satisfies our desires?” Read the full article here.