Cyborg Futures: Animal Life and Social Robot Futures Workshop

Event Summary and Conclusions by Teresa Heffernan

Poster

We had a wonderful group of international and interdisciplinary speakers at Saint Mary’s University on March 31/April 1, 2017. They all took time out from their very busy schedules to come to Halifax to discuss robots and artificial intelligence at the Cyborg Futures Workshop. Academics from literary theory, digital culture, anthropology, sociology, environmental studies, robotics, and evolutionary biology, along with students and the public, convened for a lively discussion about technologies that are impacting us all. This workshop is part of a larger SSHRC-funded project–Where Science Meets Fiction: Social Robots and the Ethical Imagination–that is about shifting the conversation about robots and AI, which has been animated by fiction but dominated in the real world by the military and industry. Opening the discussion up to wider social and cultural contexts–from the impact of technology on human relations; to non-human animals, the environment and trash; to racism, imperialism and misogyny; to automation, labour and capitalism; to killer robots and the military; to the problematic collapse of science and fiction—this workshop considered both the infrastructure currently being laid that is forcing us down a troubling path and imaginative alternatives to it.  What follows cannot possibly do justice to the richness and complexity of the talks, so please click on the hyperlinks to listen to them.

Born in Fiction: Robots, Artificial People, and Animate Machines

Mr. Rastus Robot (Paleofuture)

The term robot was popularized by R.U.R. (Rossum’s Universal Robots), a 1920 play by the Czech writer Karel Čapek. The mass-produced and servile humanoid machines in the play rise up against their masters and overthrow them. The play is about the anxieties of industrialization, technological change, and the mechanization and exploitation of human labour. Yet as the literary theorist Louis Chude-Sokei argues, the anxieties expressed in these fictional works about robots were framed by nineteenth-century discourses about race that linked blacks to machines. Questions about whether machines could think, whether they could feel, whether they had souls, whether they were worthy of rights, whether they would revolt–were all thoroughly steeped in the legacy of colonialism, racial coding and slavery. From Mr. Rastus Robot, “the most lifelike of mechanical men,” built by Westinghouse in the 1930s, to Norbert Wiener’s work in cybernetics, the industry also explicitly borrowed from the material histories of blacks, who were positioned as the prosthetic extensions of white masters. The history of technology and industrialization, Chude-Sokei contends, is haunted by colonialism and racism.

I-Robot

Despina Kakoudaki examines the ways in which robots, androids, cyborgs, and automata are constructed in relation to people in literature, film, and popular culture. Attentive to the gap between current technological innovations and the fantasies and desires that give rise to “unreal” artificial people, she suggests that even the most contemporary versions of them are informed by ancient tropes that perform the cultural work of elucidating and negotiating what it is to be human. Rather than understanding artificial people (whether real or fictional: Frankenstein, Elektro, ASIMO, etc.) only in terms of an impending robot future, she argues that they have been the constant and long-standing companions of humans. In other words, her poetics of robots counters the prevalent readings of artificial people as continuous with developments in science and instead offers innovative readings of them as always having been part of an imaginative landscape. Her encyclopedic review of artificial people–from ancient stories to origin myths to Aristotle’s theories of animation to Frankenstein to Star Trek to Battlestar Galactica to Ex Machina—exposes the plot lines and tropes that persist in the depictions of artificial births, mechanical and enslaved bodies, and angst about authenticity. Yet whether the artificial person is imagined as other or as passing, as subjugated or rebellious, as having some level of consciousness or agency or not, the representations also speak to the particular cultural moment in which these fictions are conceived.

Kismet (MIT News 2007)

Lucy Suchman argues that we should be wary of the fiction of “autonomous” robotics and artificial intelligence that replicates the myth of the liberal human subject. So often robots are presented as spectacular and “life-like,” a technology that seems to operate miraculously and seamlessly on its own. Yet what gets erased from the picture in promotional videos and media clips of celebrity machines–from Deep Blue to Cog to Kismet–is the enormous infrastructure that enables these systems to function, which includes the many human “appendages” needed for them to operate. The oft-cited Turing Test is typically described as the point at which a machine’s ability to exhibit intelligence is deemed equivalent to or indistinguishable from that of a human. The evaluator in the test is aware that one of the two “invisible” partners in this conversation is a machine, and thus asks questions in order to distinguish the human from the machine. But what is so often left out of this description, Suchman reminds us, is the initial design of this test, which involved a man, a woman, and an evaluator, with the man trying to confuse or trick the evaluator into thinking he is a woman and the woman supporting the evaluator by insisting she is the woman. The bodies are hidden from the evaluator in the course of this performance, but gendered assumptions (such as questions about hair length) all point to embodiment. When the machine takes the part of the man, however, the material body is abstracted and rendered invisible. As various narrations about AI and robotics encourage slippages between humans and machines and between the environment and the lab, Suchman argues we need to be attentive to this magic act that hides not only the enabling props but also the cultural and historical specificities of the technology.

Sex, War and Work: Machine-Human Relationships in the Twenty-First Century

Sex doll factory (Toronto Star 2016)

Kathleen Richardson works on the ways in which the marketing of sex robots capitalizes on a long history of patriarchy. Proponents of human-robot relationships often point to, for instance, the former taboo on interracial marriages. But anti-miscegenation laws were the legacy of the othering of Asians, Native Americans, and blacks, so this analogy perpetuates rather than dismantles this history as it suggests humans are interchangeable with objects. Factories producing sex dolls are full of silicone vaginas, breasts, heads, and legs, and online advertisements for robots reduce the female body to a list of components for sale that can be mixed and matched to create the “ideal” girlfriend. While the industry promises that this technology will allow for new and improved explorations of sexuality, these embodied representations, Richardson argues, encourage the reactionary view of real women as objects—returning us to the history of women as the legal property of men and as commodities to be exchanged on the market.

The study of swarms (Starflag Project)

The American military has been developing lethal autonomous weapons systems that they argue will be more “ethical” killers than humans. These weapons have in turn been met by a campaign to ban killer robots (www.stopkillerrobots.org/). But the military has also turned to “nature,” investing in models derived from life forms such as ants and birds in order to design swarming robots. Patrick Crogan considers how these new cognitive assemblages reshape the ethics and rules of warfare. The use of a swarming collectivity as the latest war weapon that promises to be faster and more effective than more conventional networks, Crogan argues, also renders war and humans more automatic than thoughtful. If humans are irreducibly technological creatures (from diapers to drones), we also operate with a noetic soul—variously defined as an inner wisdom, mental activity, intellect—that allows us to adopt rather than merely adapt to new technologies. Yet the logic of faster and faster automated systems that run on big data and outpace the intelligence of humans increasingly shuts down the noetic that might find alternatives to war in political diplomacy and imaginative solutions to global problems.
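The biologically derived swarm models Crogan describes rest on strikingly simple local rules: each agent reacts only to its neighbours, and collective behaviour emerges with no central controller. A minimal sketch of one such rule, cohesion, where each agent nudges toward the average position of the group; the positions and step size here are invented for illustration:

```python
# Minimal sketch of one biologically inspired swarm rule: cohesion.
# Each agent moves a small step toward the centroid of the group,
# so the swarm contracts without any central command.
positions = [(0.0, 0.0), (4.0, 0.0), (2.0, 6.0)]

def cohesion_step(positions, step=0.1):
    """Return new positions after one cohesion update."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n  # centroid x
    cy = sum(y for _, y in positions) / n  # centroid y
    return [(x + step * (cx - x), y + step * (cy - y))
            for x, y in positions]

positions = cohesion_step(positions)
```

Real swarm designs layer several such rules (separation, alignment, cohesion), but the point stands: the "intelligence" of the swarm is a property of many cheap, local interactions rather than of any individual unit.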

“Is a robot after your job?” (Manufacturing Tomorrow 2016)

Illah R. Nourbakhsh argues that robotics and AI built by corporate America and the military have superseded fictional scenarios in the cultural imaginary. Billionaires seeking immortality have bought into the view that humans are machine readable entities and that consciousness can be downloaded into machines, which means that valuable resources and research funds are being redirected to fantasy projects. Instead of being distracted by fantastical scenarios and dreams of immortality, Nourbakhsh contends, greater attention needs to be paid to how this technology is re-shaping society. The real-world impact of the view that humans are interchangeable with machines erodes humans’ confidence in their own agency and encourages the roboticization of human labour. Predictive analytics, free digital labour, and increasing automation will leave ever more people unemployed in the name of efficiency and profit, further eroding civic life and aggravating global wealth disparity. He proposes a different model, where we demystify the aura around technology and big data and teach people, and particularly children, how it works in order to put humans in the driver’s seat rather than rendering them the passive consumers of a technology that exploits them.

The Singularity: Capitalism, Ancient Cultures, and Evolution

Karen Asp discusses the ways in which the recent positioning of artificial intelligence as an “existential risk” and the much-publicized fear that it could spell the end of humanity is an upside-down representation of the workings of capital driving our current human-made environmental crisis. While autonomous AI is discussed as an inhuman, external force that is threatening civilization, and while those in the industry like Peter Thiel are calling for more resources to be re-directed to AI from climate change, Asp suggests there is nothing “mystical” about the pursuit of more and more efficient labour costs through increasing automation and abstraction. On the contrary, the resulting labour surplus “problem” is the goal, not the by-product, of capitalist competition. As an example of the misunderstood issue of “existential risk,” Asp points to the film Transcendence that offers up both the “healing” potential and the destructive possibilities of the “singularity”— the view that “super-intelligent” machines will exceed the limits of human cognition. This film prompted Stephen Hawking to warn that this catastrophic scenario could well come true as humans pursue the “explosion” of the new knowledge potential of this technology. Yet while we are focussed on technology going awry in some distant future, we are encouraged to embrace the “good technology” that promises a continually upgradeable and better life. Spurred on by a blind faith in progress, these narratives about technology misrepresent the immediate catastrophes of species extinction, dying oceans, and trash that is floating around in space; the mantra that deploying more technology will get us out of the conundrum of capitalist-driven problems rings hollow.

Pāṇini commemorative stamp (2004)

Coder and novelist Vikram Chandra considers what has been left out of the story of technology. The “mythic” model of the lone western alpha male entrepreneur championed in the novels of Ayn Rand and embraced as the iconic figure of Silicon Valley erases women’s and Asians’ early contributions to the field. Moreover, the first coding languages were modelled on modern linguistic theory, which in turn borrowed from the grammar of Classical Sanskrit developed by Pāṇini around 500 BCE. Pāṇini’s grammar of Classical Sanskrit was the first generative grammar, that is, a general theory that attempts to reveal the rules and laws that govern the structure of language. It was an attempt to eradicate confusion and ambiguity in the communication of the Vedas, the ancient sacred texts that required precise transmission. But these abstracted rules for communication proved fraught as the idiosyncrasies of context and culture and the openness and suggestiveness of language remained resistant to regulation, just as “bugs” hamper contemporary computer code (a disaster waiting to happen in Chandra’s view). The new religion of “big data” that dreams of perfect communication between humans and machines shares the dream of this ancient grammar that understood the universe as rooted in language. Yet rather than despair over the impossibility of creating a perfect code, a language that trumps all others, Chandra argues that the beauty of the aesthetic was found to reside not in clarity but in the meditative “pleasures of ambiguity.” The aesthetic experience was understood as re-activating all the layers of consciousness, feeling and memory but now released from the ego. The ambiguity of language thus resonates with the complexities of human consciousness.
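A generative grammar of the kind Chandra traces from Pāṇini to programming languages is, at bottom, a finite set of rewrite rules that produces the well-formed strings of a language. A toy sketch in Python; the grammar and vocabulary here are invented for illustration, not drawn from Sanskrit:

```python
import random

# A toy context-free grammar: each non-terminal maps to a list of
# possible expansions; anything not in the table is a terminal word.
GRAMMAR = {
    "S":  [["NP", "VP"]],           # a sentence is a noun phrase + verb phrase
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["scribe"], ["text"]],
    "V":  [["copies"], ["recites"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively applying rewrite rules."""
    if symbol not in GRAMMAR:       # terminal: emit the word itself
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

print(" ".join(generate()))         # e.g. "the scribe recites the text"
```

Every sentence the rules emit is well-formed, which is exactly the dream of eradicating ambiguity; what the toy also shows is how much of language (context, suggestion, double meaning) such rules cannot capture.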

“Evolving tadpole robots” (John Long Lab)

John Long, an evolutionary biologist who builds robotic models that mimic certain biological traits of early life forms, is quick to disentangle the animals he studies from the robots he builds. Resisting the mystification of technology as life-like, he reveals that his models are made out of such things as Tupperware, motors, neural nets, and jelly, materials that have little in common with biological bodies. This careful parsing of the differences between the two is a rare exception in a field that continually collapses animals (including humans) with robots/AI. Evolution–or “descent with modification,” as he calls it–is neither “predictive” nor linear, much to the irritation of computer engineers. Further, he argues, simulated models that “are designed to succeed” erase the embodiment, embeddedness and complexities of time and place that his robots are forced to navigate. Long’s work demonstrates a keen awareness that human history is hardly a blink in the long history of the planet–“how miserable, how shadowy and transient, how aimless and arbitrary the human intellect looks within nature,” as the German philosopher Friedrich Nietzsche wrote. The limits of a human-centric frame provoked Long to turn to the much more ancient life forms of fish for his models. It is this study that has caused him to challenge research models of AI “intelligence” that focus solely on the brain and neural activities. Long’s work makes clear that there is no clean division between the brain and the body, and that a body in an environment is key to understanding animal intelligence.

Some Concluding Remarks

1) Fiction is Not Coming “True”

Data (Star Trek)

Fiction long predates the field of robotics and artificial intelligence. Yet increasingly fiction and science are touted as having merged in the oft-cited mantra “fiction is becoming reality.” This conflation in the cultural imaginary helps to drive the robotics/AI industry. Two of the wealthiest and most powerful men in the world–Elon Musk and Jeff Bezos–credit Star Trek for inspiring their companies, SpaceX and Blue Origin. As if there is an inevitable trajectory from fiction to science, Bezos announced at the 2016 Code Conference: “It has been a dream from the early days of sci-fi to have a computer to talk to, and that’s coming true.” But it is this invocation of “dreams coming true” that needs to be challenged. If fictions offer an exploration and interrogation of the shifting terrain of what it means to be human, the industry’s overly literal readings fetishize the technology, strip it of its cultural and historical context, and claim it for the here and now. While the industry exploits fiction to help animate machines and bring them to “life” in the name of a certain technological direction, it erases the embedded worlds of these fictions that keep open the question of the future.

2) Demystifying Technology: Robot Labour?

Ray Kurzweil, now director of engineering at Google, and others in Silicon Valley are invested in the idea that computers can replicate and surpass humans. With support from corporations, governments and big money, this sector of the tech industry is one of the fastest growing. Corporations present robots and AI–Deep Blue, Siri, Jibo, Pepper, Robi, Watson, Google’s search engine–as increasingly autonomous and alive, but behind these “magic” machines are the humans producing the books, research, images, maps that are the source of the “machine” knowledge. Take computer language translation, for instance, which is only possible through the labour of millions of human translators, many of whom are now unemployed because of the technology. Moreover, as social robots are equipped with cameras, microphones and internet connections, their data mining possibilities are endless. The corporations behind the robots accumulate money by devaluing the human labour that produces this knowledge. In a world of ever increasing wealth disparity and precarious employment, robots are predicted to replace up to seventy percent of all jobs. The new religion of AI sells algorithms as magical solutions to human problems to a populace in awe of technology. What we need is more tech literacy and less myth-making.

3) Machines are Not Animals, Human or Otherwise

Wall-E

Social robots and sex robots are marketed as companions and even eventual marriage partners by those in the industry, like David Levy. Yet numerous studies have exposed the ways in which our addiction to technology with its promise to keep us continually “connected” has in fact left people feeling depleted, anxious, alienated and isolated. As digital technologies reshape human relations, this latest technology, with its seductive promise of robots that will cater to our every emotional need and sexual fantasy, promises to further aggravate human relations as it encourages humans to retreat into ever more solipsistic relationships with their machines. When new and improved versions come on the market, will we need to put these “friends” or “lovers” or “wives” out with the trash, and will they join the ever-growing piles of e-waste currently generated by other short-term commodities like cell phones and computers with their built-in obsolescence? As humans are encouraged to develop affective relationships with machines, discussions of robot rights are also on the rise and often begin with something like “we must get tougher on technology abuse or it undermines laws about abuse of animals.” This problematic conflation of machines and animals dates back to Descartes, but machines are not animals—human or otherwise. As industrialization drives the rapid rise of robots on the one hand and the rapid extinction of species on the other, it is all the more urgent to resist this conflation.

4) The New Imperialism

Silicon Valley, with its mania for “big data,” vows to liberate us from the whims and limits of humans—the AI/robot doctor, soldier, teacher, lawyer—will be smarter than any single human, or so the thinking goes. Yet the very algorithms that are driving the industry are necessarily informed by the biases of coders and the prejudices of the human-created material they mine, and they replicate all the toxic inequities that have infected human culture. The promise of an “objective system” that promotes the supposed “neutrality” and “universality” of big data echoes the British colonial system’s faith in a global Empire even as it cannibalized and enslaved other cultures. While there are many areas where big data will prove useful in addressing global problems, it is important to remember that it is neither objective nor intrinsically creative.
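The claim that algorithms inherit the prejudices of the material they mine can be shown in miniature: a model scored purely on historical decisions reproduces whatever bias those decisions encoded. A toy sketch; the hiring data below is invented for illustration:

```python
# Toy illustration: a "neutral" model trained on biased historical
# decisions simply reproduces the bias. Data invented for illustration.
historical_hires = [
    # (qualification score, group, was_hired) -- group "B" was
    # systematically rejected despite identical qualifications.
    (9, "A", True), (8, "A", True), (7, "A", True),
    (9, "B", False), (8, "B", False), (7, "B", False),
]

def hire_rate(group):
    """Fraction of past applicants from this group who were hired."""
    decisions = [hired for _, g, hired in historical_hires if g == group]
    return sum(decisions) / len(decisions)

# A model that predicts "hire" whenever the group's historical hire
# rate exceeds 50% has learned the prejudice, not the qualifications:
def model_predicts_hire(score, group):
    return hire_rate(group) > 0.5

print(model_predicts_hire(9, "A"))  # True
print(model_predicts_hire(9, "B"))  # False: same score, biased outcome
```

Nothing in the code mentions prejudice; the bias arrives entirely through the data, which is exactly why claims of algorithmic "neutrality" deserve scrutiny.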

5) Evolution is Not Teleological

Ex Machina

Techno-utopians look forward to the merging of machines and man as the next step in human evolution. In the film Ex Machina, the lead character, the CEO of a Google-like corporation, proposes that “one day the AI’s are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.” Yet this linear, teleological model has much more in common with religious narratives than with messy reality. Humans may indeed join the other species headed for extinction, but it will have nothing to do with the logic of evolution and “Lucy” fossils. As we enter into what has been called the sixth extinction, caused by humans who are contributing to the rapid eradication of the biodiversity of the planet on which they depend, machines efficiently building and generating more things for profit are unlikely to be our best exit strategy.
