Archive for the ‘Theory’ Category

The Robotic Imaginary: The Human & the Price of Dehumanized Labor by Jennifer Rhee (University of Minnesota Press, 2018)

Book review by Teresa Heffernan (forthcoming in Novel)

Debates about whether robots will take over jobs or open up as yet unimagined career possibilities dominate the headlines. Silicon Valley and the techno-optimists promise us that robots will automate boring jobs and create new ones, leaving humans free to pursue their interests in the arts and sciences and ushering in a great era of equality, creativity, and freedom. Others warn that robots will take over close to half of all human jobs, dramatically increasing unemployment. Those owning the machines and platforms will throw workers into poverty, widening the already unconscionable gap between rich and poor and further ripping apart the social fabric of democracy. These competing scenarios typically frame questions about the impact of robots on labour in world economic forums and in the media.

Jennifer Rhee’s The Robotic Imaginary: The Human & the Price of Dehumanized Labor interrupts this debate to ask more basic questions about how robot labor is imagined by research labs, by the artificial intelligence industry, and in film, art and literature. Bringing this technology into conversation with cultural and literary studies and the humanities, Rhee considers the ways in which it reflects historical and current understandings of what it means to be human. Organized around chapters on caring and care labor, thinking and domestic labor, feeling and emotional labor, and dying and drone labor, Rhee’s book is concerned with how the contested terrain of the human is constituted and reconstituted by these new anthropomorphic technologies. This labor imagined in robotic form renders the human knowable, calculable, and recognizable while exposing the dehumanized others that exist outside the boundary of what is considered familiar and normal. Each chapter concludes with a short review of robotic art that offers an alternative imagining, a reconfiguring of the human as unknowable, particular, and irreducible.

The introduction offers an overview of the origins of robotics, a field that found its first expression in literature, was developed by scientists, and grew with military funding. The term artificial intelligence emerged out of the Dartmouth Project, which brought together a small group of men in 1956 to debate the hypothesis that machines could be made to simulate human intelligence. The collapse of the human and the machine, the anthropomorphic metaphor underpinning the field, has expanded, and continues to expand, the boundary of the human beyond this initial metaphoric union, Rhee argues, invoking Paul Ricoeur’s description of the workings of metaphor. The other critical factor shaping robotics has been DARPA (an agency of the US Department of Defense devoted to technological and military superiority), which has funded most of the research in the field since its creation in 1958.

Two of the founding texts in the field, Alan Turing’s test for machine intelligence and Masahiro Mori’s theory of the uncanny valley, illustrate Rhee’s central argument. In the first example, the imitation game begins with a man and a woman who are both trying to convince a judge via a teleprinter that they are female while the judge, who is in a separate room asking questions, tries to correctly identify the woman. Turing then suggests replacing one of the humans with a computer. The game is famously set up to police the boundaries between the human and the machine, but, as Rhee points out, the judge needs to conceptualize the human before s/he can possibly assess human likeness. Hence the game also opens up the possibility for the judge to misrecognize the human, rendering the very category “human” unstable and open while exposing the biases and normative assumptions at the heart of this policing exercise. In contrast, Mori’s theory of the uncanny valley, which sets out to determine the robot design that people would best relate to, enforces narrow normative versions of the human, Rhee observes, that are measured against disability and illness. In several graphs, Mori charts the point at which human-like replicas evoke positive affinity as opposed to eeriness. The “healthy” human occupies the highest point on the graph, while the corpse falls at the bottom of the stillness scale, the zombie at the bottom of the movement scale, and the ill person is slotted below the healthy one. In another of his graphs a prosthetic hand occupies the point of negative affinity. As Mori’s theory is in wide circulation and impacts the development of humanoid technologies and social robots, it is important, Rhee insists, to expose the biases informing his design model.

Karel Čapek’s play R.U.R. (1920) introduced the term robot, derived from Czech words for serfdom and forced labor, long before the development of the field. Driven by the capitalist goals of profit, productivity, and efficiency, the designers of organic humanoid robots in the play promise to liberate humans from labor and usher in a new era of freedom and leisure. The question of the robots’ “humanness” drives the play as Helena Glory hopes to liberate them from exploitation while their creators argue they are nothing but soulless machines. The play draws on the cultural memory of slavery and fears of slave rebellions to explore the dehumanization of workers under factory capitalism, which promises freedom for some at the expense of others. Alienated from their labor, however, the humans in the play fail to thrive and stop reproducing, while the robots, claiming their “humanity” by mimicking humans’ capacity for domination and violence, revolt and kill all the humans.

Rhee returns to these founding literary and scientific texts in order to open up the entwined questions of anthropomorphization and dehumanization that frame the next four chapters of her book. Chapter one considers Turing’s model of AI as a child that needs to be educated and Joseph Weizenbaum’s early “therapist” program ELIZA, demonstrating how care labor has been integral to AI. Gendered female, these often humanized AIs serve as emotional interlocutors, child educators, and romantic partners or spouses that perform both domestic and affective work. In contrast, “male” AIs, like Watson, are machines positioned as universal experts that disseminate knowledge in fields like medicine and law. Analyzing Richard Powers’ Galatea 2.2 and Spike Jonze’s Her, Rhee maintains that the gendering of AI thus replicates the historically devalued and underpaid reproductive labor of women that has sustained capitalism. Countering this devaluation, however, Rhee points to robotic art “that highlights affect’s constitutive role in cybernetics, transforming cybernetic circuits of communication and control into those of affect and care” (57). Nam June Paik’s Robot K-456, Norman White’s The Helpless Robot, Momoyo Torimitsu’s Miyata Jiro, and Simon Penny’s Stupid Robot and Petit Mal are presented as examples of robotic artwork provoking affective responses from their audiences, demanding that cybernetics be grounded in an ethics of care and interdependence, and foregrounding these traits as critical components of being human.

The second chapter, on “thinking,” further builds on the marginalization of reproductive labor in the field of AI. Early closed-world versions of AI that relied on highly schematic and simplified models of reality were followed by the hope that combining multiple “micro-worlds” would lead to greater complexity in AI systems. Rhee argues that the micro-world approach to AI, which is built on stereotypes and familiar norms and erases the unruliness of the real world, finds its parallel in The Stepford Wives. Ira Levin’s 1972 novel, inspired by Betty Friedan’s The Feminine Mystique, famously recounts the murder of women and their replacement with docile, immaculate, generic robots that are programmed to do housework and serve their husbands. Like the closed-world AI models, the female robots remain sealed off from the public world of wages, politics, and intellectual work, while real women with their complicated desires, politics, and aspirations must be killed off in order to sustain the unchanging ahistorical gendered hierarchy of Stepford.

Yet Rhee also argues that the fate of the real women in Stepford is sealed in part because of their refusal to acknowledge the working-class women who, as “outsiders” to the suburban enclave, are able to document the crimes committed in the area. Concerned with the fate of middle- and upper-class white housewives, Friedan’s work also ignores the many white working-class women, single women, and women of colour who were working outside the home in jobs that offered neither economic self-sufficiency nor independence from men, as bell hooks has noted. Moreover, the presentation of domestic labor and child rearing, the task of raising another human being, as unskilled and “mindless” perpetuates the devaluation of “women’s” work. The Stepford Wives and its contemporary adaptation, Ex Machina (2015), highlight the exclusionary and at times exploitative narrative of white middle-class feminism that finds racialized and classed women aiding white women’s liberation even as they are excluded from it.

Rejecting the symbolic micro-world models, Rodney Brooks developed an embodied approach to robotics in the 1980s that encouraged robots to interact with messy dynamic environments to develop machine “intelligence,” with the hope that they would “evolve” upward to humanoid AI. Yet while Brooks’ robots are physically situated in the world, they are, as several critics have pointed out, culturally and historically “dumb,” perpetuating the closed-world approach to AI. In addition to military robots, Brooks’ company iRobot designs autonomous robots that, like the Stepford robots, serve as mindless domestic laborers. In contrast to closed-world AI, Rhee concludes this chapter with several examples of robotic art—including Stelarc’s Fractal Flesh and Ping Body—that stress interdependence, open worlds and the vulnerability of the body.

In the third chapter, one of the most fascinating, Rhee explores social robots and emotional labor as another aspect of devalued reproductive labor and its ties to the military. In the 1990s, with new research on the importance of emotions in intelligence, robots funded by DARPA were developed based on the contested theory of “universal” emotions. Rhee argues that both the myth of universal emotions and the work of producing legible emotions are ways of policing the boundary of the human. Technologies developed from this theory that assume the external body reveals the truth of the individual, such as SPOT (Screening of Passengers by Observation Techniques), adopted by US Homeland Security, not only have had little success but expose the power relations embedded in them. Rhee explores the gendering and racializing of emotional labor and the dehumanization perpetuated by these technologies in her reading of Philip K. Dick’s We Can Build You and his later novel Do Androids Dream of Electric Sheep? The Voigt-Kampff test at the heart of this latter novel, imposed by those in control, measures emotional responses to scenarios or images to determine the “human” status of the responder. The test, of course, is never used on its android hunters, and Deckard’s sense of shame in brutally eliminating the androids at the command of the state remains inside him in any case and is never visible on the surface. The chapter concludes with two feminist robotic works, Omo by Kelly Dobson and Swarming Emotional Pianos by Erin Gee, which challenge the theory of the universality of emotions and its use in developing dehumanizing policing technologies.

The final chapter, on dying, considers the entanglement of reproductive labor and drone warfare. Targeting victims and perpetrators outside of any judicial system and under a veil of secrecy, drone warfare perpetuates the colonial and racial legacy of determining who gets included in and who gets excluded from the category of the human, which has been part of both post-Enlightenment subjectivity and US labor history. Rhee reviews American drone policy that identifies any military-aged man in certain areas as the enemy and that refuses to investigate those killed in the strikes or accurately document civilian deaths. She also reviews the history of cybernetics as a “war science” and Norbert Wiener’s early work on defense systems, which encouraged fighter pilots to identify with cybernetic German pilots to better understand the enemy other. The racialized Japanese enemy, however, was characterized as insects and vermin rather than as cyborgs, so no identification was encouraged. This dehumanizing racialization continues not only in drone policy but also in the asymmetry of drone targeting, fueled by the massive gulf between operators and their targets, who are viewed as “ants.” From the high accident rate of the machines to the “ambiguous” information that ends with dead civilians, the technology also reveals itself as highly fallible, exposing the misguided faith in technological omnipotence and quantifiable information that drives this form of warfare.

Reviewing drone art, Rhee provides a provocative analysis as she unpacks the differences between works that invite their western audiences to identify with racialized targets and those that challenge that identification in order to underscore the legacy of racial violence in America. She points to the limits of artworks that promote identification with those “over there” by invoking Judith Butler and her questions about whose lives count as grievable. Positioning America as a place of safety and justice, works such as Home Drone and Drone Shadow fail to acknowledge the continuity between drone strikes overseas and the violence and injustices inflicted on marginalized communities at home, a point driven home by the adoption of militarized robots by some local US police forces. In contrast, works such as Teju Cole’s Seven Short Stories about Drones refuse to ground ethics in familiarity and identification and instead insist on mourning lives that are unknowable. The artistic collective behind #NotaBugSplat, James Bridle’s Dronestagram, and Omer Fast’s film 5,000 Feet is the Best also suggest, Rhee argues, “an ethical relationship that foregrounds disorientation, uncertainty, and the unknown rather than the familiar, the known, the predictable” that direct cybernetic technology and drone warfare (172).

The Robotic Imaginary exposes the ways in which robot technologies perpetuate existing racial and gender hierarchies by devaluing certain labour and certain humans while valuing others, and it explores robotic art as a way of opening up imaginings that challenge these colonial, patriarchal, class and racial histories. As robots invade work spaces and as privatization erodes social responsibility, Rhee rightly insists we should ask of every robot figure “who is being dehumanized?” And what version of the human is considered “sacrosanct and familiar”? While automation and the restructuring of the labor force by multinationals like Google and Facebook that are buying up AI and robotic technology lie outside the argument of Rhee’s book, I did wonder about the very limits of the metaphor of the human as machine and whether dehumanization doesn’t begin with industry leaders in Silicon Valley who have so successfully propagated the view that there is no difference between the two. Rhee’s otherwise excellent reading also falls a little short in its American-centric focus. What, for instance, would she have to say about Japan’s embrace of the “robot revolution,” in lieu of immigration, that is trumpeted in the face of a shrinking labor force? Or about the global fight for control of AI?

A vital contribution to the field, Rhee’s book does not argue that fiction is “coming true,” as is so often claimed in scientific and media reports on robots, but instead turns to literature and art as providing some insight into the always shifting ground of what it means to be human. It is essential reading for anyone negotiating the intersections of literary studies, anthropomorphized robotics and the impact of these technologies on society.

Just out in the Palgrave book series Social and Cultural Studies of Robots and AI, which I co-edit with Kathleen Richardson and Cathrine Hasse:

Enchanting Robots: Intimacy, Magic, and Technology by Maciej Musiał

“This book argues that robots are enchanting humans (as potential intimate partners), because humans are enchanting robots (by performing magical thinking), and that these processes are a part of a significant re-enchantment of the “modern” world. As a foundation, the author examines arguments for and against intimate relationships with robots, particularly sex robots and care robots. Moreover, the book provides a consideration of human-robot interactions and philosophical reflections about robots through the lens of magic and magical thinking as well as theoretical and practical re-evaluations of their status and presence. Furthermore, the author discusses the abovementioned issues in the context of disenchantment and re-enchantment of the world, characterizing modernity as a coexistence of these two processes. The book closes with a consideration of future scenarios regarding the meaning of life in the age of rampant automation and the possibility that designing robots becomes a sort of new eugenics as a consequence of recognizing robots as persons.”

As part of the Automatons public lecture series, Dr. Simon Kow will give a talk on “Asian Robots and Orientalism” this Wednesday (7:00 pm, March 7th) at Alumni Hall, King’s College (Halifax).

Abstract: “This talk examines aspects of the history of East Asian robots and Orientalism from the early modern period to the present, including the image of the automaton in western Orientalist views of Asian societies, the influences of Asian and especially Japanese cultural traditions on Asian approaches to robots, and ways in which certain depictions of robots in contemporary Japanese popular culture can be interpreted in terms of a counter-Orientalist narrative on technology.”

For more information go to the Automatons Lecture Series.


What do puppeteers mean when they speak about bringing a puppet ‘to life’? What is the difference between a prop and a puppet? Why do these questions matter not only in the creative arts but also in the study of how artificial intelligence and automatons are imagined? Dr. Dawn Brandes (Fountain School of Performing Arts and Halifax Humanities) will be exploring these questions in her talk this Wednesday, Feb 28th, 7:00pm at Alumni Hall, King’s College, Halifax. This talk is part of the public lecture series “Automatons: From Ovid to AI.” For information go to: Automatons Lecture Series.


Saint Mary’s University English prof Teresa Heffernan teamed up with Paul Abela of the Department of Philosophy, Acadia University, to argue the “con” side in a policy debate last month on the implications of AI and robots for the future of society. While the pro vs. con structure was simplistic, it generated a dynamic conversation on “grounds for optimism” compared to “concerns about what the future will bring.”

Dr. Heffernan argued that, “the massive industry and military investment driving this technology has already rendered a ‘con’ position irrelevant. There is no stopping it. All we can hope for is some sane regulation, more transparency, more education, less hype, and more voices in what’s been largely an unregulated field.” Acknowledging the optimism that characterized the early days of the internet, she outlined a range of negative impacts and risks indicative of the complex problems and disappointments of the new reality of social media and the “4th industrial revolution”. She concluded with the injunction that, “we cannot look to technology to solve our problems. We don’t need more engineers attempting to manufacture life for profit, we need more humans thinking creatively about how to share this planet with other complex lifeforms on which we all depend.”

The debate was hosted by Acadia University, with Ian Wilks (Acadia) serving as moderator. The “pro” side was represented by Danny Silver, Jodrey School of Computer Science, Director, Acadia Institute for Data Analytics, Acadia University, and Stan Matwin, Faculty of Computer Science, and Director of Big Data Analytics at Dalhousie University. Congratulations to Acadia University for hosting this fine event.


As the video that accompanied the July 2014 launch of the Jibo crowdfunding campaign shows, Jibo is a personal robot designed to convincingly interact in conversations, as well as perform organizational, cognitive and educational tasks such as conducting internet searches on command and telling children’s stories. From the various interviews that Jibo’s inventor Cynthia Breazeal gave during the launch, one can surmise that the project aims to dispel a myth: the myth that progress in the development of AI and robotics is defined in terms of human labour redundancy. As Breazeal puts it in one newspaper article, “There’s so much entrenched imagery from science fiction and the robotic past – robotics replacing human labour – that we have to keep repeating what the new, more enlightened view is” (quoted in Bielski 2014). In the enlightened view, robots support and enhance human activities, rather than supplant them – they are our “partners” and “companions” rather than recalcitrant machines and adversaries. Jibo is intended to incarnate that enlightened view both in its appearance and in the ostensible services it provides. I want to suggest that while Breazeal’s effort to design her personal robot around a non-reductive model of human-robot companionship is valuable in its own right, her denial of the validity of labour replacement concerns only serves to cover over the real and complex problems of our entwinement with technology under the conditions of consumer capitalism.


“People’s knee-jerk reaction right now is that technology is trying to replace us. The fact that Jibo is so obviously a robot and not trying to be a human is important because we’re not trying to compete with human relationships. Jibo is there to support what matters to people. People need people” (Breazeal quoted in Bielski, Globe and Mail, 24 July 2014).

In an interview with Zosia Bielski of the Globe and Mail, Breazeal stresses that Jibo is “so obviously a robot and not trying to be a human.” In doing so, she draws attention to her effort to diverge from a prevalent research trend in personal robotics, a trend which aims to make machines that look, as well as function, like humans in terms of bi-pedal locomotion, speech and facial features, among other things. Jibo is a counter-top, bust-like system that, at first glance, appears to hybridize a PC flat-screen monitor with the shiny white helmet-head of an astronaut. As a stationary, armless device Jibo doesn’t follow “mobile assistants” like ASIMO or Reem, or embodied cognition platforms like iCub, into humanoid terrain. Devoid of recognizably human facial features, it has even less affinity with android creations like the Geminoid and Telenoid robots that mimic the aesthetic and emotive characteristics of human faces and bodies. Unlike these projects, Jibo is not trying to look like a human.

Quite the opposite, according to journalist Lance Ulanoff, who argues that one of Breazeal’s design objectives is to avoid the anxiety and repulsion that may arise when people encounter robots that mimic human traits too closely (Ulanoff 2014). This experience of strangeness, referred to in robotics literature as the “uncanny valley,” is believed to inhibit emotional investment in robots, which in turn presents a viability problem for robotics projects, “the kiss of death” as one writer puts it (Eveleth 2013). While the concept of the uncanny valley is, itself, a matter of debate (Eveleth 2013), Jibo’s design, according to Ulanoff, is intended to keep the human/robot distinction clearly differentiated at the perceptual level. It is designed to be recognizably robotic. As Breazeal states, “It’s a robot, so let’s celebrate the fact it’s a robot…” But if Breazeal intends to keep the human/robot boundary clearly delineated, she’s not trying to re-entrench robots as mere machines (“appliances” in Ulanoff’s terms) or as utterly unrecognizable, and therefore threatening, aliens in our midst.

If robots are different from humans, Breazeal seems to be trying to demonstrate that the difference does not necessarily amount to an opposition, an unbridgeable gap, played out in man vs. machine sci-fi stories and rhetoric around human redundancy. If the problem is framed in terms of difference rather than opposition, then the task helpfully shifts from waging a defensive war against recalcitrant or malevolent machines to developing bonds between autonomous, non-reducible entities. In this respect, Breazeal talks about “humanizing technology” (Markoff 2014), which is not to be mistaken for turning robots into humans. Instead, as Ulanoff (2014) explains, the idea is to integrate movements and social behaviors that trigger positive human responses. For example, Jibo is designed to move in ways that make humans perceive it as an animate – autonomous, living – creature rather than as an externally determined thing (a mechanism). On Breazeal’s account, according to Ulanoff, this distinction between the animate and the inanimate is a matter of human perception, a perception that can be addressed in the design of a machine. For example, Jibo is designed to turn its head in a fluid, rather than a stiff, mechanical motion; and it “wakes up” – opens its eye and turns its head toward the speaker – when it hears its name, even if not directly called on. According to Breazeal, these behaviours indicate internal states, which to us amount to signs of life. No less significantly, Jibo is designed to participate in conversations in recognizably human ways, such as turning its head to face a speaker, an indicator of social presence and “reciprocal” engagement.

With these types of features, we are intended to perceive Jibo as living and, on top of that, as an interactive social agent. If Jibo is different from us because “he” (the voice is male) is a robot, he is nonetheless recognizably one of us because of his social abilities. For this reason, the Jibo promotional narrative is framed in terms of “partnership” and “companionship.” Rather than an adverse alien technology aimed at replacing authentically human work, Jibo is positioned as an extension of the human family, “supporting” and “augmenting” social relationships and experiences in the domestic sphere.

Neither a mere appliance nor an alien home invader, the Jibo construct starts to look like one of Donna Haraway’s “companion species.” Haraway (2008) emphasizes the point that the modern English term “companion” derives from the old French meaning, “one who breaks bread with” (or eats at the same table with), which in turn is derived from the Latin roots, “com” (together with) and “panis” (bread). In drawing attention to the roots of this word, Haraway endeavours to counter conventional narratives about human/animal relations, narratives that, on her account, are built on binary, oppositional terms — either humans or animals, but not both together. If Jibo is spared the ethical dilemma that Haraway devolves equally to all biological beings (“Rather, cum panis, with bread, companions of all scales and times eat and are eaten at earth’s table together, when who is on the menu is exactly what is at stake…To be kin in that sense is to be responsible to and for each other, human and not.” Haraway 2010), the companionship model for social robots brings into play a similar narrative about overcoming false oppositions and recognizing a fundamental interdependency between humans and machines. It is only because we are entwined with technology that machines could be seen to augment rather than supplant us, to work with and for us like dogs do — a companionship model — rather than against us.

The companionship model gives rise to the picture of domestic equanimity depicted in the Jibo promotional video, a video, it is worth noting, that is weirdly foreshadowed in a 1989 VHS promo for “Newton,” an R2-D2-like domestic robot that augured much of what Jibo now promises. But the emphasis in the Jibo video on the domestic and personal spheres, and more importantly, on scenes of middle-class family life and the sandwich generation (between kids and aging parents), is telling. It speaks of a consumer life-world fantasy of human-robot “partnerships” that occludes the economic support system upon which it depends. In this respect, it should be noted that Jibo is intended for the consumer electronics market, starting at a price point (US$499 for the first, limited run) that is meant to put it in the same range as a high-end tablet (Ulanoff 2014). As such, it is a commodity, subject to the same abstract law of value as all the other electronic devices competing for consumers’ attention. These are commodities designed to quantify and mass-market affective, social and cognitive qualities, such as Jibo’s friendly demeanor and social reciprocity. Seen in this light, Jibo may well not be intended to “replace” human labour, but rather to create a new need, a new form of social outsourcing, for the sake of profit. And because consumer electronic devices are purposely built with “lifespans” ranging from two to, at most, five years, they are fundamentally destined for replacement. As such they bear a material fungibility that is romantically evoked in the robot scrap-heap scavenging scene in the film A.I., but that is also all too evident in burgeoning digital scrap heaps worldwide, depositories that are, in turn, only the residual traces of an ecologically devastating industry.

Yet even if we put aside the troubling issues associated with the consumer electronics industry for which Jibo is destined, the Jibo narrative of robotic partnership is built on a disavowal of the ways in which developments in AI and robotics continue to displace blue- and white-collar work. These trends have seen a considerable amount of coverage recently in the wake of the Oxford University and Pew Research Center assessments of the ranges (e.g. business processes, transportation/logistics, production labour, administrative support, IT/engineering, and services such as elder care) and percentages of jobs at risk, and of the uncertainties associated with techno-utopian claims about the capacity of displaced workers to “adjust” (Bagchi 2013; Frey and Osborne 2013; Lafrance 2014; Pew 2014; Wohlsen 2014). A recent documentary called Humans Need Not Apply aptly demonstrates forms of robotic automation that have already taken place. So attributing anti-technology sentiment to sci-fi and the “robotics past,” as Breazeal does, serves more to obscure than to clarify the situation. The “enlightened” perspective does not seem to follow from the statement that, “People’s knee-jerk reaction right now is that technology is trying to replace us.” Rather, it would be more enlightened to say that both things are true: robots and AI can and even do support personal and social capabilities, in some spheres and for some, but not necessarily all, people; and at the same time, as irreversible job losses and increasingly precarious employment structures indicate, our interdependency with such technologies may also diminish, and even destroy, human lives in many if not all fields.

By Karen E. Asp.


The Singularity University, founded by Peter Diamandis and Ray Kurzweil and located in Silicon Valley, brands itself with the tagline “Science. Technology. The Future of Humanity.” This for-profit, unaccredited institution offers opportunities such as: 7-day workshops for executives and entrepreneurs (US$12,000); a 10-week Graduate Studies Program (US$29,500); and events like the 2-day conference at NYU that focuses on how new technologies are impacting finance (VIP tickets are US$10,000; general admission is US$5,000; and a special rate that students can apply for is US$2,500). Corporate sponsors include Google, the Kauffman Foundation, and ePlanet Capital. Who is envisioning the “future of humanity”? The prices are already exclusionary; the core faculty and chairs listed on the university website are dominated by greying white men; and the faith in technology and profit is unwavering, while the world’s problems are understood as great “market” opportunities (http://singularityu.org).

The university’s mandate is to teach people “to utilize accelerating technologies to address humanity’s hardest problems.” Humanity’s “problems,” their slogan suggests, are never with humans themselves: poverty, depression, social inequity, colonialism, genocide, famine, climate change, pollution, trash, water scarcity, dying oceans, superbugs, and disappearing species can all be solved by innovative technology. The dark sides of science and technology are swept under the carpet as the seductive mantra of endless progress is trumpeted. This problematic strategy is brilliantly captured in the opening of Duncan Jones’ 2009 film Moon, which begins with an advertisement for “Lunar Industries.”


The promise of a world-healing technology is mixed with images of sparkling lakes, smiling racially diverse children and women, elephants roaming the savannah, a comforting voice, and a soothing soundtrack. The advertisement then gives way to the reality of a business that has capitalized on the oil crisis by establishing a mine on the moon to extract helium-3 and send it back to earth: a new technology that addresses the problems of an old one. On the desolate moonscape, Sam, the man who operates the system and longs to return home to his wife and family, discovers that he is one of many short-lived, replaceable clones with implanted memories of a family, and that he is slated to be incinerated at the end of his contract to save the company the hassle and expense of sending new workers to the moon. Even as Sam blindly serves technology in the name of the future of humanity, he realizes that he in turn has been enslaved.

Despite its declared interest in “humanity,” the Singularity University offers no courses in the humanities and human culture…nothing, for instance, on literature, language, gender studies, history, art, music, cultural studies, race studies, postcolonialism, or philosophy. Catapulting us into a shiny, bright future full of instant fixes, it casts aside the complicated terrain of ethics and fiction in favour of the truth and practicality of science harnessed to corporate interests. Reminiscent of nineteenth-century utopian dreams about technology, the Singularity University operates as if the horrors of the twentieth century—machine guns, gulags, gas ovens, atomic bombs, death camps, all designed by engineers and scientists and built by “reputable” companies—never happened. As if the scientists who worked on the atomic bomb and who lived to witness Hiroshima and Nagasaki never had a moment’s regret, even as Oppenheimer lamented in an address to the American Philosophical Society: “We have made a thing, a most terrible weapon, that has altered abruptly and profoundly the nature of the world … a thing that by all the standards of the world we grew up in is an evil thing. And by so doing … we have raised again the question of whether science is good for man.”

As science was emerging as a discrete and soon-to-be-dominant way of knowing and as the industrial revolution was transforming the English countryside, Thomas Love Peacock, in his “Four Ages of Poetry” (1820), argued that poetry was increasingly useless and retrograde in the age of scientific invention: “A poet in our times is a semi-barbarian in a civilized community. He lives in the days that are past. His ideas, thoughts, feelings, associations, are all with barbarous manners, obsolete customs, and exploded superstitions. The march of his intellect is like that of a crab, backward.” His friend Percy Bysshe Shelley responded with his spirited “A Defence of Poetry” in 1821: “The cultivation of those sciences which have enlarged the limits of the empire of man over the external world, has, for want of the poetical faculty, proportionally circumscribed those of the internal world; and man, having enslaved the elements, remains himself a slave.”

As profit, innovation, and technology (or the new educational push for STEM programs: Science, Technology, Engineering, Mathematics) are once again offered as short-term solutions for our world at the expense of the long traditions of the humanities, Shelley’s “Defence” might serve as a useful reminder of the limits of this approach. In the periods of history when calculation trumped imagination, Shelley argued, there was the greatest social inequality: the rich got richer and the poor got poorer as society was torn between “anarchy and despotism.” As we witness spontaneous global demonstrations and their brutal state suppression (from Cairo to Istanbul to London to New York City to Athens), the increasing concentration of wealth in the hands of a few, and the disregard for the planet and our fellow species, the cultivation of an ethical imagination that Shelley promoted at the outset of the industrial revolution seems newly urgent. Rather than manically throwing expensive new “exponential” technologies at older ones in a desperate attempt to deal with the problems they produced, we need to rethink our relationship to the future of humanity.

By Teresa Heffernan