Archive for the ‘News’ Category

What is the K5 “autonomous data machine”? This seems to be the underlying question in various news reports covering the recent commercial launch of Silicon Valley-based Knightscope’s new security technology. And it is quickly followed by more questions that in essence ask, what does it mean for us? Does it mean that security guards will no longer be required to do boring and dangerous patrol work, or does it imply job losses due to automation? Should we celebrate the opportunity for improved surveillance of private property, or worry about further diminishment of privacy rights? These questions are being framed through analogical references to movies and movie characters, but the references by no means settle them. In this respect, it is not surprising that the cartoonish Star Wars character R2-D2 is frequently invoked to describe the K5, given the latter’s unequivocally non-humanoid design, its capacity for autonomous movement, and its data collection and social interaction features (Markoff 2013, McDuffee 2014, Vazquez 2014). Its seemingly benign alien appearance has even evoked feelings of endearment on the part of some who have encountered it, according to Knightscope. People have referred to it as “cute” and have tried hugging it. But just how like R2-D2 is the K5?

[Images: R2-D2; the K5 being hugged; a Dalek]

The Atlantic ran a headline stating that the K5 is “less RoboCop and more R2-D2” because the robot is not “weaponized” (McDuffee 2014). Like RoboCop, it is on the side of the good in terms of protecting people and property; nonetheless, it is more of a scout than a warrior. Yet the K5 is equipped with 360-degree surveillance sensors, live video tracking, predictive analytic software and an optical character recognition feature that enables it to read license plates. Given these attributes and capacities, a privacy rights organization representative has said that the K5 “is like R2D2’s evil twin” (Markoff 2013). This account suggests that while the K5 may look like R2-D2, the similarity is an illusion because the K5 is designed to perform inherently invasive tasks, tasks that can facilitate further illegitimate assaults on individual rights and freedoms.

Perhaps despite themselves, the K5’s developers further explicate these potentially negative implications by defining the K5 through references to movies: “We don’t want to think about ‘RoboCop’ or ‘Terminator,’ we prefer to think of a mash-up of ‘Batman,’ ‘Minority Report’ and R2D2” (Markoff 2013). In this constellation of references, the ethical and political implications of the “pre-cog” surveillance system, as portrayed in Minority Report, readily negate R2-D2’s seemingly benevolent aspect. Here, the K5 both is, and is not, like R2-D2. Indeed, one might say that the K5 is like R2-D2 with respect to the functional attributes of an apparently ethically neutral “autonomous data machine”: a self-piloting, socially and environmentally interactive computer on wheels. And it is unlike R2-D2 insofar as it is haunted by an ambiguous ethical purpose articulated in a “mash-up” of “pre-cog” technology and that very same, cartoonish figure of R2-D2.

In describing her encounter with the K5, MIT Technology Review writer Rachel Metz (2014) observed that, “The robots managed to appear both cute and intimidating. This friendly-but-not-too-friendly presence is meant to serve them well in jobs like monitoring corporate and college campuses, shopping malls, and schools.” On this account, the robots are intended to induce mixed feelings. Yet in what seems to be a casual introductory reference, Metz invokes “Daleks” rather than R2-D2 to aid in describing the K5’s appearance, ostensibly because the former are tall in stature, like the K5, which is roughly 1.5 meters (5 feet) in height. R2-D2, on the other hand, is a mere 1.1 meters tall. But Daleks, who featured in the Doctor Who TV series, were intimidating for many reasons; they were, after all, a non-empathic race of robot-looking cyborgs bent on universal domination. Metz is not alone in referring to Daleks rather than R2-D2 to characterize the K5, and some, such as Sebastian Anthony (2014), suggest certain more sinister implications of that reference. Among other things, Daleks certainly would not invite hugs. Fiction may be helping commentators frame their questions concerning the new K5 security robot, but it is not providing them with neatly delineated boundaries or easy answers.

By Karen Asp



As the video that accompanied the July 2014 launch of the Jibo crowdfunding campaign shows, Jibo is a personal robot designed to convincingly interact in conversations, as well as perform organizational, cognitive and educational tasks such as conducting internet searches on command and telling children’s stories. From the various interviews that Jibo’s inventor Cynthia Breazeal gave during the launch, one can surmise that the project aims to dispel a myth. This is the myth that progress in the development of AI and robotics is defined in terms of human labour redundancy. As Breazeal puts it in one newspaper article, “There’s so much entrenched imagery from science fiction and the robotic past – robotics replacing human labour – that we have to keep repeating what the new, more enlightened view is” (quoted in Bielski 2014). In the enlightened view, robots support and enhance human activities, rather than supplant them: they are our “partners” and “companions” rather than recalcitrant machines and adversaries. Jibo is intended to incarnate that enlightened view both in its appearance and in the ostensible services it provides. I want to suggest that while Breazeal’s effort to model her personal robot in terms of a non-reductive human-robot companionship model is valuable in its own right, her denial of the validity of labour replacement concerns only serves to obscure the real and complex problems of our entwinement with technology under the conditions of consumer capitalism.

[Image: Companion Species blog graphic]

“People’s knee-jerk reaction right now is that technology is trying to replace us. The fact that Jibo is so obviously a robot and not trying to be a human is important because we’re not trying to compete with human relationships. Jibo is there to support what matters to people. People need people” (Breazeal quoted in Bielski, Globe and Mail, 24 July 2014).

In an interview with Zosia Bielski of the Globe and Mail, Breazeal states that Jibo is “so obviously a robot and not trying to be a human.” In doing so, she draws attention to her effort to diverge from a prevalent research trend in personal robotics, a trend which aims to make machines that look, as well as function, like humans in terms of bipedal locomotion, speech and facial features, among other things. Jibo is a counter-top, bust-like system that, at first glance, appears to hybridize a PC flat screen monitor with the shiny white helmet-head of an astronaut. As a stationary, armless device, Jibo doesn’t follow “mobile assistants” like Asimo or Reem, or embodied cognition platforms like iCub, into humanoid terrain. Devoid of recognizably human facial features, it has even less affinity with android creations like the Geminoid and Telenoid robots that mimic the aesthetic and emotive characteristics of human faces and bodies. Unlike these projects, Jibo is not trying to look like a human.

Quite the opposite, according to journalist Lance Ulanoff, who argues that one of Breazeal’s design objectives is to avoid the anxiety and repulsion that may arise when people encounter robots that mimic human traits too closely (Ulanoff 2014). This experience of strangeness, referred to in robotics literature as the “uncanny valley,” is believed to inhibit emotional investment in robots, which in turn presents a viability problem for robotics projects, “the kiss of death,” as one writer puts it (Eveleth 2013). While the concept of the uncanny valley is itself a matter of debate (Eveleth 2013), Jibo’s design, according to Ulanoff, is intended to keep the human/robot distinction clearly differentiated at the perceptual level. It is designed to be recognizably robotic. As Breazeal states, “It’s a robot, so let’s celebrate the fact it’s a robot…” But if Breazeal intends to keep the human/robot boundary clearly delineated, she’s not trying to re-entrench robots as mere machines (“appliances” in Ulanoff’s terms) or as utterly unrecognizable, and therefore threatening, aliens in our midst.

If robots are different from humans, Breazeal seems to be trying to demonstrate that the difference does not necessarily amount to an opposition, an unbridgeable gap, played out in man vs. machine sci-fi stories and rhetoric around human redundancy. If the problem is framed in terms of difference rather than opposition, then the task helpfully shifts from waging a defensive war against recalcitrant or malevolent machines to developing bonds between autonomous, non-reducible entities. In this respect, Breazeal talks about “humanizing technology” (Markoff 2014), which is not to be mistaken for turning robots into humans. Instead, as Ulanoff (2014) explains, the idea is to integrate movements and social behaviours that trigger positive human responses. For example, Jibo is designed to move in ways that make humans perceive it as an animate (autonomous, living) creature rather than as an externally determined thing (a mechanism). On Breazeal’s account, according to Ulanoff, this distinction between the animate and the inanimate is a matter of human perception, a perception that can be addressed in the design of a machine. Thus Jibo is designed to turn its head in a fluid, rather than a stiff, mechanical motion; and it “wakes up” (opens its eye and turns its head toward the speaker) when it hears its name, even if not directly called on. According to Breazeal, these behaviours indicate internal states, which to us amount to signs of life. No less significantly, Jibo is designed to participate in conversations in recognizably human ways, such as turning its head to face a speaker, an indicator of social presence and “reciprocal” engagement.

With these types of features, we are intended to perceive Jibo as living and, on top of that, as an interactive social agent. If Jibo is different from us because “he” (the voice is male) is a robot, he is nonetheless recognizably one of us because of his social abilities. For this reason, the Jibo promotional narrative is framed in terms of “partnership” and “companionship.” Rather than an adverse alien technology aimed at replacing authentically human work, Jibo is positioned as an extension of the human family, “supporting” and “augmenting” social relationships and experiences in the domestic sphere.

Neither a mere appliance nor an alien home invader, the Jibo construct starts to look like one of Donna Haraway’s “companion species.” Haraway (2008) emphasizes the point that the modern English term “companion” derives from the old French meaning, “one who breaks bread with” (or eats at the same table with), which in turn is derived from the Latin roots, “com” (together with) and “panis” (bread). In drawing attention to the roots of this word, Haraway endeavours to counter conventional narratives about human/animal relations, narratives that, on her account, are built on binary, oppositional terms — either humans or animals, but not both together. If Jibo is spared the ethical dilemma that Haraway devolves equally to all biological beings (“Rather, cum panis, with bread, companions of all scales and times eat and are eaten at earth’s table together, when who is on the menu is exactly what is at stake…To be kin in that sense is to be responsible to and for each other, human and not.” Haraway 2010), the companionship model for social robots brings into play a similar narrative about overcoming false oppositions and recognizing a fundamental interdependency between humans and machines. It is only because we are entwined with technology that machines could be seen to augment rather than supplant us, to work with and for us like dogs do — a companionship model — rather than against us.

The companionship model gives rise to the picture of domestic equanimity depicted in the Jibo promotional video, a video, it is worth noting, that is weirdly foreshadowed in a 1989 VHS promo for “Newton,” an R2-D2-like domestic robot that augured much of what Jibo now promises. But the emphasis in the Jibo video on the domestic and personal spheres, and more importantly, on scenes of middle class family life and the sandwich generation (between kids and aging parents), is telling. It speaks of a consumer life-world fantasy of human-robot “partnerships” that occludes the economic support system upon which it depends. In this respect, it should be noted that Jibo is intended for the consumer electronics market, starting at a price point (US$499 for the first, limited run) that is meant to put it in the same range as a high-end tablet (Ulanoff 2014). As such, it is a commodity, subject to the same abstract law of value as all the other electronic devices competing for consumers’ attention. These are commodities designed to quantify and mass market affective, social and cognitive qualities, such as Jibo’s friendly demeanor and social reciprocity. Seen in this light, Jibo may well not be intended to “replace” human labour, but rather to create a new need, a new form of social outsourcing, for the sake of profit. And because consumer electronic devices are purposely built with “lifespans” ranging from two to, at most, five years, they are fundamentally destined for replacement. As such, they bear a material fungibility romantically evoked in the robot scrap-heap scavenging scene in the movie A.I., and all too evident in the burgeoning digital scrap heaps worldwide, depositories that are, in turn, only the residual traces of an ecologically devastating industry.

Yet even if we put aside the troubling issues associated with the consumer electronics industry for which Jibo is destined, the Jibo narrative of robotic partnership is built on a disavowal of the ways in which developments in AI and robotics continue to displace blue and white collar work. These trends have seen a considerable amount of coverage recently in the wake of the Oxford University and Pew Research Center assessments of the ranges (e.g. business processes, transportation/logistics, production labour, administrative support, IT/engineering, and services such as elder care) and percentages of jobs at risk, and of the uncertainties associated with techno-utopian claims about the capacity of displaced workers to “adjust” (Bagchi 2013; Frey and Osborne 2013; Lafrance 2014; Pew 2014; Wohlsen 2014). A recent documentary called Humans Need Not Apply aptly demonstrates forms of robotic automation that have already taken place. So attributing anti-technology sentiment to sci-fi and the “robotic past,” as Breazeal does, serves more to obscure than to clarify the situation. The “enlightened” perspective does not seem to follow from the statement that, “People’s knee-jerk reaction right now is that technology is trying to replace us.” Rather, it would be more enlightened to say that both things are true: that robots and AI can and even do support personal and social capabilities, in some spheres and for some, but not necessarily all, people; and that, at the same time, as irreversible job losses and increasingly precarious employment structures indicate, our interdependency with such technologies may also diminish, and even destroy, human lives in many if not all fields.

By Karen E. Asp.