Archive for the ‘News’ Category

Divergent perspectives on recent developments in artificial intelligence (AI) and robotics were evident at a public lecture hosted last week (Feb 14, 2018) at the University of King’s College (Halifax). The keynote speaker, Stan Matwin, presented a nuanced, but largely optimistic view of where research in AI is heading, and the value that advances like “deep learning” promise for society. Following Dr. Matwin’s talk, Teresa Heffernan offered some critical commentary, emphasizing how deceptive claims were being used to promote initiatives in the industry, feeding on and reinforcing collective fantasies and delusional thoughts about, for example, the “aliveness” of robots.

“Deep learning” process

Dr. Matwin (Dalhousie University, Canada Research Chair in Visual Text Analytics) summarized the history of AI and characterized advances in the field since the advent of “deep learning.” He explained how “classical” machine learning depended on human labour for the provision of the “representations” and “examples” comprising the “knowledge” imparted to the artificial system. In contrast, Dr. Matwin showed, the emerging field of “deep learning” requires human effort only in the compilation of “examples”; the machine “learns” what the examples represent by “finding non-linear relationships between” them. Moreover, according to Dr. Matwin, 2017 saw the development of a self-training chess-playing program (AlphaZero) able to learn the game without humans supplying either “representations” or “examples.” Based on the powers exhibited by “deep learning” systems, Dr. Matwin predicted that significant social changes, such as white-collar job losses, were on the horizon. Nonetheless, he averred that the ubiquity of high-tech machines like smartphones was indicative of the inevitability of technological progress in general, and of the benefits of AI in particular.
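
The point about “non-linear relationships” can be illustrated with a toy example. XOR (exclusive-or) is the classic case of a pattern that no single linear rule can capture, but that a layered composition of simple units with a non-linearity computes exactly. The sketch below is an illustration only, not anything from the lecture; the weights are hand-set for clarity, whereas in an actual deep learning system they would be found automatically by gradient descent over many examples.

```python
# Toy illustration: XOR cannot be computed by one linear threshold unit,
# but a two-layer composition of such units (a non-linear relationship
# between the inputs) computes it exactly.

def step(z):
    """Hard threshold non-linearity."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: one unit fires when at least one input is on (OR),
    # another only when both inputs are on (AND).
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output layer: OR but not AND, i.e. exclusive-or.
    return step(h_or - h_and - 0.5)

examples = [(0, 0), (0, 1), (1, 0), (1, 1)]
print([xor_net(a, b) for a, b in examples])  # → [0, 1, 1, 0]
```

No linear combination of x1 and x2 alone can separate these four cases; it is the intermediate layer plus the threshold non-linearity that makes the pattern learnable, which is the structural idea behind deep networks.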

Dr. Heffernan, professor of English at Saint Mary’s University and director of the Social Robot Futures Project, introduced her talk by showing a segment from the Tonight Show (Jimmy Fallon) in which the CEO of Hanson Robotics (David Hanson) affirmed that his robot “Sophia” was “basically alive.”

 

Dr. Heffernan then argued that “Sophia” was, in fact, merely a “chatbot” in a robotic body: it functions by reacting to a user’s statements, queries and expressions with prewritten scripts and/or information gathered from the internet. The user’s spoken statements are transcribed into text that is then matched with automated replies. Dr. Heffernan presented some of the open-source code used in programming Sophia’s chatbot capabilities. In sum, she argued, with reference to CEO Hanson’s performance on the Tonight Show, that “what you are watching is a showman and an impressive spectacle.”
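
The mechanism described above, transcribed text matched against canned replies, can be sketched in a few lines. This is a hypothetical minimal example, not Sophia’s actual code; the keywords and replies are invented for illustration.

```python
# Minimal sketch of a scripted chatbot: transcribed user text is matched
# against keywords, and a prewritten reply is returned. No understanding
# is involved, only lookup.

SCRIPTS = {
    "alive": "I am basically alive, just like you!",
    "weather": "Let me look that up on the internet for you.",
    "name": "My name is a closely guarded secret.",
}
FALLBACK = "How interesting. Tell me more."

def reply(transcript: str) -> str:
    text = transcript.lower()
    for keyword, canned in SCRIPTS.items():
        if keyword in text:
            return canned
    return FALLBACK

print(reply("Are you alive?"))      # → "I am basically alive, just like you!"
print(reply("I like long walks."))  # → "How interesting. Tell me more."
```

The fallback line is what produces the impression of responsiveness: whatever the user says, something plausible comes back.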

M-x doctor mode, an Eliza clone running in GNU Emacs (Wikipedia)

Dr. Heffernan explained how the scientist who invented the “chatbot” concept in the 1960s, Joseph Weizenbaum, later became a critic of the industry. In a famous experiment, he programmed the chatbot “Eliza” with a dialogic model based on psychotherapy, giving rise to the so-called “DOCTOR” script. Weizenbaum noticed that although users fully understood how the DOCTOR-scripted chatbot worked, which was to respond with stock phrases or pick up on the last statement the subject made, they nevertheless divulged intimate personal details and attributed feelings to it. Regarding this phenomenon, Dr. Heffernan quoted Weizenbaum’s own reflections: “What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” In the documentary Plug and Pray, Weizenbaum expressed his concern about the development of this technology given people’s susceptibility to being manipulated.
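
The DOCTOR technique of stock phrases plus echoing the user’s last statement can be sketched as follows. This is a simplified reconstruction for illustration, not Weizenbaum’s original code, and the patterns shown are invented.

```python
import re

# Simplified ELIZA/DOCTOR-style sketch: swap first- and second-person
# pronouns, then echo the user's statement back as a question.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(text: str) -> str:
    """Rewrite 'my mother hates me' as 'your mother hates you'."""
    words = text.lower().strip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def doctor(statement: str) -> str:
    # A stock pattern: "I am X" triggers a canned question about X.
    m = re.match(r"i am (.*)", statement.lower().strip(".!?"))
    if m:
        return f"Why do you say you are {m.group(1)}?"
    # Otherwise echo the last statement back, the other stock DOCTOR move.
    return f"Can you elaborate on why {reflect(statement)}?"

print(doctor("I am unhappy."))        # → "Why do you say you are unhappy?"
print(doctor("My mother hates me."))  # → "Can you elaborate on why your mother hates you?"
```

Even this crude pronoun-swapping trick produces replies that feel attentive, which is precisely the effect Weizenbaum found so troubling.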

Notwithstanding Weizenbaum’s public statements on these issues, according to Dr. Heffernan, contemporary “marketers of social robots like Sophia, which are now enhanced by faster computer processors and access to big data sets, encourage this delusional thinking instead of exposing it.”

Stan Matwin and Teresa Heffernan, Feb 14, 2018

You can watch Dr. Heffernan’s 15-minute talk, “Concerns About the Artificial Intelligence Industry,” below. More information about the “Automatons” lecture series at King’s College can be found here.

 

By Karen Asp (Feb 23, 2018)

Starting January 10th, 2018, the University of King’s College, Halifax, is hosting an exciting public lecture series, Automatons! From Ovid to AI, on the culture, science and politics of robots and AI. The series begins with a screening of Fritz Lang’s 1927 classic film, Metropolis, with live musical accompaniment by the Upstream Music Association. Talks will be given by international scholars and authors such as Stephanie Dick (Of Models and Machines), Despina Kakoudaki (Anatomy of a Robot: Literature, Cinema and the Cultural Work of Artificial People), and Courtney Ann Roby (The Written Machine between Alexandria and Rome). Renowned roboticist and commentator Noel Sharkey is scheduled to debate the issue of “fully autonomous weapons systems” with Dalhousie University philosophy professor Duncan MacIntosh. Teresa Heffernan, Saint Mary’s University English professor and director of the Social Robot Futures project, will open the series with an introductory lecture on robot imaginaries past and future.

The schedule of talks and events is presented below. More information on each talk can be found at Automatons! From Ovid to AI.

All lectures start at 7 p.m. and take place in Alumni Hall, University of King’s College, Halifax, except for the March 21 and March 28 events.


January 10: Fritz Lang’s “Metropolis”

Fritz Lang’s classic 1927 film, Metropolis, with live electroacoustic music, opens the Public Lecture Series. Venue: Alumni Hall

With musical accompaniment by the Upstream Music Association, the screening explores the intersection between electronics and improvisation, automation and real-time inspiration, featuring some of our finest cinematic improvisors: Amy Brandon on guitar and electronics, Steven Naylor on keyboard and electronics, Lukas Pearse on bass and electronics, and Brandon Auger on synthesizer.

January 17: Imagining Automatons

Teresa Heffernan, of Saint Mary’s University and director of the “Social Robot Futures” project, delivers the opening lecture on the past and future of robots. Venue: Alumni Hall

January 25: Ancient Automatons

Courtney Ann Roby, Cornell University, and author of The Written Machine between Alexandria and Rome (2016). Venue: Alumni Hall

February 14: Panel discussion on “Big Data and Autonomous Vehicles”

With Brian Flemming, Senior Fellow with the Van Horne Institute, Calgary, and Stan Matwin, Tier 1 Canada Research Chair, Dalhousie University. Venue: Alumni Hall

February 28: Imagined Puppet Life

Dawn Brandes, University of King’s College and Halifax Humanities. Venue: Alumni Hall

March 7: Asian Robots & Orientalism

Simon Kow, University of King’s College. Venue: Alumni Hall

March 21: War in the Age of Intelligent Machines

Renowned roboticist and commentator Noel Sharkey debates Duncan MacIntosh, Dalhousie University, on the role of autonomous weapons. Venue: Scotiabank Auditorium, Saint Mary’s University

March 28: Frankenstein

A special performance and lecture marking the 200th anniversary of the Mary Shelley classic

With Despina Kakoudaki, American University in Washington, DC, and author of Anatomy of a Robot: Literature, Cinema and the Cultural Work of Artificial People. Venue: Fountain School of Performing Arts, Dalhousie University

April 4: Living Artificially

With King’s alumna and University of Pennsylvania professor Stephanie Dick, author of Of Models and Machines

The 2018 Lecture Series is made possible with assistance from the University of King’s College (Contemporary Studies Program, Early Modern Studies Program and History of Science and Technology Program), Dalhousie University and Saint Mary’s University.

 

What is the K5 “autonomous data robot”? This seems to be the underlying question in various news reports covering the recent commercial launch of Silicon Valley-based Knightscope’s new security technology. And it is quickly followed by more questions that in essence ask, what does it mean for us? Does it mean that security guards will no longer be required to do boring and dangerous patrol work, or does it imply job losses due to automation? Should we celebrate the opportunity for improved surveillance of private property, or worry about further diminishment of privacy rights? These questions are being framed through analogical references to movies and movie characters, but the references by no means settle them. In this respect, it is not surprising that the cartoonish Star Wars character R2-D2 is frequently invoked to describe the K5, given the latter’s unequivocally non-humanoid design, its capacity for autonomous movement, and its data collection and social interaction features (Markoff 2013, McDuffee 2014, Vazquez 2014). Its seemingly benign alien appearance has even evoked feelings of endearment on the part of some who have encountered it, according to Knightscope. People have referred to it as “cute” and have tried hugging it. But just how like R2-D2 is the K5?

R2-D2; the K5 being hugged; a Dalek

The Atlantic ran a headline stating that the K5 is “less RoboCop and more R2-D2” because the robot is not “weaponized” (McDuffee 2014). Like RoboCop, it is on the side of the good in terms of protecting people and property; nonetheless, it is more of a scout than a warrior. Yet the K5 is equipped with 360-degree surveillance sensors, live video tracking, predictive analytic software and an optical character recognition feature that enables it to read license plates. Given these attributes and capacities, a privacy rights organization representative has said that the K5 “is like R2D2’s evil twin” (Markoff 2013). This account suggests that while the K5 may look like R2-D2, the similarity is an illusion because the K5 is designed to perform inherently invasive tasks, tasks that can facilitate even more illegitimate assaults on individual rights and freedoms.

Perhaps despite themselves, the K5’s developers further explicate these potentially negative implications by defining the K5 through references to movies: “We don’t want to think about ‘RoboCop’ or ‘Terminator,’ we prefer to think of a mash-up of ‘Batman,’ ‘Minority Report’ and R2D2” (Markoff 2013). In this constellation of references, the ethical and political implications of the “pre-cog” surveillance system, as portrayed in Minority Report, readily negate R2-D2’s seemingly benevolent aspect. Here, the K5 both is, and is not, like R2-D2. Indeed, one might say that the K5 is like R2-D2 with respect to the functional attributes of an apparently ethically neutral “autonomous data machine” – a self-piloting, socially and environmentally interactive computer on wheels. And it is unlike R2-D2 insofar as it is haunted by an ambiguous ethical purpose articulated in a “mash-up” of “pre-cog” technology and that very same, cartoonish figure of R2-D2.

In describing her encounter with the K5, MIT Technology Review writer Rachel Metz (2014) observed that, “The robots managed to appear both cute and intimidating. This friendly-but-not-too-friendly presence is meant to serve them well in jobs like monitoring corporate and college campuses, shopping malls, and schools.” On this account, the robots are intended to induce mixed feelings. Yet in what seems to be a casual introductory reference, Metz invokes “Daleks” rather than R2-D2 to aid in describing the K5’s appearance, ostensibly because the former are tall in stature, like the K5, which is 5 feet (about 1.5 meters) in height. R2-D2, on the other hand, is a mere 3.1 feet tall. But Daleks, who featured in the Doctor Who TV series, were intimidating for many reasons; they were, after all, a non-empathic race of robot-looking cyborgs bent on universal domination. Metz is not alone in referring to Daleks rather than R2-D2 to characterize the K5, and some, such as Sebastian Anthony (2014), suggest certain more sinister implications of that reference. Among other things, Daleks certainly would not invite hugs. Fiction may be helping commentators frame their questions concerning the new K5 security robot, but it is not providing them with neatly delineated boundaries or easy answers.

By Karen Asp

 

As the video that accompanied the July 2014 launch of the Jibo crowdfunding campaign shows, Jibo is a personal robot designed to convincingly interact in conversations, as well as perform organizational, cognitive and educational tasks such as conducting internet searches on command and telling children’s stories. From the various interviews that Jibo’s inventor Cynthia Breazeal gave during the launch, one can surmise that the project aims to dispel a myth. This is the myth that progress in the development of AI and robotics is defined in terms of human labour redundancy. As Breazeal puts it in one newspaper article, “There’s so much entrenched imagery from science fiction and the robotic past – robotics replacing human labour – that we have to keep repeating what the new, more enlightened view is” (quoted in Bielski 2014). In the enlightened view, robots support and enhance human activities, rather than supplant them – they are our “partners” and “companions” rather than recalcitrant machines and adversaries. Jibo is intended to incarnate that enlightened view both in its appearance and in the ostensible services it provides. I want to suggest that while Breazeal’s effort to model her personal robot in terms of a non-reductive human-robot companionship model is valuable in its own right, her denial of the validity of labour replacement concerns only serves to cover over the real and complex problems of our entwinement with technology under the conditions of consumer capitalism.

Companion Species blog graphic

“People’s knee-jerk reaction right now is that technology is trying to replace us. The fact that Jibo is so obviously a robot and not trying to be a human is important because we’re not trying to compete with human relationships. Jibo is there to support what matters to people. People need people” (Breazeal quoted in Bielski, Globe and Mail, 24 July 2014).

In an interview with Zosia Bielski of the Globe and Mail, Breazeal states that Jibo is “so obviously a robot and not trying to be a human.” In doing so, she draws attention to her effort to diverge from a prevalent research trend in personal robotics, a trend which aims to make machines that look, as well as function, like humans in terms of bipedal locomotion, speech and facial features, among other things. Jibo is a counter-top, bust-like system that, at first glance, appears to hybridize a PC flat-screen monitor with the shiny white helmet-head of an astronaut. As a stationary, armless device, Jibo doesn’t follow “mobile assistants” like Asimo or Reem, or embodied-cognition platforms like iCub, into humanoid terrain. Devoid of recognizably human facial features, it has even less affinity with android creations like the Geminoid and Telenoid robots that mimic the aesthetic and emotive characteristics of human faces and bodies. Unlike these projects, Jibo is not trying to look like a human.

Quite the opposite, according to journalist Lance Ulanoff, who argues that one of Breazeal’s design objectives is to avoid the anxiety and repulsion that may arise when people encounter robots that mimic human traits too closely (Ulanoff 2014). This experience of strangeness, referred to in robotics literature as the “uncanny valley,” is believed to inhibit emotional investment in robots, which in turn presents a viability problem for robotics projects, “the kiss of death” as one writer puts it (Eveleth 2013). While the concept of the uncanny valley is, itself, a matter of debate (Eveleth 2013), Jibo’s design, according to Ulanoff, is intended to keep the human/robot distinction clearly differentiated at the perceptual level. It is designed to be recognizably robotic. As Breazeal states, “It’s a robot, so let’s celebrate the fact it’s a robot…” But if Breazeal intends to keep the human/robot boundary clearly delineated, she’s not trying to re-entrench robots as mere machines (“appliances” in Ulanoff’s terms) or as utterly unrecognizable, and therefore threatening, aliens in our midst.

If robots are different from humans, Breazeal seems to be trying to demonstrate that the difference does not necessarily amount to an opposition, an unbridgeable gap, played out in man-versus-machine sci-fi stories and rhetoric around human redundancy. If the problem is framed in terms of difference rather than opposition, then the task helpfully shifts from waging a defensive war against recalcitrant or malevolent machines to developing bonds between autonomous, non-reducible entities. In this respect, Breazeal talks about “humanizing technology” (Markoff 2014), which is not to be mistaken for turning robots into humans. Instead, as Ulanoff (2014) explains, the idea is to integrate movements and social behaviours that trigger positive human responses. For example, Jibo is designed to move in ways that make humans perceive it as an animate – autonomous, living – creature rather than as an externally determined thing (a mechanism). On Breazeal’s account, according to Ulanoff, this distinction between the animate and the inanimate is a matter of human perception, a perception that can be addressed in the design of a machine. For example, Jibo is designed to turn its head in a fluid, rather than a stiff, mechanical motion; and it “wakes up” (opens its eye and turns its head toward the speaker) when it hears its name, even if not directly called on. According to Breazeal, these behaviours indicate internal states, which to us amount to signs of life. No less significantly, Jibo is designed to participate in conversations in recognizably human ways, such as turning its head to face a speaker, an indicator of social presence and “reciprocal” engagement.

With these types of features, we are intended to perceive Jibo as living and, on top of that, as an interactive social agent. If Jibo is different from us because “he” (the voice is male) is a robot, he is nonetheless recognizably one of us because of his social abilities. For this reason, the Jibo promotional narrative is framed in terms of “partnership” and “companionship.” Rather than an adverse alien technology aimed at replacing authentically human work, Jibo is positioned as an extension of the human family, “supporting” and “augmenting” social relationships and experiences in the domestic sphere.

Neither a mere appliance nor an alien home invader, the Jibo construct starts to look like one of Donna Haraway’s “companion species.” Haraway (2008) emphasizes the point that the modern English term “companion” derives from the old French meaning, “one who breaks bread with” (or eats at the same table with), which in turn is derived from the Latin roots, “com” (together with) and “panis” (bread). In drawing attention to the roots of this word, Haraway endeavours to counter conventional narratives about human/animal relations, narratives that, on her account, are built on binary, oppositional terms — either humans or animals, but not both together. If Jibo is spared the ethical dilemma that Haraway devolves equally to all biological beings (“Rather, cum panis, with bread, companions of all scales and times eat and are eaten at earth’s table together, when who is on the menu is exactly what is at stake…To be kin in that sense is to be responsible to and for each other, human and not.” Haraway 2010), the companionship model for social robots brings into play a similar narrative about overcoming false oppositions and recognizing a fundamental interdependency between humans and machines. It is only because we are entwined with technology that machines could be seen to augment rather than supplant us, to work with and for us like dogs do — a companionship model — rather than against us.

The companionship model gives rise to the picture of domestic equanimity depicted in the Jibo promotional video, a video, it is worth noting, that is weirdly foreshadowed in a 1989 VHS promo for “Newton,” an R2-D2-like domestic robot that augured much of what Jibo now promises. But the emphasis in the Jibo video on the domestic and personal spheres, and more importantly, on scenes of middle-class family life and the sandwich generation (between kids and aging parents), is telling. It speaks of a consumer life-world fantasy of human-robot “partnerships” that occludes the economic support system upon which it depends. In this respect, it should be noted that Jibo is intended for the consumer electronics market, starting at a price point (US$499 for the first, limited run) that is meant to put it in the same range as a high-end tablet (Ulanoff 2014). As such, it is a commodity, subject to the same abstract law of value as all the other electronic devices competing for consumers’ attention. These are commodities designed to quantify and mass-market affective, social and cognitive qualities, such as Jibo’s friendly demeanor and social reciprocity. Seen in this light, Jibo may well not be intended to “replace” human labour, but rather to create a new need, a new form of social outsourcing, for the sake of profit. And because consumer electronic devices are purposely built with “lifespans” ranging from two to, at most, five years, they are fundamentally destined for replacement. As such, they bear a material fungibility romantically evoked in the robot scrap-heap scavenging scene in the movie A.I., but one that is also all too evident in burgeoning digital scrap heaps worldwide, depositories that are, in turn, only the residual traces of an ecologically devastating industry.

Yet even if we put aside the troubling issues associated with the consumer electronics industry for which Jibo is destined, the Jibo narrative of robotic partnership is built on a disavowal of the ways in which developments in AI and robotics continue to displace blue- and white-collar work. These trends have seen a considerable amount of coverage recently in the wake of the Oxford University and Pew Research Center assessments of the ranges (e.g. business processes, transportation/logistics, production labour, administrative support, IT/engineering, and services such as elder care) and percentages of jobs at risk, and the uncertainties associated with techno-utopian claims about the capacity of displaced workers to “adjust” (Bagchi 2013; Frey and Osborne 2013; Lafrance 2014; Pew 2014; Wohlsen 2014). A recent documentary called Humans Need Not Apply aptly demonstrates forms of robotic automation that have already taken place. So attributing anti-technology sentiment to sci-fi and the “robotic past,” as Breazeal does, serves more to obscure than to clarify the situation. The “enlightened” perspective does not seem to follow from the statement that, “People’s knee-jerk reaction right now is that technology is trying to replace us.” Rather, it would be more enlightened to say that both things are true: robots and AI can and even do support personal and social capabilities, in some spheres and for some, but not necessarily all, people; and at the same time, as irreversible job losses and increasingly precarious employment structures indicate, our interdependency with such technologies may also diminish, and even destroy, human lives in many if not all fields.

By Karen E. Asp.
