Archive for the ‘AI’ Category

A talk by Teresa Heffernan at the “Ethics of AI in Context” interdisciplinary workshop, September 17, 2019. The workshop was hosted by the Ethics of AI Lab, a project initiated by the University of Toronto’s Centre for Ethics.

A talk by Teresa Heffernan

ETHICS OF AI IN CONTEXT. A series of talks presented by the Ethics of AI Lab, Centre for Ethics, University of Toronto

4 – 6 PM, Tuesday, Sept 17, 2019. Room 200, Larkin Bldg., 15 Devonshire Place, Toronto

The era of “disruptive” technologies has given way to an ethical quagmire. Biased algorithms, invasive facial recognition software, proprietary black boxes, the theft and monetization of personal data, and the proliferation of hate-spewing bots and deepfakes have undermined democracy. Killer robots and the automation of war have led to a new arms race, with Vladimir Putin declaring that whoever leads in AI will rule the world. The concentration of wealth and power in the corporations that own most of this resource-intensive technology, along with the environmental price tag of AI, can only hasten climate change. In response to these ethical problems, a number of research centres are now investing in the intersection of the humanities and AI in order to study its impact on society, notably the Schwarzman College of Computing at MIT, the Schwartz Reisman Institute for Technology and Society at the University of Toronto, and the Institute for Ethics in AI at Oxford’s Schwarzman Centre. An article about the MIT initiative noted: “The approach has the potential not just to diversify tech but to help ‘techify’ everything else,” while Geoffrey Hinton said: “My hope is that the Schwartz Reisman Institute will be the place where deep learning disrupts the humanities.” What these statements disavow, however, are the very different epistemological approaches that structure these fields. If we are to begin to deal with the ethical issues of AI, the humanities should not be “disrupted” and made to bow to the logic of big data, algorithms, and machines. In this talk, I will argue that it is only by keeping alive the tensions between artificial intelligence and the humanities that we can hope to have an informed debate about the limits and possibilities of this technology.

For more information, see the Ethics of AI Lab, Centre for Ethics, University of Toronto.

The Real Life ‘Ex Machina’ Is Here

Posted: April 23, 2019 by keasp1 in AI, Events, Film, Robots

Teresa Heffernan will present “The Real Life ‘Ex Machina’ Is Here: Restoring the Gap between Science and Fiction” at the Machine Agencies Speakers Series on Tuesday, April 23, 2019, from 3 to 5 pm at the Milieux Institute for Arts, Culture and Technology, Concordia University, 1515 Rue Sainte-Catherine W., EV Building, Room 11.455, Montréal, Quebec. For more information: www.facebook.com/events/1001883676675274/

The Big Thinking public panel on the social implications of AI, held at the Halifax Public Library on November 14, was a lively and well-attended event. The discussants included Teresa Heffernan, Ian Kerr, Fuyuki Kurasawa and Duncan MacIntosh, with Howard Ramos (Dalhousie) as moderator. Courtney Law provides a sense of the event here, noting for example that Teresa Heffernan “reminded the audience that AI and humans are inextricably linked because AI is built on data created by humans” and that “we sometimes assume ‘fauxtonomy’ when it comes to AI, attributing more complexity to machinery than it is due because we are influenced by fictional representations.” The event was sponsored by Dalhousie University, the Federation for the Humanities and Social Sciences, and the Halifax Public Library.

SSHRC has published a video recording of the entire event that you can view below or access here.

You are invited to join Teresa Heffernan (Saint Mary’s University), Ian Kerr (U of Ottawa), Fuyuki Kurasawa (York U) and Duncan MacIntosh (Dalhousie) for a panel discussion on ‘the potential social impacts of artificial intelligence and the role humanities and social sciences will play in identifying the legal, ethical and policy issues we should start considering today.’

Where: Paul O’Regan Hall, Halifax Public Library Central

When: Wednesday, November 14, 2018. 6:30 PM – 8:00 PM

Moderated by Gabriel Miller, Executive Director, Federation for the Humanities and Social Sciences

The event is sponsored by Dalhousie University (Offices of the President and the Vice-President Research, plus the Faculties of Arts and Social Sciences, Computer Science, Law, and Management), the Federation for the Humanities and Social Sciences and the Halifax Public Libraries.

The event will be free and open to the public with a reception to follow.

Questions? Contact fassalum@dal.ca

Join the Facebook event.

Teresa Heffernan, professor of English at Saint Mary’s University, will give a talk at the upcoming HAL-CON science fiction, fantasy and gaming convention, a massive multi-format event attended by some 9,200 people in 2017.

Dr. Heffernan’s talk, “Fiction Meets Science: Ex Machina, Artificial Intelligence and the Robotics Industry,” is scheduled for 6:15 pm, Friday, October 26, 2018, in Room 502 (Ballroom Level 5) at the Halifax Convention Centre, 1650 Argyle Street, Halifax, NS. For information and tickets, go to HAL-CON.com.

ABSTRACT: The conflation of AI and fiction in the cultural imaginary helps to drive the fantasy aspect of the robotics/AI industry that encourages the view that there is no difference between a computing machine and a human. If fiction offers an exploration and interrogation of the shifting terrain of what it means to be human, the industry’s overly literal readings of fiction fetishize the technology, strip it of its cultural and historical context, and claim it for the here and now. While the industry exploits fiction to help animate machines and bring them to “life” in the name of a certain technological future, it erases the “fictiveness” of the fiction that keeps open the question of the future and what it means to be human.

My talk, “Fiction Meets Science: Ex Machina, Artificial Intelligence and the Robotics Industry,” will argue that we need to restore the gap between the literary and scientific imaginings of AI and robots. Resisting literal readings of fiction, it considers the ways in which metaphors shape our reading of humans and other animals. For instance, in the field of AI, rather than the computer serving as a metaphor for the brain, the brain has come to serve as a metaphor for the computer. The film Ex Machina, as a modern-day Frankenstein story, exposes the consequences of this metaphor, which reduces humans to computing machines and in turn traps them in an algorithmic logic under corporate control. In this film, it is not Ava, the programmed machine, that is the subject of the experiment, but rather Caleb, who finds himself locked in the robot lab by the end of the story.

By Bryn Shaffer*

CBC, The National, April 2018

CBC news video on lethal autonomous weapons. Includes a clip from the debate between Noel Sharkey and Duncan MacIntosh at Saint Mary’s University, March 2018.

This March, Saint Mary’s University played host to an engaging and heated debate on the use of autonomous weapons of war, as part of the Automaton! From Ovid to AI public lecture series. The debate was between acclaimed BBC commentator and activist Dr. Noel Sharkey and Dalhousie philosophy professor Dr. Duncan MacIntosh. It was moderated by Dalhousie professor of philosophy Dr. Letitia Meynell. You could tell before the debate began, by the way the crowd was cramming itself into every inch of standing space in the already large auditorium, that we were in for a thought-provoking discussion.

After introductions by Dr. Sarty, dean of graduate studies at Saint Mary’s University, our two debaters took their positions: Dr. MacIntosh would be arguing in favour of autonomous weapons of war, and Dr. Sharkey would be arguing against them. Dr. Meynell stepped up to the mic to lay out the ground rules, rang the proverbial bell, and we were off.

Dr. MacIntosh started off by asserting that, regardless of the points he would raise in favour of autonomous weapons of war, he did not want to be misconstrued as being in favour of war or violence itself. At its core, his position was that peaceful means of engagement are sometimes not possible, and in those cases autonomous weapons are the superior choice.

He went on to argue that one of the major advantages of autonomous weapons is that they spare more human lives in the long run and prevent soldiers from experiencing the horrors of war. Further, he posited that the use of autonomous weapons would be comparable to the use of soldiers, since both operate on the basis of commands. The advantage of autonomous weapons over soldiers, however, is that they can carry out hazardous missions humans cannot be sent on.

Another point Dr. MacIntosh raised in favour of autonomous weapons is that they would carry out fewer killings informed by prejudice, resulting in fewer deaths. However, given the inability of robots to make informed judgment calls, he stated that they should not be sent on missions that require human discretion. As an example of such a mission, he described a situation in which a child soldier is looking to surrender to opposing forces.

Then Dr. MacIntosh’s time ran out, and it was Dr. Sharkey’s turn to take the floor. Dr. Sharkey began by asserting that militaries have a science-fiction perspective on the abilities of robots. To explain further, he presented slides demonstrating how autonomous weapons actually function and provided examples of current weapons in use and in development. He explained that, in simplified terms, an autonomous weapon receives a sensory input (such as a reading from a heat sensor) that triggers a motor output (such as firing a gun).

Dr. Sharkey then explained that autonomous weapons, once activated, are completely under computer control. This can be a problem because, as he said, “robots are easy to fool”. It is also a problem because current technology is not reliable enough at distinguishing civilians from armed combatants. Further, because we cannot predict how the algorithms operating the weapons will behave in every circumstance, especially when autonomous weapons come into contact with each other, unpredictable and dangerous outcomes could occur on the battlefield. These algorithms are also biased in nature, which could lead to indiscriminate targeting practices.
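As a way of picturing just how little is going on in Dr. Sharkey’s simplified model, here is a minimal, purely illustrative sketch of a sense-act loop in which one sensor reading triggers one motor output. Every name, threshold, and function in it is a hypothetical stand-in rather than anything drawn from a real system; the point is only that a loop like this contains no judgment at all, which is why, as he put it, such machines are easy to fool.

# A toy sketch, not real weapons code: every value and function below is
# hypothetical, and the logic is deliberately as crude as the model
# described above, with one sensory input mapped to one motor output.

FIRING_THRESHOLD_C = 37.0  # hypothetical heat threshold, in degrees Celsius


def read_heat_sensor() -> float:
    """Stand-in for a hardware heat sensor; returns a temperature reading."""
    return 25.0  # placeholder value for illustration


def trigger_motor_output() -> None:
    """Stand-in for whatever motor output the sensory input triggers."""
    print("motor output triggered")


def control_step() -> None:
    # The entire "decision" is a single threshold comparison on one input.
    # Nothing here can distinguish a civilian from a combatant, and a warm
    # decoy is enough to set it off.
    if read_heat_sensor() > FIRING_THRESHOLD_C:
        trigger_motor_output()

Stacking more rules on top of a loop like this only adds more thresholds; it does not add the kind of discretion that Dr. MacIntosh himself conceded robots lack.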

Dr. Sharkey also argued that the use of these weapons would accelerate and amplify the pace of battle. And while autonomous weapons can be more accurate in targeting, the problem rests with what they are choosing to target. Dr. Sharkey was clear in reminding us that these machines are weapons, ultimately in the hands of humans: while the robots themselves may not be prejudiced, as Dr. MacIntosh said, they are the tools of those who are. Further, these weapons will not remain confined to the military. Here, Dr. Sharkey gave the example of drone swarms operated by police forces using pepper spray on crowds.

After both parties had presented their views, each was given a brief opportunity to respond and defend their points. Dr. MacIntosh started by asserting that autonomous weapons should be used as threats to deter violence rather than actually deployed. To this, Dr. Sharkey responded that while we often hope weapons will never be used, this is not the reality, as they are always put to use in the execution of mass murder.

The floor was then opened up for questions from the audience, which sparked interesting responses from both speakers. Questions ranged from issues of legality and science fiction to questions of morality and implementation. By the end of the event, the audience was left with much to think about concerning the future of robotics and warfare, and the role we all will play in that future.

Throughout the debate I found myself thoroughly engaged, on the edge of my seat, biting my tongue. It was not only the debate’s topic that engaged me, but also the often heated exchanges between the speakers. When Dr. MacIntosh proposed the need for autonomous weapons as a means of intervention in what he termed dictator and third-world states, Dr. Sharkey was quick to point out the neo-colonial narrative being put forward. On another point, Dr. Sharkey called out Dr. MacIntosh as naive in his proposal that it is currently possible to create unbiased programming. At these points I found myself aligning with Dr. Sharkey, wondering to what lengths nations such as the USA might go to justify and reframe their use and creation of autonomous weapons as ethical and unbiased.

So what can we take away from this battle of wits on autonomous weapons? Are killer robots truly an inevitable future of warfare? Is there hope for a more peaceful alternative? As someone who gets chills at the prospect of an Amazon drone landing in my yard, I greatly hope for the latter. As Dr. Sharkey argued, these weapons of destruction are currently being thought of in science-fiction terms, but they are being built and used in the real world, where real consequences exist. While the notion of human lives being spared is an alluring prospect of autonomous war, we must realize that for those who build and advocate for killer robots, this fantasy is nothing more than a carrot driving their horses into battle. War without death, under the implementation of autonomous weapons, is an oxymoron at best. At worst, killer robots are a near future that will affect all of us, both on and off the battlefield.

If you are interested in learning more about the speakers, or about how you can join the campaign against killer robots, check out the links below.

CBC’s The National coverage of the event can be viewed in their video “Stopping Killer Robots Before They Get To Us First.”

Dr. Noel Sharkey’s Twitter for the campaign against killer robots can be accessed here.

Dr. Duncan MacIntosh’s profile can be read here.


* Bryn Shaffer is an MA student in the Women and Gender Studies Program at Saint Mary’s University, where she is exploring the topic of gendered robot design in both science fiction and reality.