Archive for the ‘Education’ Category

By Teresa Heffernan

The tagline at the 2017 Robot Exhibit at the Science Museum in London reads “the 500-year quest to make machines human.” The show has attracted a great deal of media attention: The Guardian’s review announced “Science Museum’s robotic delights hold a mirror to human society,” The Telegraph preview promised “A truly mind-bending array of humanoid imagery,” and the Londonist insisted “You HAVE TO SEE Robots At The Science Museum.”[i] But the reviews by the public, who paid the £15 entrance fee, have been less than enthusiastic, with lots of grumbling about all the robots that were static and inactive. This complaint on TripAdvisor is typical: “We stood in front of the robot thespian, looking like idiots as we assumed he could engage with us and asked him questions. In fact, he just jiggles about a bit and repeats random speeches. A tape recorder and an electric motor attached to a shop mannequin could easily create a similar effect.”[ii] So what accounts for this discrepancy in the reviews?


Thespian robot

The expectation that robots be lifelike and interactive has been fueled by fiction and film. Carefully curated and edited, the video clips in the media reviews, complete with soundtracks, promise an evocative spectacle: the robots seem to perform autonomously, as if they exist in a state of constant lifelike motion. But this digital magic jars with the actual experience of attending the show, which exposes all the limits of machines: expensive energy costs, bugs, programming glitches, mechanical problems, and wires. The robots suffer the same issues that plague most modern technology, leaving audiences underwhelmed.


Bleeding Jesus

The first rooms of the exhibit are dark but get progressively lighter as you move through the show, until you reach the lab-like lighting of the present, suggesting a progress from mysticism to science. This story of robots begins in the sixteenth century, with its fascination with mechanical clocks and anatomical models, continues through the industrial revolution with its factory machines, and ends with the current industry of social robots. “Marvel, Obey, Dream, Build, Imagine,” the exhibit instructs its viewers. The religious age, with its “magical” screaming Satan and bleeding Jesus automatons commissioned by the Catholic Church to inspire awe and deference amongst its followers, eventually gives way to the modern age with Kodomoroid and Pepper, social robots that vow to transform our lives and become our partners.


Model based on 1920 play

But have we really always been dreaming about “recreating ourselves as machines,” as the exhibit suggests? The term robot was first applied to artificial people in Karel Čapek’s 1920 play R.U.R. and was derived from the Slavic term robota (forced labor), which referred to peasants enslaved under the feudal system in nineteenth-century Europe; Čapek used the term to comment on the mechanization of humans under factory capitalism. Robots were born in fiction, and the anachronistic use of the term in this show raises the question of why the story begins with machines rather than with other fantastical scenes of creating life: Prometheus modeling man from clay, God creating Eve from a rib, or Pygmalion’s female statue brought to life by a wish. I am always suspicious of linear historical timelines that suggest the inevitability of the current market for robots.

Then again, narrating robots as originating in the context of priests, power, showmanship, and magic makes sense given the contemporary business of AI and robotics, with its promises of miracles. Perhaps it is better to understand robotics and AI as the direct legacy of religion, given the captains of these industries and their claims of god-like powers. When asked about the future of AI, Ray Kurzweil, Director of Engineering at Google, who heads a team developing machine intelligence and who insists we will all live forever, announced in IEEE Spectrum’s June 2017 issue: “I believe computers will match and then quickly exceed human capabilities in the areas where humans are still superior today by 2029.”[iii] I suspect visitors to the robot show will rightly remain skeptical of this claim.

[i] See: “Science Museum’s robotic delights hold a mirror to human society”; “A truly mind-bending array of humanoid imagery – Robots, Science Museum”; “You HAVE TO SEE Robots At The Science Museum.”

[ii] See reviews on TripAdvisor: “Just A Review of the Robots Exhibition.”

[iii] “Human-Level AI Is Right Around the Corner—or Hundreds of Years Away.”

Dependency diagram (Geek Sublime, p. 111)

By Ellen MacIntosh*

During his presentation at the Cyborg Futures Workshop (March 31–April 1, 2017), author Vikram Chandra called attention to an issue previously overlooked at the conference: the presence of bugs and ambiguity in computer code. Other speakers expressed concern over the potential repercussions of using inexact language when speaking about artificial intelligence, particularly in terms of society’s tendency to anthropomorphize robots. Chandra, though, spoke about the effects of ambiguity in more practical terms, drawing attention to the fact that computer code is a form of communication that is especially susceptible to indeterminate language. Drawing on his interest in both art, which he showed benefits from vagueness, and science, which works to minimize uncertainty, Chandra explored the dual nature of ambiguity by turning to Sanskrit theorists, who attempted to curtail ambiguity while still recognizing its beauty.

Vikram Chandra

Chandra spoke first of what he called a programmer’s “dream”: a complete clarity of language, allowing perfect communication between man and machine. This is, as Chandra showed, currently impossible, and failures of this dream spawn both computer errors and the message that programmers dread: “an exception has occurred.” But why should communication between man and machine be more difficult than the already challenging task of conversing with other humans? Even between people, words may have multiple meanings, and different words can mean the same thing. Context matters, as does setting. Terms can be used literally or figuratively. In essence, coding is a type of formal language, an intermediary between human language and the zeros and ones through which computers operate. The translation, though, is more susceptible to misunderstanding than human-to-human communication. As Chandra succinctly explained, computers are dumb. A sentence that is easily understood by people, such as “Mary ate the salad with spinach from California for lunch on Tuesday,” gives a computer a multitude of possible meanings. Without the ability to understand context, figurative language, and other idiosyncrasies of human language, the usual pitfalls of communication only increase when we try to speak to a machine.
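Chandra’s example sentence is a textbook case of attachment ambiguity: each trailing modifier (“with spinach,” “from California,” “for lunch,” “on Tuesday”) can attach to the verb or to an earlier phrase, and the number of possible structures grows as the Catalan numbers. A minimal sketch in Python (the sentence is from the talk; the counting formula is a standard result from computational linguistics, not something Chandra presented):

```python
from math import comb

def catalan(n):
    """Number of distinct bracketings (attachment structures) for n modifiers."""
    return comb(2 * n, n) // (n + 1)

modifiers = ["with spinach", "from California", "for lunch", "on Tuesday"]
print(catalan(len(modifiers)))  # 14 possible parse structures
```

A human reader settles on one reading without noticing the others; a parser without world knowledge must confront all fourteen.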

PANINI (India – commemorative stamp)

Ancient Indian scholars recognized the danger involved in misunderstood words. Sacred texts were considered the “code of the universe,” and misinterpretation of these codes could have dire consequences, providing incentive to clarify language. Around 500 BCE, the grammarian Panini created the world’s first generative grammar. Its rules, which were applied sequentially, acted like an algorithm. Capable of generating an infinite number of possible words, the language was precise yet flexible, formal and unchanging. It appeared perfectly unambiguous. Ambiguities still managed to exist, though. Chandra used the sentence “the sun has set” as his example. Such a simple sentence seems straightforward, but if spoken by a king to his commanding officer, it could mean that the time has come to launch an attack. If said by a girl waiting for her lover, it could insinuate that the day is done and she still awaits his return. Therefore, while a sentence’s expressed meaning might be obvious, semantics and the power of suggestion can impart uncertainty to even the clearest phrases. To counter this ambiguity, scholars developed what Chandra called a “low-level Sanskrit,” a method of writing that specified a precondition, a present state, and the exact means of proceeding from one to the other, similar to the practice of modern coding languages.
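The idea of sequentially applied rules can be sketched in a few lines of code. The rules below are invented stand-ins, not Panini’s actual grammar; they only illustrate how an ordered rule list, applied like an algorithm, derives a surface form from an abstract one:

```python
def derive(form, rules):
    """Apply an ordered list of rewrite rules in sequence, one after another."""
    for pattern, replacement in rules:
        form = form.replace(pattern, replacement)
    return form

# Hypothetical rules deriving a verb form (illustration only).
rules = [
    ("STEM", "gam"),         # step 1: supply the verbal stem
    ("gam+ti", "gacchati"),  # step 2: fuse stem and ending (a sandhi-like step)
]
print(derive("STEM+ti", rules))  # gacchati
```

Because the rules fire in a fixed order, the same input always yields the same output, which is exactly the determinism Panini’s system was reaching for.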

While Sanskrit speakers tried to clarify their language, they also recognized beauty in ambiguity through the aforementioned “suggested meaning.” The writing produced through their precise language was exact but dull, prompting efforts to identify what made poetry beautiful. Some Indian scholars suggested it was the incorporation of figurative language, while others believed beauty lay in the style or diction of the verse. Finally, the theorist Anandavardhana argued that neither denotative nor connotative meaning could account for all the things expressed in a poem. He called the final source of meaning that poets used “suggestion,” and proposed that poetry is made beautiful by the things it does not say. Something as simple as a name can be imbued with enough symbolism and connotation to carry multiple meanings at once, which is impossible when words are used only literally or metaphorically.

Chandra’s talk provided wonderful insight into ambiguity in communication. I appreciated his look at the beauties of ambiguity, since ambiguity is usually something feared in our culture. From coders who want a computer to respond perfectly to their commands, to scientists who meticulously record their experiments to ensure reproducibility, the unknown induces anxiety. Even in Western art, ambiguity often incites unease. A course I took on Romanticism focused heavily on the fear these poets experienced as one of the first groups of writers whose work, thanks to new technology and media forms, would spread beyond their ken or control once released into the world. Their anxiety seems to have centered mainly on an inability to ensure that their work was interpreted as intended: ambiguity could strike, giving phrases unintended meaning, or perhaps turning their words foolish, trite, or offensive. It is always useful to be reminded of the benefits of something one typically tries to avoid, in this case ambiguity, and I appreciated the unique insight that Chandra provided.


*Ellen MacIntosh is a student at Saint Mary’s University, Halifax. Her blog post is based on an essay she wrote for English 4556 (Honours Seminar): Animal Life, Social Robots, and Cyborg Futures, taught by Dr. Teresa Heffernan. Ellen was among the students who helped make the Cyborg Futures Workshop a successful event.


 

As Teresa Heffernan noted here last week, at the Silicon Valley-based Singularity University, “faith in technology and profit are unwavering, while the world’s problems are understood as great ‘market’ opportunities.” In March 2013, journalist Eric Benson took a weeklong $12,000 course in the Executive Program at the university. Not only was he instructed in the “nearly limitless potential of artificial intelligence, robotics, nanotechnology, and bioinformatics,” but he also learned that science fiction, religion, and the drive to get rich motivate the university’s teachers and students. Benson’s insightful analysis of his experience, “Sci-Fi, Religion, And Silicon Valley’s Quest For Higher Learning At Singularity University,” is posted at BuzzFeed.

Eric Benson/BuzzFeed
