Artificial Intelligence, Avatars, and the Future

Amanda Bastone
April 4, 2017

Most people first heard the word “avatar” from James Cameron’s Avatar, one of the top-grossing films of all time. In virtual reality or video games, an avatar can be an extension of the self, one capable of saving the world. In Hinduism, avatars are incarnations of deities or immortals; the Hindu god Vishnu, for example, has many avatars, including the Buddha.

Helping to sort out the avatar conundrum and the fascinating field of artificial intelligence was a Brainwave series program at the Rubin Museum of Art in NYC last Wednesday night. The program, “A.I. and Avatar: The New Explorers,” began with a head-spinning question: “Can machines and other avatars expand the human experience—and perhaps even take our minds to the stars?”

Caleb Scharf. (Photo courtesy of the Rubin Museum of Art)

Caleb Scharf of Y-House, a NYC-based non-profit that explores the nature of awareness and consciousness, introduced the organization’s engineer and roboticist, Hod Lipson, who opened with a highly informative presentation on the history of artificial intelligence (in particular, machine learning) and robotics. Lipson, author of Driverless: Intelligent Cars and the Road Ahead, told the audience that the first and oldest paradigm is “building a program by writing rules and instructions using logic to solve a problem,” and the second is the very recent idea of machine learning, in which a machine uses perception to sense the world around it.

Hod Lipson. (Photo courtesy of the Rubin Museum of Art)

Lipson used the example of how it was once impossible to program a driverless car to stay on the road. The car also could not reliably distinguish a fire hydrant from a child, or comprehend that a bridge over a road is not an obstruction in the road, tasks a human brain processes easily. He said that in the last few years, new machine-learning algorithms have made driverless cars safer by taking in vast numbers of images and classifying them (e.g., as a cat or a dog, a car or a bus) more accurately than a human, with better than a 95 percent success rate.

He explained that the new software uses deep learning, a type of machine learning in which data is processed non-linearly through increasingly multilayered “neural networks” that simulate the neocortex’s large array of neurons (an approach also known as cognitive computing). Cloud computing has added another layer of sophistication to deep learning, allowing “machines to train machines” and a network of cumulative knowledge to be shared.
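The layered, non-linear processing Lipson described can be sketched in a few lines of code: input features pass through stacked layers, each applying a linear map followed by a non-linearity, ending in class probabilities. This is a minimal illustrative sketch, not anything presented at the program; all layer sizes, weights, and the cat-versus-dog labels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation: without it, stacked layers would
    # collapse into a single linear transformation.
    return np.maximum(0.0, x)

def softmax(x):
    # Turn raw scores into class probabilities (e.g., cat vs. dog).
    e = np.exp(x - x.max())
    return e / e.sum()

# Two hidden layers stand in for the "increasingly multilayered"
# structure; real networks are far larger and are trained on data.
W1, b1 = rng.standard_normal((16, 8)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 16)), np.zeros(16)
W3, b3 = rng.standard_normal((2, 16)), np.zeros(2)

def classify(features):
    h1 = relu(W1 @ features + b1)   # layer 1
    h2 = relu(W2 @ h1 + b2)         # layer 2
    return softmax(W3 @ h2 + b3)    # probabilities for ["cat", "dog"]

image_features = rng.standard_normal(8)  # stand-in for pixel data
probs = classify(image_features)
```

Training adjusts the weight matrices so that, over millions of labeled images, the network’s predicted probabilities match the true labels; the forward pass above is all that runs when a trained system classifies a new image.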

Susan Schneider. (Photo courtesy of the Rubin Museum of Art)

In the second part of the program, Lipson discussed A.I.’s future implications with philosopher Susan Schneider and astrophysicist Edwin L. Turner. Schneider, who teaches philosophy at the University of Connecticut and Princeton University, asked the panel whether understanding can be equated with consciousness. Turner, professor of astrophysical sciences at Princeton University, said he believed that determining whether a machine chooses its goals autonomously could be one way for humans to measure machine intelligence. Lipson gave the example of a robot in his lab that, despite a lack of vision, was programmed to become aware of its own physical form in space so that it could learn how to move (to learn more about Lipson’s work building self-aware robots, watch his TED talk). All three agreed that research needs to determine whether machines can understand emotions, goals, desires, and their own existence on some level. Schneider added that if and when machines learn to simulate emotions, as robots used for elder care in Japan already do, we must determine whether it is mimicry or evidence of a deeper understanding.

Turner turned the conversation to the question of whether A.I. machines will be a “beautiful extension of us” (an avatar) or a powerful technology that threatens humanity. Machines may someday choose to hurt us, but, Lipson argued, A.I.’s greatest threat is censorship resulting from data monopoly.

During a Q&A, Lipson responded to one query by arguing that finding truth is different from determining intelligence. Turner seconded his colleague’s response, saying that “flashes of insight are not consciousness.” A.I.s may recognize images better than people, act in ways we don’t understand, or learn in ways we cannot comprehend, but they are conscious only if they can explain “how” and “why.” Interestingly, Lipson pointed out that A.I. machines without human-directed goals can self-replicate unconsciously. Turner stated that many organisms with little to no intelligence do the same, driven by a basic biological need for permanence.

If machines think, learn, and feel differently than we do, will they teach us how to function more creatively, or will they be an alien existence we cannot fully grasp? As we dissect and try to understand how our own brains function, we will also need to comprehend the new artificially intelligent “brain.”

This year marks the tenth anniversary of the Rubin Museum’s Brainwave series, which pairs popular personalities with neuroscientists for themed discussions.