Comments on Blade Runner

Delivered (with variations) by Lee Spector at a "Science on Screen" event at Amherst Cinema, in Amherst, Massachusetts on April 21, 2015

It's wonderful to be here. Thank you Carol, and everyone here at Amherst Cinema, for inviting me and also for hosting "Science on Screen" in general, and for all of the other wonderful programming that you bring to our community.

For this short presentation before the film I have some prepared remarks. But after the film there will be a Q&A session and we can have a more interactive discussion then.

I am very happy to share with you the experience of watching Blade Runner, and to share a few thoughts on some of the scientific and philosophical dimensions of the film's story, and of the world in which the story takes place.

The world of Blade Runner is this world, Earth, projected a bit into the future. Actually it's only 4 years in the future now, although it was 37 years in the future when the film was first released, and about 50 years in the future when the book that inspired it was written by Philip K. Dick.

I would say that it actually holds up pretty well as we approach the projected time frame of the story. In some ways, of course, it's way off -- there are flying cars, and computer screens look a lot like they did in the 80s -- but with respect to the scientific and philosophical issues raised by the film, I think it's surprisingly on the mark.

What does it mean to be human? Blade Runner asks this by putting us in a world with artificial humans, and to some extent, by giving us an inkling of what it might feel like to be an artificial human. We witness their existential crises along with their practical, physical challenges as they grapple with mortality.

One of the things that has helped Blade Runner to stand the test of time is that it doesn't oversimplify the distinction between the natural and the artificial. It is actually pretty subtle, particularly for a sci-fi thriller, and it leaves several key questions intentionally unresolved.

In fact, Blade Runner's ambiguity is so pronounced that different versions of the film -- the original release in 1982, the TV version in 1986, the so-called Director's Cut in 1992, and the 2007 Final Cut, which we'll be seeing this evening -- actually imply different things about the humanity of some of the central characters.

The situation is even more complex in Philip K. Dick's book that inspired the film, in which there are also damaged humans who are treated as sub-human, along with a god-like figure at the center of the world's primary religion, who might be human but it's not clear. Maybe he's a robot, or biologically engineered, or an alien. And the human characters -- at least the ones you think are human -- sometimes treat themselves like mechanical systems with electronic controls, manipulating their own mental states with "mood organs" that change their brain chemistry.

So in the world of Blade Runner the lines between human and nonhuman, biological and mechanical, are blurry, and this invites us to ask fundamental questions about what lines are real and what lines aren't.

I've sometimes shown Blade Runner to the students in a class that I teach at Hampshire College, called Cognitive Science Fiction. The course covers topics in cognitive science, including artificial intelligence, introducing the topics with science fiction and then investigating them more thoroughly with readings from the scientific and philosophical literature. One of the questions that we ask after watching Blade Runner is the question of whether there really will be artificial people of the sort that the film projects.

Mostly, the answer seems to be yes.

First, there's a sense in which we are all artificial people. Every organism is made by its parents and its community and its culture. Humans are made by humans. You may want to object here that that's different, that although we regularly make humans we don't design them and we generally have no idea what we're doing when we make them.

True enough, and I'll come back to a more external sense of "making" in a moment, but first I want to point out that even when it comes to biological reproduction we humans have long been intervening in the process quite consciously, and increasingly technologically. From mate selection to family planning, to new reproductive technologies and probably soon genetic engineering, we manipulate the reproductive process and the future of our species.

We also use technologies to extend ourselves, and these become essential components of the species that we have become and will become in the future -- technologies ranging from spoken language, writing, and culture to eyeglasses, pacemakers, knee replacements, intelligent hearing aids, cochlear implants, smartphones, and Google Glass. This further blurs the distinctions between natural and artificial, biological and mechanical.

Let's go back, though, to the question of whether there will be artificial people that are built by us in the way that we currently build industrial robots. Mostly, the answer here also seems to be yes, at least eventually. It seems likely that we will, maybe soon, build technologies that match or exceed human capabilities in all areas, physical and mental.

It was long thought that we couldn't build such things until we thoroughly understood how humans work, for example that we couldn't build systems that think strategically or creatively until we understood how humans do these things. But now it seems like it will probably be good enough just to understand how learning and evolution work. If we can build a system that learns and evolves then it may come to have capabilities that we wouldn't know how to build directly.

Machine learning -- that is, the science of building computer systems that learn from experience -- has recently taken great strides, and each of you has probably benefited from this, several times each day in recent years, whether you knew that machine learning was involved or not.

If you've searched the web, then you've benefited from machine learning. If you've used speech recognition, or saved time by not wading through email spam, or watched a movie recommended by Netflix, or used an online dating site, or even just used a credit card, then you've benefited from machine learning.

Many areas of science and engineering, ranging from genomics to climate science, are using machine learning, and some are making breakthroughs that wouldn't be possible without it.

Most machine learning systems work primarily by collecting a lot of data and by recognizing regularities in the data. This allows them to make predictions, and to improve their predictions over time. Machine learning is a powerful tool, and it is making our technology smarter and more helpful.
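To make that concrete, here is a deliberately tiny sketch -- not any real product's system -- of prediction from data: the program collects past (input, outcome) observations and predicts the outcome for a new input by averaging the outcomes of the most similar past cases.

```python
# A minimal, illustrative learner (all data here is made up):
# predict an outcome for a new input by averaging the outcomes
# of the k most similar past observations.

def predict(history, x, k=3):
    """history is a list of (input, outcome) pairs; x is a new input."""
    nearest = sorted(history, key=lambda pair: abs(pair[0] - x))[:k]
    return sum(outcome for _, outcome in nearest) / len(nearest)

# As the system collects more data, its predictions tend to improve.
history = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 10.1)]
prediction = predict(history, 2.5)
```

Real machine learning systems are vastly more sophisticated than this, but the core loop is the same: gather data, find regularities, predict.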

But we were talking here about artificial people. Being able to learn and to make good predictions is handy, but surely there's a lot more to being a person than that. Yes, and this is where evolution comes into play.

Imagine, for a moment, that we build a robot in roughly human form that isn't very human-like in most ways, but can perform some basic tasks like walking and following instructions, and, crucially, using a screwdriver and a soldering iron. We'll also build in some basic machine learning capabilities.

For example, if the robot over-tightens a screw and strips its threads, it'll use less force the next time it tightens a similar screw. It can use machine learning to notice patterns, and to change its actions based on those patterns if it predicts that a different action will produce a better outcome.
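The screw-tightening example can be sketched in a few lines. Everything here -- the outcome labels, the numbers, the update rule -- is an invented illustration of learning from outcomes, not a description of any actual robot:

```python
# Hypothetical sketch: the robot lowers its torque after stripping a
# thread, and nudges it back up if a screw comes out too loose.

def adjust_torque(torque, outcome, step=0.1):
    if outcome == "stripped":      # too much force last time
        return torque * (1 - step)
    if outcome == "loose":         # too little force last time
        return torque * (1 + step)
    return torque                  # outcome was fine; keep the setting

torque = 1.0
for outcome in ["stripped", "stripped", "ok", "loose"]:
    torque = adjust_torque(torque, outcome)
# torque has been adjusted down twice and up once: 1.0 * 0.9 * 0.9 * 1.1
```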

Let's give the robot a list of parts, some money, and instructions for building a copy of itself (which, critically, will include a copy of the instructions), and send it off in the direction of an electronics supply store. It will buy the parts and build a bunch of copies, but they won't be exact copies -- the instructions will include choice-points, for which it will toss a coin or roll some dice, and also sometimes the robot will probably just make mistakes. If we build and program our robot reasonably well, then it will make many new robots, and at least some of them will set off on their own to buy more parts and build more robots.

Each robot will behave a little differently. Some will be faster and some slower. Some will be more or less likely to get hit by a car when running back and forth between the parts store and the workshop.

Some may figure out time-saving construction shortcuts. Some may figure out how to perform tricks for money in the park along the way to the parts store, earning money to buy more parts. Some may figure out how to salvage parts from broken-down cars or discarded cellphones.

The more successful robots will pass their own variants of the robot-making instructions to more new robots. Eventually there will be very clever and resourceful and creative robots living among us.
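The robot-copying story is, at its heart, an evolutionary algorithm, and we can run a toy version of it in software. This is only a sketch under made-up assumptions: each "robot" is just a list of numeric settings, the fitness measure stands in for "success in the world," and all the rates are arbitrary.

```python
# A toy evolutionary loop: imperfect self-copying plus selection.
import random

TARGET = 0.7  # arbitrary stand-in for "doing well in the world"

def fitness(settings):
    # Higher is better: reward settings near the target.
    return -sum((s - TARGET) ** 2 for s in settings)

def copy_with_errors(settings, error_rate=0.2):
    # Imperfect copying: most settings copy exactly, but some come out
    # a little different -- the choice-points and mistakes in the story.
    return [s + random.gauss(0, 0.1) if random.random() < error_rate else s
            for s in settings]

random.seed(0)  # for repeatability
population = [[random.random() for _ in range(5)] for _ in range(20)]
initial_best = max(fitness(s) for s in population)

for generation in range(50):
    # The more successful robots pass their instructions to more new robots.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = [copy_with_errors(random.choice(survivors))
                  for _ in range(20)]

final_best = max(fitness(s) for s in population)
```

Nothing in the loop says how to reach the target; the population just drifts toward it because better copiers leave more descendants. That is the whole trick.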

If we were able to do this, and if we actually did do this, then what do you think would be going on in the heads, and in the minds, of the robots? They would apparently be thinking, and they would certainly be acting in complex ways that we, as the original designers and programmers of the first robot, may not know or even be able to understand. They may well appear to us to have minds of their own, and it would take a strong argument -- one that philosophers have been hard pressed to develop -- to convince some observers that these robots do not actually have minds of their own.

Ideas about self-reproducing machines go back hundreds of years, but around the middle of the 20th century they began to get a lot more attention, and with 21st-century computer technology they have begun to advance much more quickly. Several research groups around the world are actively working on these ideas.

In my own work I have been focusing on the evolutionary part of this story, but with virtual rather than physical robots. Along with using evolution to produce virtual robots, I -- and many others -- have been using evolution, in computers, to solve practical scientific and engineering problems, evolving mathematical formulae, designs for antennas and wind farms, and useful computer programs.

There is a whole field dedicated to this kind of work, called Evolutionary Computation. There is also an annual prize, called the Humies, for so-called "human competitive" results of evolutionary computation. The idea is to reward work in which computational evolution beats humans at their own game, for example by inventing something that earns a patent.

We in the field are generally very pleased when our systems are "human competitive," but in the context of Blade Runner we should pause... and notice that we are creating systems that actually do compete with us. And already, they sometimes win.

Because of the recent advances in machine learning and evolution, it no longer seems farfetched to many of us in the field that we will soon build systems that cross a threshold of self-improvement and outstrip humans in every conceivable way. We might say, using the motto of the replicant manufacturer in the film, that these systems will be "more human than human."

Several prominent figures, including Stephen Hawking, Elon Musk, and Bill Gates, have recently been warning that the consequences of these developments may be dire for humanity as we know it. They may be. Much is still unknown about how the technologies will develop, and about how the various intelligences in a post-AI world will interact with one another. But this is a set of issues that we cannot afford to ignore.

These technologies may help us to cure diseases, invent clean energy sources, and solve some of the other critical problems that we face... They may also eat us for lunch.

So, as you watch the film this evening you may also want to think about the line (or lack of line) between natural and artificial, and about the ways in which complex systems, whether natural or artificial (if there's a difference) grow and learn and evolve.

I'll be hanging around to answer questions and have a discussion about these issues after the show.