An AI Researcher’s Reaction to Spielberg’s Film “A.I.: Artificial Intelligence”
July 6, 2001
Some films, dealing with intellectual, technological and philosophical issues, have a raw power extending beyond the world of the mind into the realm of emotion. They touch us, disturb us and tantalize us, showing us how the products of technology and advanced cerebration tie in with the deep-down fears, lusts and rages that make us the human animals we are. And some themes naturally lend themselves to this kind of powerful intellectual/emotional/technological film-making. AI is one of these themes: it has given us two undoubted cinema classics. Kubrick’s 2001: A Space Odyssey, based on the Arthur C. Clarke novel, gave us HAL 9000, the psychopathic yet sympathetic spaceship-controlling software program that will forever define the dangers of AI. And Blade Runner, based on Philip K. Dick’s book Do Androids Dream of Electric Sheep?, gave us a chilling yet in every way convincing portrayal of humanoid robots, thinking and feeling and living their own lives, like us but not quite like us – a peek into android reality, and a cautionary tale about the potential future sociology of a world populated by both biological and nonbiological intelligence.
When I heard that Steven Spielberg had created a new film about AI, based on some earlier notes and plans by Stanley Kubrick, these great AI films of the past naturally came to mind. 2001 and Blade Runner share a revelry in darkness that is alien to Steven Spielberg’s vision. But, within his world where goodness ultimately reigns, Spielberg has made some equally enduring images. E.T. will remain the iconic image of first contact for a long time. Walking into the theater to see A.I.: Artificial Intelligence, I was curious what his warm and friendly vision would bring to the realm of AI. As an AI researcher who has devoted his life to the quest to create a truly intelligent software program, and who believes that thinking machines will ultimately be a great thing for everyone, I was enthusiastic about the idea of a Hollywood film giving AI a positive public image. HAL 9000 and Blade Runner’s Phil-Dickian androids were memorable and wonderful, but why not a warm, cuddly AI, finally? After all, in my view, AIs are at least as likely to be friendly and helpful as to be sinister and menacing; and I think psychotic AIs will be far rarer than psychotic humans.
I left the theater with a rather sick feeling in my stomach. After a few minutes had passed, however, my mind cleared and I was able to adopt a more rational assessment of the film. The last ten or fifteen minutes of the movie are simply unforgivably, unendurably bad. Even my two sons, aged eleven and eight, realized this. One of them had this summary comment to make as we stepped into the car: “The movie was OK, there were some good parts, but the end was really, really stupid.” Apart from the ending, however, the film has a lot to recommend it, and the conceptual vision underlying (and imperfectly realized in) the film has even more going for it.
As a movie, A.I. reminds me somehow of “Star Wars Episode I: The Phantom Menace.” Like George Lucas’s 1999 film, A.I. fails egregiously to live up to the massive hype that greeted its release. But on the other hand, also as with The Phantom Menace, once one gets over the initial disappointment, one has to admit that the film is a fairly interesting one. A.I. is not a great movie – the characters only infrequently move you, the plot quality is wildly uneven, and the futuristic vision expressed is badly flawed. On the other hand, it brings to light important issues that are rarely treated in the popular media, and the best parts of the film are really quite good -- simultaneously intellectually stimulating, emotionally jarring, and fun. With A.I. as with The Phantom Menace, one has the feeling of seeing the work of a brilliant visionary who is too famous for his own good. Apparently, in the making of these films, no one had the guts to point out to Lucas or Spielberg the obvious flaws in their realizations of their visions.
The film, throughout, can’t decide whether it wants to be a science fiction story or a fairy tale. This isn’t necessarily problematic – it’s possible to fuse the genres well – but in this case it often is problematic. As soon as one starts to take the film’s technological vision seriously, one is hit with something absurdly fairy-tale-like. Half the time AI is treated as a routine technological achievement, and the other half, it’s portrayed as a fabulous thing like Pinocchio coming to life. This ambiguity catches something of the wonder of AI – the idea of a machine that thinks and feels really does reek of magic, which is why AI is more fun than accounting software or mechanical arms that assemble auto parts. As Arthur C. Clarke said, “Any sufficiently advanced technology is indistinguishable from magic.” And yet, far too often throughout the film, the treatment of high technology as magic just seems clunky and contrived – as if the fairy-tale theme is being used to cover up an inadequate attention to common sense in the elaboration of AI and other future technologies.
The story is set 2000 years in the future, and the premise is that mechas, android robots, outnumber people and carry out a variety of roles from menial labor to prostitution – but they have no feelings. They are intelligent yet have no conception of emotion at all. And then a maverick scientist figures out how to endow a mecha with feelings. He creates one prototype, David, an eight-year-old boy – the world’s first robot who feels.
Right here, as an AI scientist, my unreality sensors kick in. I doubt very much there will ever be an easy way to make a program speak, act and perceive as adeptly as the androids in this film, without endowing it with some kind of feelings. In the human mind, feeling, thinking, perceiving and acting are all bound up together, and it seems by far the most likely case that this same interwovenness will be there in artificial intelligences as well. But this is a nitpick compared to other blatant scientific absurdities and aesthetic incongruities in the film – let’s move on.
David, fairly endearingly and convincingly portrayed, goes home with the family of one of the scientists on the project. But trouble starts almost immediately. He tries to copy their “real son” (his “brother”) by eating food, and corrupts his digital insides that way. Yeah, right. If any real robot designer were building an android with the property that its insides were easily damageable by eating, then he would simply close off the useless passage from the mouth into the robot’s interior. Then, a little later, David is afraid of some neighborhood boys who threaten to cut him with a knife, so he grabs his brother and begs for help, then falls into the swimming pool still clutching his brother, who nearly drowns. “Yeah, right” again. With all the sophistication of his nearly perfectly human body, it wasn’t possible to program into his brain the knowledge that humans will die if held underwater, and that killing humans is a bad thing? HAL, in Kubrick’s 2001, went crazy and killed people; and the Blade Runner androids are willing to kill to save their own lives … but David isn’t crazy, he’s just stupid and mildly delusional, and unrealistically so given the posited sophistication of the engineers who created him.
But these are just incidental points, really. More serious than the various detail-level shortcomings of the science-fictional world are the conceptual flaws at the heart of the plot. For example, the whole plot turns on the idea that, after David is “imprinted” with his mother’s voice and face, he can never be reprogrammed or de-imprinted; he can only be destroyed if his progress is somehow unsatisfactory. But of course, no one would ever do something as foolish as this with the first working version of their new machine. Again, it’s a fairy tale idea – on a par with Cinderella’s coach turning into a pumpkin at midnight -- awkwardly wrapped around a high technology theme.
Anyway, following the plot along -- once he stupidly nearly drowns his brother, David is at risk of being put to death, since reprogramming him to live with some other family is defined as being impossible by design. So his mother, who perceives him as a real boy, not just an android piece of hardware, releases him into the woods to fend for himself, with a heartfelt and tearful farewell. Here we have a moment of some real poignancy. Mom feels David is alive, human, feeling, with a right to live and be free like any other human being. The others in the family basically feel like he’s a piece of machinery. This is a real dilemma that we’ll face in the future, probably well before we have AIs with perfect android bodies like David’s. Some people will feel a certain AI program is conscious, others will not. I may become incensed that you propose to delete your own AI program, because I feel you’re killing a living being. Since AI programs are just bits and bytes, and robots are just metal, I doubt these kinds of disputes will ever reach the same emotional pitch as arguments over abortion or genetic engineering. But they will surely be problematic nonetheless.
Free and on his own, David takes up with some renegade mechas on the loose. Here we have the most interesting part of the movie: the lives of free-ranging wild robots, on the run from bounty hunters. The special effects don’t disappoint, and neither does the portrayal of the social interactions of intelligent but unfeeling humanoid creatures. Although I doubt very much that truly unfeeling human-level intelligences will ever exist – I think 2001 and Blade Runner were more on target here, with their AIs possessing nonhuman but still vivid feelings -- it is indeed fairly likely that AIs will be less ruled by their emotions than humans are. And this aspect of AIs has the potential to shed some light on our position as humans – even now, pre-real-AI.
It’s interesting to think about how our lives would go if we had no emotions, only instincts and intellectual minds. As we watch a mecha flee capture and destruction, we’re forced to reflect that it doesn’t really care if it’s captured and killed -- it’s just running away because it’s programmed to do so. But then, in a sense, we all largely do what we’re programmed to do, don’t we? We can acquire new programming through experience, but one way or another, the instinct to run away from attackers is wired just as deeply and rigidly within us as within a robot. So, one is pushed to wonder, is a robot’s instinct for survival really any less real than ours? What are feelings anyway? Can you really have a survival instinct without feelings? What would that mean? Here, with the mechas, the film is almost as thought-provoking as it wants to be. In spite of their lack of emotion, the mechas are more engaging than any of the humans in the film, who are mostly cardboard-cut-out stereotypical characters devoid of real personality.
When David is captured along with other renegade bots, and threatened with a public death in a robot-bashing stadium show, he cries rather than passively accepting his fate like the other machines. One good reason for this is that he, unlike some other models, can’t turn off his pain receptors. But when the humans in the audience see him crying, they violently insist that the host of the robot-bashing show not kill him. Issues of future ethics loom. Is it conscionable to put to death intelligent beings just because they’re not human? Is it OK to kill an intelligent unfeeling being? How can we tell if a being is truly unfeeling, or if, rather, it just has feelings that we can’t understand?
After he’s freed from the stadium show by the sympathetic audience, David and his new friend the mecha gigolo – hilariously portrayed – take off once again and roam the forest. David confides his ideas about the Blue Fairy – the creature in the Pinocchio story who turns the wooden puppet into a real boy. Somehow, after his mother read him Pinocchio, he came to believe that the Blue Fairy was real, and would turn him into a real boy, if only he could find it. This delusion on the part of the emotional young android feels wrong to me – yes, robots will have delusions … but really, a delusion this large and this firm in the mind of so cautious and rational a young android? To find the Blue Fairy of his schizophrenic fantasy, David is led to Dr. Know, a kind of search engine with a holographic avatar interface. Dr. Know directs him to Manhattan, a historical relic largely abandoned after severe floods.
The Manhattan skyscape looks fabulous half-submerged in water, and the best scenes in the movie occur here, when David enters a room and meets another David – a precise clone of him, another android produced in the same series. In a 2001-worthy moment, David kills the other David. He then meets his creator, a robot engineer, and finds out that there is no Blue Fairy after all. Dr. Know was programmed to respond to his Blue Fairy question by leading him to Manhattan, the location of the engineering team that built him. David sees thousands of clones of himself, standing in rows, ready for release to human customers throughout the world. He realizes that he is just a prototype of a new line of low-cost child substitutes. Now, thanks to the miracle of modern emotional engineering, people who can’t afford a real child can buy a robot child – one who will love them like a real child, but costs less to maintain.
Unable to handle the psychological pressure, David leaps from the building to the ocean below, and apparently to his death. Hurrah, I say – not that I have anything against David, but this tragic end makes sense, emotionally and intellectually. Obviously, the film should have ended here, at this profound existential moment. At this moment the flawed movie touches greatness. The young android doesn’t want to be one of a series of clones, he wants to be unique, to be his own being. And yet, really, how different is his situation from the one we humans will be in a century or two from now, when human cloning is commonplace? Or from the one an identical twin is in today? Why is the presence of a thousand clones so threatening to his identity – after all, for intelligent robots as for humans, both heredity and environment contribute to personality? Deep philosophical questions meet action and drama, as in the best science fiction films. One identifies with the young robot more here, in his death, than at any point during his life. He was not born human, and this is beyond his control, but he can make himself die human. The machine who wanted to be a human, and sought humanity through bizarre delusions similar to those of a mentally ill human, finally achieves true humanity in his death.
But, bizarrely and absurdly, the film doesn’t end here. It is at this point that it becomes very clear: this is a Spielberg film, and Kubrick’s vision had very little to do with it. As Eyes Wide Shut showed, Kubrick was capable of making mediocre cinema – but he never came near this level of banality. Unbelievably, instead of leaving an ambiguous ending with the young robot plunging into the sea in the throes of digital existential angst, Spielberg returns us to fairy-tale-land. The young robot plunges into the sea, where after some derring-do he sinks to the bottom and finds – what else – a statue of the Blue Fairy. He stares at it for a few thousand years, and is dredged up by some superhuman androids from a posthuman world, who make him happy by fulfilling his fantasy of returning to his mother in an implausible, sickly sweet and, for me, profoundly unsatisfying way. The details of the end hardly matter, the point is that they make no sense and have nothing to do with the preceding part of the film. What they are is a desperate attempt to tack on a happy ending at all costs. This is not the right way to synthesize Spielbergian optimism with Kubrickian darkness – it’s just plain pathetic.
The sad thing is, there was a great film in here somewhere. The elements are all present. If the human characters had been endowed with a little more substance, if the fairy tale elements had been made a little bit subtler rather than being allowed to dominate the science fiction elements, if the sadness of the story had been allowed to sink in rather than being immediately smothered in senseless Hollywood sweetness…. As it is, we’ll have to wait a little longer for the powerful, positive cinematic vision of AI that we really deserve. Someone should make an AI film that packs the wallop of 2001 or Blade Runner, but comes out on the side of the angels, arguing through drama and character that AIs, though they won’t be human, will have their own valuable thoughts and feelings, and may well enrich the universe and the human race. Maybe someone will make this film one day, but it won’t be Stanley Kubrick since he’s dead, and apparently it won’t be Steven Spielberg either.