Truth Is Stranger Than [Science] Fiction

A Review of David Stork’s HAL’s Legacy: 2001’s Computer as Dream and Reality

 

In a tradition that provides a welcome relief from the petty infighting that characterizes much of academic discourse, younger professors sometimes honor their elders (imagine that!), on retirement or on the occasion of an advanced birthday, by contributing to a collection of essays. Generally, the collection deals with aspects of the elder’s work, explaining why it was important, building on it, extending it, connecting it to other fields or areas of inquiry.  The new book edited by David Stork, HAL’s Legacy: 2001’s Computer as Dream and Reality, is such a collection of essays, but the person whose work is honored by the collection is not an academic but rather the science fiction writer Arthur C. Clarke, now in his seventies and living in Sri Lanka. And the birthday being celebrated is not Clarke’s but rather that of a fictional computer—HAL, the onboard computer of the Discovery mission in the screenplay on which Clarke collaborated with the director Stanley Kubrick for the 1968 film 2001: A Space Odyssey. The screenplay, later made into a novel with the same title, was based on Clarke's earlier short story “The Sentinel.” 

In the novel we are told that HAL sprang into life on January 12, 1997. Stork, chief scientist at the Ricoh California Research Center, where he does research on pattern recognition by computers, and visiting professor of psychology at Stanford University, assembled a team of luminaries in computer research—experts in artificial intelligence, supercomputer design, computational linguistics, computer chess, philosophy of mind, computer speech, interface design, and other topics—and asked them to assess how far we have come, since Clarke’s movie and book, toward making Clarke’s vision of an intelligent, emotional, chatty, lip-reading, chess-playing, and finally murderous supercomputer a reality. Stork’s book contains fascinating essays by and interviews with Murray Campbell, Daniel Dennett, Ravishankar Iyer, David Kuck, Raymond Kurzweil, Douglas Lenat, Marvin Minsky, Donald Norman, Joseph Olive, Rosalind Picard, Azriel Rosenfeld, Roger Schank, David Wilkins, and Stephen Wolfram, as well as contributions by Stork and by Arthur C. Clarke himself. 

It is not surprising that Clarke should be so honored. There are two kinds of science fiction writer. One kind, who might more properly be called a writer of science fantasy, creates wildly improbable stories that break all the rules known to science today. An example of a writer of this kind is Ray Bradbury, who in one story writes of a person who becomes so obsessed by the fact that he has an internal skeleton that he finally has the monstrous thing removed. As Stork points out in his introductory essay, most screen science fiction is of this kind. Ships well lighted from all angles scream through space with stars zipping by them. They blast other ships with laser beams. The other ships blow up. Smoke billows, and the debris falls toward the bottom of the screen. Of course, as Stork points out, none of this would actually happen in space. Stars are too far apart to appear to move relative to a ship. A laser beam would not be visible in a vacuum, where there is nothing for the beam to bounce off. There is no atmosphere for smoke to billow in and no gravity to pull debris downward. 

In contrast, Clarke is the sort of science fiction writer who tries to extrapolate from real science and to shed light on what might actually be possible in the future, and he can claim credit for one truly important scientific achievement: it was he who conceived of the idea of geosynchronous satellites—ones that orbit at such a speed that they remain in the same position relative to the earth and so make modern global telecommunications possible. Because Clarke was careful to ground his fiction in real science, looking back on the predictions in the book and movie is instructive. We can learn a lot about the validity of scientific prognostication by seeing where Clarke got it right and where he got it wrong. 

How Close Are Today’s Computers to HAL?

Obviously, we do not have, today, anything like the computer that Clarke envisioned. The artificial intelligence projects heralded with such optimism by Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky, and others in the 1950s proved vastly more difficult than anyone at the time imagined. Teaching computers to do difficult tasks such as making medical diagnoses or predicting where one might find oil proved to be much easier than teaching them to do so-called simple tasks such as recognizing faces or communicating in English or Japanese. This is because these “simple tasks” are actually astonishingly complicated. They seem simple to us only because they are carried out automatically, below the level of consciousness, by the society of incredibly complex minicomputers in our brains. Still, as the essays in Stork’s book point out, we have made significant progress toward creating computers with the astonishing characteristics of Clarke’s HAL. 

In “Could We Build HAL? Supercomputer Design,” David Kuck explains that “To be as large and powerful as he is described, HAL would have to be a parallel system,” and today we have, in fact, built large, massively parallel computers that actually have greater memory storage capacity than Clarke predicted: 

    One concrete number given in the novel describes the memory unit Bowman pulls out as a “marvelously complex three-dimensional network, which could lie comfortably in a man’s hand yet contained millions of elements.” . . . This was large for the time but inadequate for any laptop today. The very conservative nature of Clarke’s prediction is underscored by today’s commodity memory technology; even a modern 8-MB PC memory contains several hundred million transistors. (36-37)

In 1997, the IBM supercomputer Deep Blue defeated the world chess champion Garry Kasparov, thus becoming the world’s foremost player of the game. In “An Enjoyable Game: How HAL Plays Chess,” Murray S. Campbell, an IBM researcher and a member of the team that developed Deep Blue, analyzes the brief scene from the movie in which HAL defeats astronaut Frank Poole. Campbell points out that in the movie HAL makes a nonoptimal but “trappy” move that tricks Frank into responding with a move that costs him the game. Today’s chess-playing computers do not play that way, because setting such traps requires a great deal of real-world knowledge. Instead, Deep Blue relied on massive computation of the positions that result from particular moves: “Deep Blue is capable of searching up to two hundred million chess positions per second,” Campbell notes, a fact that “prompted Kasparov to comment that ‘quantity had become quality’” (86). 
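
Campbell’s contrast between HAL’s psychological trickery and Deep Blue’s raw calculation can be made concrete with a small sketch. The Python fragment below is only an illustration of the general technique of exhaustive game-tree search with pruning (here, negamax with alpha-beta cutoffs), applied to a toy take-away game rather than to chess so that it can run on its own; it is in no sense IBM’s code.

    # A sketch of brute-force game-tree search: negamax with alpha-beta pruning.
    # The "game" is a toy take-away game (remove 1, 2, or 3 counters; whoever
    # takes the last counter wins), chosen only so the example is self-contained.

    positions_examined = 0

    def negamax(pile, alpha=-float("inf"), beta=float("inf")):
        """Return the value of the position to the side to move: +1 win, -1 loss."""
        global positions_examined
        positions_examined += 1
        if pile == 0:
            return -1                     # the opponent just took the last counter
        best = -float("inf")
        for move in (m for m in (1, 2, 3) if m <= pile):
            best = max(best, -negamax(pile - move, -beta, -alpha))
            alpha = max(alpha, best)
            if alpha >= beta:             # cutoff: this line is already refuted
                break
        return best

    print(negamax(12), "after examining", positions_examined, "positions")

Even with pruning, the number of positions examined explodes as the search deepens, which is why Deep Blue’s two hundred million positions per second mattered so much.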

In “The Talking Computer: Text to Speech Synthesis,” Bell Laboratories scientist Joseph P. Olive explains, in considerable detail, the complexities of and approaches to teaching computers to produce intelligible speech from written text. Olive summarizes as follows: 

    As we near the year 2001, do we have a computer that sounds like the voice of HAL portrayed by actor Douglas Rain—personable, warm, emotional, human-sounding? The answer is no, not yet. 

    At Bell Laboratories we have developed a text-to-speech synthesizer that is highly intelligible in several languages, including English, German, French, Spanish, Russian, Chinese, and Navajo. . . . [Y]et, although capable of both reading or generating such complex text as e-mail or newspaper stories, the synthesizer does not replicate the human voice. It has a distinct “machine” sound. (124)

The biggest problem is giving computers the real-world knowledge they would need to understand the contexts in which sentences appear and so apply the appropriate rules to make the speech sound natural. As Olive explains, “A computer can only perform tasks requiring very limited understanding. It can maintain a dialogue about ordering a pizza but not about a subject matter that has not been previously defined” (125). 
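
Olive’s pizza example gives a feel for what “very limited understanding” means in practice. A dialogue system confined to a single, previously defined domain can get by with a few slots and keyword patterns, as in the hypothetical Python sketch below; it is my own illustration rather than a system described in the book, and nothing in it generalizes beyond the handful of words it was told to expect.

    # A toy restricted-domain dialogue: fill two predefined "slots" (size and
    # topping) by keyword matching, then confirm the order. Everything the
    # program can handle has been spelled out in advance.
    import re

    SLOTS = {
        "size": r"\b(small|medium|large)\b",
        "topping": r"\b(cheese|pepperoni|mushroom|veggie)\b",
    }

    def update_order(utterance, order):
        """Fill whichever slots the utterance mentions; ignore everything else."""
        for slot, pattern in SLOTS.items():
            match = re.search(pattern, utterance.lower())
            if match:
                order[slot] = match.group(1)
        return order

    def next_prompt(order):
        missing = [slot for slot in SLOTS if slot not in order]
        if missing:
            return "What " + missing[0] + " would you like?"
        return "Confirmed: one " + order["size"] + " " + order["topping"] + " pizza."

    order = {}
    for utterance in ["I'd like a pizza", "Make it large", "Pepperoni, please"]:
        order = update_order(utterance, order)
        print(next_prompt(order))

Ask it about anything other than pizza and it has nothing to say, which is precisely Olive’s point about subject matter that has not been previously defined.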

Another expert in computer speech, Raymond Kurzweil, addresses this problem in “When Will HAL Understand What We Are Saying? Computer Speech Recognition and Understanding.” Here are Kurzweil’s predictions: 

    Based on Moore’s law [Moore’s law is the prediction by Gordon Moore, a founder of Intel, that each year the cost of integrated circuits would halve while the number of transistors on them, and thus the processing power, would double], and the continued efforts of over a thousand researchers in speech recognition and related areas, I expect commercial-grade continuous-speech dictation systems for restricted domains, such as medicine or law, to appear in 1997 or 1998. And, soon after, we will be talking to our computers in continuous speech and natural language to control personal-computer applications. By around the turn of the century, unrestricted-domain, continuous-speech dictation will be the standard. An especially exciting application of this technology will be listening machines for the deaf analogous to reading machines for the blind. They will convert speech into a display of text in real time, thus achieving Alexander Graham Bell’s original vision a century and a quarter later. (161)

Kurzweil points out that even today’s supercomputers do not have anything approaching the capacity of the human brain, which has “about a hundred billion neurons, each of which has an average of a thousand connections to other neurons” (163). Advances in circuit design, such as three-dimensional circuits, will increase the capacity of computers, but even more important will be breakthroughs in architecture, the arrangement of circuits. Kurzweil suggests that in the near future we might “reverse engineer” the brain to create a computer with the same architecture—literally scanning a brain “to ascertain the architecture of interneuronal connections in different regions” (165). This suggestion leads Kurzweil to some fascinating speculation about the possibility of creating a duplicate of someone’s brain during the next century! 
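
The arithmetic behind Kurzweil’s comparison is easy to reproduce. The sketch below multiplies out his figures and then, purely as an illustration, treats one interneuronal connection as one hardware component (a gross simplification) to ask how many Moore’s-law doublings separate a memory chip of a few hundred million transistors, the figure Kuck cites, from that total.

    import math

    neurons = 100e9                 # "about a hundred billion neurons"
    connections_per_neuron = 1e3    # "an average of a thousand connections"
    brain_connections = neurons * connections_per_neuron
    print("interneuronal connections: about %.0e" % brain_connections)   # 1e+14

    # Illustrative only: one connection treated as one component, and one
    # doubling per year as in the Moore's-law gloss quoted above.
    chip_transistors = 3e8          # "several hundred million transistors"
    doublings = math.log2(brain_connections / chip_transistors)
    print("about %.0f doublings, i.e., on the order of %.0f years" % (doublings, doublings))

Such back-of-envelope figures say nothing about architecture, which is exactly Kurzweil’s point: capacity alone is not enough without the right arrangement of circuits.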

In “From 2001 to 2001: Common Sense and the Mind of HAL,” Douglas Lenat emphasizes the importance of building into a computer a base of real-world knowledge, something that he and his colleagues at the Microelectronics and Computer Technology Corporation have been doing with a program called CYC. The idea is to create a means for representing knowledge and to “prime the knowledge pump” of a computer by providing it with the millions of bits of information that a person knows, such as the facts that “Napoleon died on St. Helena” and “Wellington was greatly saddened.” Knowing such things, a computer will be able to infer, as a person does, “that Wellington heard about Napoleon’s death, that Wellington outlived Napoleon, and so on” (203). The availability of such common sense is a large factor in our ability to understand language and to function intelligently. Lenat’s program “to bring a HAL-like being into existence” consists of three steps: 

    1. Prime the pump with the millions of everyday terms, concepts, facts, and rules of thumb that comprise human consensus reality—that is, common sense. 

    2. On top of this base, construct the ability to communicate in a natural language, such as English. Let the HAL-to-be use that ability to vastly enlarge its knowledge base. 

    3. Eventually, as it reaches the frontier of human knowledge in some area, there will be no one left to talk to about it, so it will need to perform experiments to make further headway in that area. (203)
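
A toy example suggests what “priming the knowledge pump” might look like. The Python sketch below is my own illustration, not CYC’s actual representation language: it stores the Napoleon and Wellington facts explicitly and forward-chains over two commonsense rules to reach the conclusions Lenat mentions.

    # Facts are tuples; "rules of thumb" are encoded as simple implications.
    facts = {
        ("died", "Napoleon"),
        ("saddened_by_death_of", "Wellington", "Napoleon"),
    }

    def implied(facts):
        """Yield facts that follow from two commonsense rules of thumb."""
        for fact in list(facts):
            if fact[0] == "saddened_by_death_of":     # grief implies hearing of the death
                yield ("heard_about_death_of", fact[1], fact[2])
            if fact[0] == "heard_about_death_of":     # one can only hear of a death one survives
                yield ("outlived", fact[1], fact[2])

    def forward_chain(facts):
        """Apply the rules repeatedly until no new facts appear."""
        while True:
            new = set(implied(facts)) - facts
            if not new:
                return facts
            facts = facts | new

    for fact in sorted(forward_chain(facts)):
        print(fact)
    # Among the output: ('heard_about_death_of', 'Wellington', 'Napoleon')
    # and ('outlived', 'Wellington', 'Napoleon')

The hard part, as Lenat’s first step makes clear, is not the inference machinery but accumulating the millions of such facts and rules of thumb in the first place.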

Stork’s collection contains many other fascinating essays that provide a unique opportunity to explore the cutting edge of current computing technology and to find out what leading experts foresee for the future. It is a must-read for anyone interested in how real science has a way of transcending even the wildest science fiction. 
   

References

Clarke, Arthur C. 2001: A Space Odyssey. London: Hutchinson/Star, 1968. 

---. The Sentinel (Masterworks of Science Fiction). New York: Berkley, 1986. 

Stork, David, ed. HAL’s Legacy: 2001’s Computer as Dream and Reality. Cambridge, MA: MIT Press, 1997. 
 

 
 
Questions for Discussion and Review 

The following questions are based on the preceding text. 

1. Who was HAL, and when was he "born"? 

2. According to the essay, what are the two types of science fiction writer? 

3. What contribution did Arthur C. Clarke make to aerospace technology? What characteristic of his science fiction makes it exceptional? 

4. What sorts of tasks are easily simulated on computers, and what sorts are not? 

5. Was Clarke correct in predicting that computers in 1997 would have memories consisting of millions of parts? Explain. 

6. In 2001, the computer HAL defeats Frank Poole in a game of chess. Was Clarke correct in predicting that computers would one day be able to beat human opponents? Are modern chess-playing supercomputers like HAL? Explain. 

7. How successful have we been in creating computers that, like HAL, communicate in human-sounding speech? Explain. 

8. What is the biggest obstacle to creating computers that sound human? 

9. According to Raymond Kurzweil, how long will it be before we have computers that can reliably take dictation dealing with any subject area, or domain? 

10. According to Kurzweil, what did Alexander Graham Bell dream of creating? When will Bell's dream become a reality? 

11. According to Kurzweil, how long will it be before scientists are capable of scanning a brain and downloading it to a computer? 

12. What is the goal of Douglas Lenat's CYC project? 
 

 
