Engineering Intelligence: Computers and the Cognitive Revolution in Psychology
A Review of Steven Pinker’s How the Mind Works

In his landmark study The Structure of Scientific Revolutions, Thomas Kuhn argued that science cannot be viewed as a steady accumulation of knowledge. Reviewing the history of any science, one finds, instead, a series of orthodox world views, or paradigms, each of which reigns for a time and then is replaced by a usurper in what can only be called a revolution. Consider, for example, the science of mechanics, the branch of physics that deals with moving bodies. According to the naive view first advanced by Aristotle and still held by people who have not studied elementary physics, when force is imparted to a body, it moves until that force is used up, at which point it comes to rest. This intuitive view prevailed for more than a thousand years. Eventually, however, people began to see a problem with the theory. Imagine a ball rolling along a smooth surface. After an initial push, it will continue for a while and then come to rest some distance from its starting point. According to Aristotle, the ball has used up the force imparted to it. If, however, you grease the ball or make the surface smoother, the ball will go farther. The better the grease and the smoother the surface, the farther the ball will travel from the same initial push. Aristotelian mechanics is at a loss to explain such a phenomenon. The fact that the ball goes farther when greased is, from the point of view of Aristotelian mechanics, an anomaly, something the theory cannot explain. After all, the grease has not added force to the ball.
Imagine, now, a ball covered with a perfect grease or rolling on a perfectly smooth surface. Under such conditions, once an initial force is imparted to the ball, it will continue in a straight line forever. This observation led to a revolution in physics, instigated by Galileo Galilei and later codified by Isaac Newton. Newton’s Law of Inertia states that a body, once set in motion, will persevere in uniform motion in a straight line until acted upon by another force (such as the friction of the ball against the surface on which it rolls). Newton’s theory explains the commonplace observation that a ball, when pushed, eventually comes to a stop, but it also explains why the greased ball goes farther. In other words, it explains all the facts accounted for by the older theory and also explains the anomaly of the greased ball. That is how science works. A theory is advanced. It proves to have enormous explanatory power and so becomes the accepted paradigm. For some period thereafter, called by Kuhn a period of “normal science,” people make steady progress, reading the book of nature by the light of the orthodox theory. Then an anomaly is discovered that cannot be explained by the theory. Scientists try to shore up the theory, to alter it to fit the observed facts, but eventually someone proposes a new theory that accounts both for the known facts and for the anomaly. That theory becomes the new paradigm, and another period of normal, paradigmatic science ensues.
In the past three decades, the relatively new science of psychology has undergone a paradigm shift of the kind that Kuhn described. The shift is so recent that the psychology textbooks being used in high schools in the United States today do not yet reflect it. Nonetheless, psychology today is a brave new world, one that would be barely recognizable to those who practiced that science in the 1940s. The dominant paradigm for most of the century, Behaviorism, has given way to a new, powerful synthesis known as cognitive science, which owes its impetus to the creation of the digital computer. Steven Pinker’s How the Mind Works, published in 1997, is as good an introduction to the new paradigm as one is likely to find. Before having a look at the contents of Pinker’s book, however, it will prove valuable to take a brief glance backward at the paradigm that has been replaced.
The Undoing of the Behaviorist Paradigm

In the nineteenth century, when the new field of psychology was emerging from natural philosophy, the dominant paradigm was Introspectionism. People reasoned that if they wished to explore the workings of the mind, the best way to approach the task was to reflect on the contents of what William James, founder of the first school of psychology in the United States, called the “stream of consciousness.” Nineteenth-century psychologists thus spent a lot of time reflecting on their own perceptions, sensations, intuitions, memories, habits, motives, dreams, desires, emotions, goals, and so on. Early in the twentieth century, the American psychologist John B. Watson challenged Introspectionism on the grounds that it was unscientific. If a given Introspectionist claimed to have a particular subjective experience, there was no way for a colleague to test that claim through observation, for, by definition, a subjective experience is accessible only to the person having it. Watson argued that the proper subject of psychology was not subjective experience but behavior, and the school of psychology that he founded became known as Behaviorism.
For the greater part of the twentieth century, most orthodox (i.e., Behaviorist) psychologists in the United States and Great Britain held that any discussion of mental categories or contents was unverifiable nonsense. The foremost Behaviorist in America, the brilliant and irascible B. F. Skinner, went so far as to deny that conscious experience had any place in a scientific psychology. Rats, pigeons, and people were biological machines, acted upon by external forces (stimuli) that cause them to react (to respond, or behave) in particular ways, much as a rock responds to the stimulus of being pushed off a cliff by falling. Learning, from the Behaviorist point of view, consisted of building up a repertoire of responses to stimuli, a repertoire shaped by positive and negative reinforcement.
Behaviorism had enormous influence on the social sciences and education. Sociologists (taking their cue from Behaviorism and from the historical materialism of Karl Marx) argued that the ills of society could be cured by changing people’s conditioning. Educators banished from their discussions any talk of students’ understanding, knowledge, feelings, attitudes, and so on, and replaced such talk with behavioral objectives and lists of positive and negative reinforcements. It became unfashionable, for example, to speak of a student’s understanding the difference between a noun and a verb. Instead, one was supposed to say that the student, on being exposed to the stimuli of the lesson, would then behave in a certain way (for example, he or she would be able to sort a list of words under the headings Noun and Verb). The desired behavior (e.g., sorting the nouns and verbs correctly) would be reinforced by positive or negative stimuli (e.g., M&Ms or detentions) until it was learned. For a while, it looked as though Behaviorism would remain the dominant paradigm until the trump of doom.
In the late 1950s and early 1960s, however, psychologists were suddenly confronted by anomalies that could not be accounted for by the Behaviorists’ stimulus-response model. In his devastating review of Skinner’s book Verbal Behavior, the linguist Noam Chomsky pointed out the inability of Behaviorism to account for certain aspects of language, such as the creation of utterances that are completely original and yet completely grammatical. People do not simply parrot back what they have heard. Instead, they make up entirely new sentences, ones that have never before occurred in the history of language use, based on internalized structures and rules. For example, it is highly likely that no one has ever before uttered this sentence: “My pet wolverine refuses to practice the tuba on alternate Thursdays.”
In another blow to Behaviorism, the respected psychologist Karl Lashley delivered a famous paper, “The Problem of Serial Order in Behavior,” pointing out that certain kinds of serial behaviors, such as improvisation by a jazz pianist, occur too quickly to be described as a chain of stimuli and responses. There simply isn’t time for nerve signals to travel back and forth between the fingers and the brain before each note. Instead, the pianist must be “playing out,” in one direction, an internal schema, or model.
Such observations were the death knell for Behaviorism, but it is likely that psychologists would still be shunning discussion of internal states if they had not had ready to hand an example of another kind of entity with internal states—the digital computer. It is possible, of course, to describe a computer in terms of its inputs (“stimuli”) and its outputs (“behaviors”), but doing so leaves out most of what is interesting about computers: the processing and storage that occur in between. Computers gave psychologists an entirely new model for their science: the brain is the machine, the wetware, that takes care of processing and storage, and the mind is the program that runs on that wetware. In Steven Pinker’s words,
The computational theory of mind resolves the paradox. It says that beliefs and desires are information, incarnated as configurations of symbols. The symbols are physical states of bits of matter, like chips in a computer or neurons in the brain. They symbolize things in the world because they are triggered by those things via our sense organs, and because of what they do once they are triggered. If the bits of matter that constitute a symbol are arranged to bump into the bits of matter constituting another symbol in just the right way, the symbols corresponding to one belief can give rise to new symbols corresponding to another belief logically related to it, which can give rise to symbols corresponding to other beliefs, and so on. Eventually the bits of matter constituting a symbol bump into bits of matter connected to the muscles, and behavior happens. The computational theory of mind thus allows us to keep beliefs and desires in our explanations of behavior while planting them squarely in the physical universe. It allows meaning to cause and be caused. (24-25)
A New View of Brains and Minds

Cognitive science views the brain as a processing machine, and three features of that machine stand out. First, the brain’s business is the processing of information: taking in data through the senses, transforming and storing it, and using it to generate behavior.
Second, the brain is made up of interconnected modules, each with its own functions. Much as a personal computer can contain a separate math co-processor dedicated to arithmetic operations, so the brain contains separate, dedicated processors for such activities as recognizing faces (a fact demonstrated, Pinker points out, by brain lesions that cause prosopagnosia, an inability to distinguish one face from another that leaves other kinds of pattern recognition intact).
Third, as one would expect of a machine, the modules of the brain were designed, though not, according to standard cognitive science theory, by an intelligent designer; rather, they were shaped by the blind forces of natural selection. One would expect, on this theory, that human brains would be very good at the kinds of processing that were valuable to their foraging ancestors and very bad at the kinds that were not. This prediction is borne out by thousands of observations. For example, as Pinker points out, people without specific training are very bad at solving abstract problems in logic. If, however, the same abstract problem is couched in terms of contractual rules, so that violating the rule amounts to cheating, then the people who could not solve the problem before solve it with ease. That is because the ability to do abstract logical problems carried no selective value in the past, but the ability to recognize a cheater (someone who took more than his share of food, say, or an unfaithful spouse) did.
To summarize, the brain processes information, different parts of the brain are specialized for particular processing, and the processors that exist in the brain were designed to meet the needs of our evolutionary ancestors.
Applications of the Cognitive View

One of the virtues of Pinker’s book is its wide-ranging coverage of many, many issues in psychology. Pinker tackles hundreds of questions (Why do people like parks and waterfront property? Are emotions universal across cultures? Why has instruction in phonics been more successful than whole-language instruction? How do 3-D stereograms work? What causes autism, the disorder in which the sufferer has difficulty attributing minds to other people? What are the relative roles of nature and nurture in the development of human abilities such as intelligence? Does a faculty called “general intelligence” exist? Why do people fall in love?) and shows how these can be answered within the cognitive science paradigm. Let’s consider two of Pinker’s examples.
For years, people have debated the relative roles of nurture and nature in human development. People on the left, from liberals to Marxists, tend to favor explanations related to nurture: given the right environment, anyone can be made into a monster or a saint, a dullard or a genius. People on the right, from conservatives to Fascists, tend to favor explanations related to nature: there are natural inequalities among people, life is a struggle to survive, and those with money and power are simply the ones who are best adapted and therefore deserve what they have. To Pinker and other cognitive scientists, the familiar compromise statement that people’s abilities depend upon a complex interaction between nature and nurture is downright silly, for it is too vague to explain anything. Pinker makes the point by inviting us to consider how empty an analogous statement would sound in any other domain.
Cognitive science not only helps us make sense of ancient, grandiose conundrums such as the nature of intelligence and the relative importance of nature and nurture, but also has many, many practical applications. Consider, for example, the teaching of reading. In the 1960s and 1970s, linguists working within a cognitive science framework demonstrated conclusively that brains contain circuitry specifically designed for learning spoken language. A child does not learn the rules of a spoken language by imitation or by dedicated instruction. No one teaches the average English-speaking child the rules governing the ordering of adjectives, yet a six-year-old can tell you that “the little, green VW microbus” sounds right but that “the VW green little microbus” sounds wrong. The brain comes equipped with circuitry specifying the abstract features of possible languages, and the child learns a particular language when what he or she hears fills in the slots in the abstract design. For example, in English the object typically comes after the verb (The boy kisses the girl), whereas in Latin it typically comes before:
Puer puellam osculat. [The boy (subject) the girl (object) kisses.]
In the 1980s, educators in the United States, excited about the linguists’ discovery that language learning is innate, proposed a new approach to reading instruction. In the past, teachers had laboriously drilled students in sound-symbol correspondences, using a technique known as phonics instruction. Students were systematically taught to recognize the written symbols corresponding to sounds in English: cat, bat, mat, sat, rat, fat, hat, and so on. Half understanding what the linguists were saying, the educators decided that they could dispense with phonics and instead simply expose children to stories. Students’ innate language-learning mechanisms would take care of teaching them to read, as long as the stories they were exposed to were interesting enough to engage their attention. This so-called “whole language” approach proved to be a dismal failure, and cognitive science has a ready explanation. What the educators didn’t realize is Pinker’s point that the processing modules that exist in the brain are those that evolved to meet past purposes. The brain evolved specific processors for intuiting the grammatical, phonological, and semantic features of spoken language, but writing is a recent cultural invention. There simply hasn’t been enough time, evolutionarily speaking, for dedicated processors to develop for mapping written symbols onto spoken ones. The moral is clear: education should step in to meet needs not already provided for by our neural machinery. People have the neural machinery to learn the grammatical difference between who (subject) and whom (object) simply by being exposed to enough examples of so-called “proper” usage, without specific instruction in the difference. However, they do not have inborn, hardwired machinery for learning that one particular set of squiggles on a piece of paper stands for the sound who (hoo) and another stands for the sound how (hou).
The interpretation of the squiggles has to be taught.
References

Chomsky, Noam. “A Review of B. F. Skinner’s Verbal Behavior.” Language 35 (1959): 26-58.
Kuhn, Thomas S. The Structure of Scientific Revolutions. Chicago: University of Chicago Press, 1962.
Pinker, Steven. How the Mind Works. New York: Norton, 1997.
Questions for Discussion and Review
The following questions are based on the preceding text.
13. People’s logical reasoning abilities dramatically improve when logic problems are couched in terms of contractual rules. In what way does this fact provide evidence for the idea that the processors in our brain were designed by an evolutionary process?