Oh yeah. Searle. I heart Searle.
Note: do not read if you don’t care for philosophy and/or speculations regarding the way the mind understands, or if you just don’t want to read my crappy essays.
“A Whole-Systems Response to the Chinese Room”
Of the several responses to dualism—the idea that the mind and the body are entirely separate substances—one that has attracted considerable attention is functionalism, the idea that mental states are functional states of the brain and that the mind’s relationship to the body is analogous to software’s relationship to a computer. One form of this view—strong AI—claims that a properly programmed computer is a mind rather than just a model of the mind (Searle, 67), and that the processes and outputs of such a computer demonstrate an understanding similar to the kind we exhibit. One notable argument against strong AI is put forth by John Searle in his essay “Minds, Brains, and Programs.” Using the example of a man locked in what he calls the “Chinese Room,” Searle claims to show that the kind of mind demonstrated by computers is not analogous to the human mind.
In Section I of this essay I explain Searle’s Chinese Room and the relationships his example bears both to understanding and to strong AI, in order to explain what he attempts to show with his demonstration. In Section II I compare what I take to be the logic of the Chinese Room with a more real-life example, to demonstrate that Searle’s argument fails to show that strong AI is false because it does not take into account that understanding requires context. Following this, I sharpen my argument by comparing and contrasting it with the systems reply, another objection to Searle’s Chinese Room.
Section I
Searle sets up his Chinese Room example by asking the reader to imagine him locked in a room full of books containing Chinese writing. He notes that in this scenario, he has no knowledge of Chinese—he cannot read it, he cannot understand it when it is spoken, he cannot tell Chinese characters from random squiggles—he doesn’t even have knowledge of China (Searle, 68). Knowing this, he then asks us to imagine that, while in this room, he is constantly given input, or Chinese writing, from outside. His instructions in English are to compare these input characters to another set of symbols in the books, and then compare the information from the books with a third set of characters he is given. He responds to this third set of characters based on the comparisons he makes in the books, and outputs these responses back to the outside world (Searle, 69).
For native Chinese speakers outside of the room, Searle’s output responses are indistinguishable from those produced by native Chinese speakers. This, Searle claims, is him behaving like a computer—his output is based on the input that enters the room, and the “program,” the set of English instructions he is given, helps him formulate output that is indistinguishable from responses given by native Chinese speakers (Searle, 69).
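To make the rule-following concrete, here is a minimal Python sketch of the room as a lookup procedure. It collapses Searle’s batches of symbols and English rules into a single made-up table, so everything in it (the characters, the pairings) is invented for illustration; the only point it preserves is that plausible-looking output is produced without any representation of meaning.

# A toy sketch of the Chinese Room as pure symbol manipulation.
# The "rulebook" is invented for illustration: it simply pairs input
# strings with output strings, with no representation of what either means.
RULEBOOK = {
    "你好吗": "我很好",           # hypothetical pairing: a greeting and a reply
    "你叫什么名字": "我叫王明",    # hypothetical pairing: a question and an answer
}

def chinese_room(input_symbols: str) -> str:
    """Follow the 'English instructions': match the incoming characters
    against the books and copy out the characters they are paired with."""
    # The default reply is just another uninterpreted string.
    return RULEBOOK.get(input_symbols, "请再说一遍")

print(chinese_room("你好吗"))  # fluent-looking output, zero understanding inside the room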
The goal of Searle’s Chinese Room is to argue against strong AI—that is, against the idea that a properly programmed computer is actually a mind, rather than just a model of a mind. In order to understand how Searle comes to this conclusion, it is important to see how he defines both understanding and strong AI. Searle describes understanding in relation to the representation of things or concepts. He notes that when a human reads a story, he or she can correctly answer questions that are derived from the story but that involve information the story never explicitly provides. He uses the example of a story about a man who orders a hamburger at a restaurant and then storms out without paying because the hamburger arrives at his table horribly burnt. The human can correctly answer the question “Did the man eat the hamburger?” even though that information was never explicitly stated, due to his or her understanding of the story (Searle, 68).
As for the concept of strong AI, Searle describes it as the idea that a properly programmed computer can actually be a mind, instead of just a representation of one. Rather than just demonstrating how the mind works, strong AI proponents claim, properly programmed computers can literally understand—e.g., read a story about a man who angrily left a restaurant because of a burnt hamburger and correctly determine whether or not he ate it—and possess cognitive states, and thus exist as minds rather than mere models.
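For a sense of how a program could pull off the hamburger trick without anything like comprehension, here is a caricature in Python. The “restaurant script” and its entries are entirely made up; the sketch only illustrates how the unstated answer can be filled in from canned associations rather than from understanding the story.

# A caricature of script-style story question answering. The "script"
# below is invented for illustration: it hard-codes which outcomes of a
# restaurant visit go with eating the food, so the program can answer
# "Did the man eat the hamburger?" without understanding hamburgers,
# anger, or restaurants.
RESTAURANT_SCRIPT = {
    "stormed_out_without_paying": {"ate_food": "no"},
    "paid_and_left_a_tip": {"ate_food": "yes"},
}

def did_he_eat(story_outcome: str) -> str:
    # Look up the canned inference; fall back to "unknown" for unscripted outcomes.
    inferred = RESTAURANT_SCRIPT.get(story_outcome, {})
    return inferred.get("ate_food", "unknown")

print(did_he_eat("stormed_out_without_paying"))  # prints "no"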
For Searle, both of these definitions play into his denial of strong AI. An important component of the Chinese Room example is the fact that the Searle isolated in the room fails to understand Chinese in Searle’s own sense of understanding. Even though he can take the input, manipulate it, and produce an output that is, to any native Chinese speaker, indistinguishable from responses produced by any other native Chinese speaker, Searle himself fails to understand what the symbols mean. This lack of understanding, coupled with the fact that he is functioning as a properly programmed computer in the example, demonstrates for Searle that a properly programmed computer is not equivalent to a human mind.
He essentially compares his situation in the room with what occurs in computers. If Searle in the example is doing everything a computer that appears to understand Chinese does—taking in input, processing it by manipulating symbols, and providing an output—but fails to understand Chinese, how can it be said that the computer understands Chinese? A computer properly programmed to output Chinese can appear to understand but does not really understand it at all. Because of this lack of understanding, Searle claims, it seems inappropriate to accept strong AI—to claim that properly programmed computers essentially are minds.
Searle puts a lot of weight on the importance of understanding. He wants to demonstrate that a computer can look like it understands Chinese—but only so far as a door with a motion sensor can understand when to open, or a can opener understands how to open a can. He wishes to draw a connection between attaching the idea of understanding to inanimate objects and the fact that people, as he puts it, “can follow formal principles without understanding” (Searle, 71). In other words, a person can act much like a door with a motion sensor—if the motion sensor detects movement, it sends an electrical signal, which triggers the door to open—by simply following logical steps (much like a Turing machine). However, if the situation is reversed, Searle claims that you cannot have a door with a motion sensor act like a person—it cannot gain the kind of understanding that a person has.
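The motion-sensor door can be written down the same way: a minimal sketch, in Python, of a controller that follows a formal rule (motion means open) the way a Turing machine follows its transition table. The rule itself is my own simplification, not anything from Searle’s text.

# A two-state controller for the motion-sensor door: pure rule-following.
# Nothing here resembles understanding; the next state is a mechanical
# function of the current state and the sensor reading.
def door_controller(state: str, motion_detected: bool) -> str:
    """Return the door's next state given its current state and the sensor input."""
    if motion_detected:
        return "open"
    # No motion: an open door closes, a closed door stays closed.
    return "closed" if state == "open" else state

state = "closed"
for motion in (False, True, False):
    state = door_controller(state, motion)
    print(state)  # prints: closed, open, closed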
Section II
Drawing from this idea, he wishes to claim that minds are capable of some sort of deeper understanding than symbol-manipulating computers. In other words, he wants his Chinese Room example to show the dissimilarity between the understanding demonstrated in the example and the understanding we all experience when we, for example, read the sentence “the dog is brown.” If the example is examined more closely, though, I do not think it demonstrates exactly what Searle wants it to demonstrate—that is, I do not think it succeeds as an argument against strong AI.
It is true that when Searle isolates Example Searle (ES) in the room and has him take in Chinese characters and produce uninterpreted outputs, ES fails to understand Chinese. However, I do not think his example is an accurate representation of how understanding arises. ES is all alone in the room. Aside from the set of English instructions telling him which input characters go with which characters in the books and which characters in the books go with which third character, ES has nothing else to go on—no background, no scenarios in which to see the use of the Chinese characters, no relation of these unfamiliar characters to a language he does know or even to components of his world (e.g., “this squiggle here represents the English word ‘mouse’ or the object ‘chair’”). In other words, ES is isolated from all other context in which these characters could be applicable, and it seems unfair of us to expect that ES, in this situation, could possess any level of understanding (that is, understanding in the same sense we gain when we read the sentence “the dog is brown”) with regard to the Chinese language.
Looking at the Chinese Room example from this angle, I think that it is analogous to a situation in which we could take, for example, the syntax-understanding part of the brain, isolate it from all other parts, and ask it to understand the phrase “the dog is brown.” Assuming that this isolation were possible, it would seem odd to assume that this part of the brain could understand the sentence as we do. It does not understand “dog” in the sense that it represents a four-legged, furry mammal, and it does not understand “brown” in the sense that it represents a color that can be formed by mixing two complementary colors. It understands that “noun is adjective,” and that “dog” represents a noun and “brown” represents an adjective in this case, but that is probably the extent to which anyone would credit understanding to the syntax-understanding part of our brain. This part of our brain is like Example Searle, and the words “dog” and “brown,” apart from the roles they play in syntax, may as well be random Chinese characters.
However, if we examine our understanding of the sentence “the dog is brown,” it becomes apparent that our understanding of this sentence goes far beyond its syntax—we know what “dog” is due to various other experiences, mental routes, and inputs, and we know what “brown” is due to various components of the brain—the vision center, memory (since we’ve probably seen brown before), etc. This is because our mind—and our understanding as we experience it—does not arise from isolated components of the brain. Rather, it arises from the combination of the different parts of the brain as well as the inputs into the system of the brain. I need a syntactical understanding of the sentence to understand how brown relates to dog, but I need an experiential understanding of what a dog is to know how brown can be applied, and so on. Isolating any part of the brain and asking it to understand something will not produce the same type of understanding we are used to, because we use many different components—the whole system of the mind—when arriving at an understanding of something.
The problem I see with Searle’s argument is that by isolating ES, he is in effect assuming that one component of the mind is responsible for understanding. In other words, he eliminates the idea of the system of the mind arriving at understanding and instead focuses on one aspect of it, claiming that what ES is doing is merely symbol manipulation, moving uninterpreted Chinese characters around and producing a recognizable output for those who understand Chinese while still failing to actually understand what the symbols mean. If we take understanding to arise out of this more compartmentalized view of the mind—that is, if we isolate processes that produce different forms of understanding and ask them to form an understanding of something—it is true that the compartmentalized parts of the mind, such as the syntax-understanding part of the brain discussed above, are merely manipulating symbols (‘dog’=noun, ‘brown’=adjective, and so on). However, what I think Searle fails to look at is that understanding as we see it arises out of the entire system and all inputs into it.
While my objection may initially seem like a form of the systems reply as discussed and replied to by Searle, it is distinctly different. The systems reply argued against by Searle claims that while the individual (ES) does not understand Chinese, the entire system does. Searle argues against this by claiming that even if ES internalized the entire system, ES still would not understand, and therefore the entire system would not understand (Searle, 72-73). What I am arguing is different—understanding lies instead in the different communications and connections within the system, as well as in outside influences that are interpreted through its components (for example, light interpreted through the vision center of the brain). There is no way that all inputs into the system can be internalized into, say, the syntax-understanding part of the brain, because the inputs exist outside of the system and because the system relies so heavily on connections between components.
Computer programming as it stands today may only be able to represent an example like the one demonstrated in Searle’s Chinese Room—that is, it may only be able to produce computers and programs that run a single input → symbol manipulation → output chain. Regardless, I think that Searle’s Chinese Room example fails as an argument against strong AI because of the way the example represents understanding.
Rather than seeing understanding as arising from a multitude of different functions, the isolation of ES in the Chinese Room seems to suggest viewing understanding as based on individual components (in his example, ES in the room). For Searle, the fact that ES does not understand Chinese despite producing output that looks like he does is indicative of a failing of strong AI. However, I think Searle’s example is only indicative of trying to get at understanding by looking at the mind piecemeal rather than as a whole—analogous to trying to derive an understanding of the sentence “the dog is brown” based solely on the interpretation of the sentence by the syntax-understanding part of the brain rather than trying to get at it from the whole system of the mind.
References
Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences 3, 417-424.
