
What is blogging?

I was living in the dank, dull doldrums of Vancouver when Watson debuted on Jeopardy! back in February of this year. However, tonight I was able to catch an old repeat show from when the IBM computer competed against Ken Jennings and Brad Rutter.

All I have to say is this: how freaking insane is it that we have the technology to create AI computers that are able not only to compete against but to beat humans in a real-time trivia situation? Some philosophers like John Searle argue that Watson can’t really “think,” but how much longer until computers become so sophisticated that the line between computation and thought becomes totally blurred?

Crazy times, 2011, crazy times.

Forget everything you ever knew about…wait, what was this essay about?

Oh yeah. Searle. I heart Searle.


Note: do not read if you don’t care for philosophy and/or speculations regarding the way the mind understands, or if you just don’t want to read my crappy essays.


“A Whole-Systems Response to the Chinese Room”

Of the several different responses to dualism—the idea that the mind and the body are entirely separate substances—one that has generated a large number of responses is functionalism, the idea that mental states are functional states of the brain and that the mind’s relationship to the body is analogous to software’s relationship to a computer. One form of this response—the idea of strong AI—claims that a properly programmed computer is a mind rather than just a model of the mind (Searle, 67), and that the processes and outputs of such a programmed computer demonstrate an understanding similar to the kind of understanding we exhibit. One notable argument against strong AI is put forth by John Searle in his essay “Minds, Brains, and Programs.” Using the example of a man locked in what he calls the “Chinese Room,” Searle claims to show that the mind as demonstrated by computers is not analogous to the human mind.

In Section I of this essay I explain Searle’s Chinese Room and then lay out the important relationships his example has both with understanding and with strong AI, in order to explain what he attempts to show with his demonstration. In Section II I compare what I take to be the logic used in the Chinese Room with a more realistic example to show that Searle’s argument fails to refute strong AI because it does not take into account the fact that understanding requires context. Following this, I clarify my argument by comparing and contrasting it with the systems reply, another objection to Searle’s Chinese Room.


Section I

Searle sets up his Chinese Room example by asking the reader to imagine him locked in a room full of books containing Chinese writing. He notes that in this scenario he has no knowledge of Chinese—he cannot read it, he cannot understand it when it is spoken, he cannot tell Chinese characters from random squiggles—and he doesn’t even have knowledge of China (Searle, 68). Knowing this, he then asks us to imagine that, while in this room, he is constantly given input, or Chinese writing, from outside. His instructions, written in English, are to compare these input characters to another set of symbols in the books, and then to compare the information from the books with a third set of characters he is given. He responds to this third set of characters based on the comparisons he makes in the books, and sends these responses back to the outside world as output (Searle, 69).

For native Chinese speakers outside the room, Searle’s output responses are indistinguishable from those given by native Chinese speakers. This, Searle claims, is him behaving like a computer: his output is based on the input that enters the room, and the “program,” the set of English instructions he has been given, helps him formulate output that is indistinguishable from the responses given by native Chinese speakers (Searle, 69).
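To make the input, rulebook, output structure concrete, here is a minimal sketch in Python. It is only an illustration of pure symbol manipulation, not anything Searle himself gives: the rulebook contents are invented, and romanized strings stand in for Chinese characters. The program pairs inputs with outputs by lookup alone, with no representation of what any symbol means.

# A minimal sketch of the "room" as pure symbol manipulation. The rulebook
# entries below are made up for illustration; romanized placeholders stand in
# for Chinese characters. Nothing here models the meaning of any symbol.

RULEBOOK = {
    "ni hao ma?": "wo hen hao, xie xie.",          # canned reply paired with this input
    "ni jiao shenme mingzi?": "wo jiao Searle.",   # another input/output pairing
}

def chinese_room(input_symbols: str) -> str:
    # Follow the "English instructions": look the input up and hand back
    # whatever string the rulebook pairs with it.
    return RULEBOOK.get(input_symbols, "dui bu qi, wo ting bu dong.")

print(chinese_room("ni hao ma?"))  # a fluent-looking reply, produced without understanding

From the outside the replies can look competent; on the inside there is only string matching, which is exactly the gap Searle is pointing at.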

The goal of Searle’s Chinese Room is to argue against the claim that strong AI is true—that is, against the idea that a properly programmed computer actually is a mind, rather than just a model for a mind. To see how Searle arrives at this conclusion, it is important to see how he defines both understanding and strong AI. Searle describes the concept of understanding by viewing it in relation to the representation of things or concepts. He notes that when a human reads a story, he or she can correctly answer questions that are derived from the story but that involve information the story never explicitly provided. He uses the example of a story about a man ordering a hamburger at a restaurant and then storming out without paying because the hamburger arrived at his table horribly burnt. The human can correctly answer the question “did the man eat the hamburger?” even though that information was never explicitly stated, because he or she understands the story (Searle, 68).

As for the concept of strong AI, Searle describes it as the idea that a properly programmed computer can actually be a mind, instead of just a representation of one. Rather than just demonstrating how the mind works, strong AI proponents claim, properly programmed computers can literally understand—e.g., read a story about a man who angrily left a restaurant because of a burnt hamburger and correctly determine whether or not he ate it—and possess cognitive states, and thus exist as minds rather than mere models.

For Searle, both of these definitions play into his denial of strong AI. An important component of the Chinese Room example is that the Searle isolated in the room fails to understand Chinese, in his own sense of understanding. Even though he can take the input, manipulate it, and produce an output that is, to any native Chinese speaker, indistinguishable from responses produced by any other native Chinese speaker, Searle himself fails to understand what the symbols mean. This lack of understanding, coupled with the fact that he is functioning as a properly programmed computer in the example, demonstrates for Searle that a properly programmed computer is not equivalent to a human mind.

He essentially takes his own situation and compares it with what occurs in computers. If Searle in the example is doing everything a computer that appears to understand Chinese does—taking in input, processing it and manipulating symbols, and providing an output—but he fails to understand Chinese, how can it be said that the computer understands Chinese, either? A computer properly programmed to output Chinese can appear to understand but really doesn’t understand it at all. Because of this lack of understanding, Searle thinks it is inappropriate for us to claim strong AI—to claim that properly programmed computers essentially are minds.

Searle puts a lot of weight on the importance of understanding. He wants to demonstrate that a computer can look like it understands Chinese—but only in the sense that a door with a motion sensor “understands” when to open or a can opener “understands” how to open a can. He wishes to draw a connection between attaching the idea of understanding to inanimate objects and the fact that people, as he puts it, “can follow formal principles without understanding” (Searle, 71). In other words, a person can act much like a door with a motion sensor—if the sensor detects movement, it sends an electrical signal, which triggers the door to open—by simply following the logical steps (much like a Turing machine). However, Searle claims that the reverse does not hold: you cannot have a door with a motion sensor act like a person, because it cannot gain the kind of understanding a person has.
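In the same spirit, here is a toy sketch, again in Python, of the motion-sensor door; the class and signal names are invented for illustration. It shows what “following formal principles without understanding” amounts to: the entire behavior is a single rule applied to an input signal.

# A toy version of the motion-sensor door. The whole "program" is one formal
# rule: open when motion is detected, close otherwise. Nothing in the loop
# understands anything.

class SensorDoor:
    def __init__(self) -> None:
        self.is_open = False

    def step(self, motion_detected: bool) -> None:
        # the entire rule the door "follows"
        self.is_open = motion_detected

door = SensorDoor()
door.step(motion_detected=True)
print(door.is_open)  # True: the door "responds correctly" without understanding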


Section II

Drawing from this idea, he wishes to claim that minds are capable of a deeper kind of understanding than symbol-manipulating computers. In other words, he wants to show with his Chinese Room example the dissimilarity between the understanding demonstrated in the example and the understanding we all experience when we, for example, read the sentence “the dog is brown.” If the example is examined more closely, though, I do not think it demonstrates exactly what Searle wants it to demonstrate—that is, I do not think that it succeeds as an argument against strong AI.

It is true that when Searle isolates Example Searle (ES) in the room and has him take in Chinese characters and produce uninterpreted outputs, ES fails to understand Chinese. However, I do not think his example is an accurate representation of how understanding arises. ES is all alone in the room. Aside from the set of English instructions telling him which input characters go with which characters in the books, and which characters in the books go with which third character, ES has nothing else to go on—no background, no scenarios in which to see the Chinese characters used, no way to relate these unfamiliar characters to a language he does know or even to components of his world (e.g., “this squiggle here represents the English word ‘mouse’ or the object ‘chair’”). In other words, ES is isolated from all other context in which these characters could be applicable, and it seems unfair of us to assume that ES, in this situation, could possess any level of understanding of the Chinese language (that is, understanding in the same sense we gain when we read the sentence “the dog is brown”).

Looking at the Chinese Room example from this angle, I think that it is analogous to a situation in which we could take, for example, the syntax-understanding part of the brain, isolate it from all other parts, and ask it to understand the phrase “the dog is brown.” Assuming that this isolation were possible, it would seem odd to assume that this part of the brain could understand the sentence as we do. It does not understand “dog” in the sense that it represents a four-legged, furry mammal, and it does not understand “brown” in the sense that it represents a color that can be formed by mixing two complementary colors. It understands that “noun is adjective,” and that “dog” represents a noun and “brown” represents an adjective in this case, but that is probably the extent to which anyone would credit understanding to the syntax-understanding part of our brain. This part of our brain is like Example Searle, and the words “dog” and “brown,” apart from the roles they play in syntax, may as well be random Chinese characters.

However, if we examine our understanding of the sentence “the dog is brown,” it becomes apparent that our understanding of this sentence goes far beyond its syntax—we know what “dog” is thanks to various other experiences, mental routes, and inputs, and we know what “brown” is thanks to various components of the brain—the vision center, memory (since we’ve probably seen brown before), and so on. This is because our mind—and our understanding in the way that we experience it—does not arise from isolated components of the brain. Rather, it arises from the combination of the different parts of the brain as well as the inputs into the system of the brain. I need a syntactic understanding of the sentence to understand how brown relates to dog, but I need experiential understanding of what a dog is to know how brown can be applied, and so on. Isolating any part of the brain and asking it to understand something will not produce the same type of understanding we are used to, because we use many different components—the whole system of the mind—when arriving at an understanding of something.

The problem I see with Searle’s argument is that by isolating ES, he is in effect assuming that one component of the mind is responsible for understanding. In other words, he eliminates the idea of the system of the mind arriving at understanding and instead focuses on one aspect of it, claiming that what ES is doing is merely symbol manipulation, moving uninterpreted Chinese characters around and producing a recognizable output for those who understand Chinese while still failing to actually understand what the symbols mean. If we take understanding to arise out of this more compartmentalized view of the mind—that is, if we isolate processes that produce different forms of understanding and ask them to form an understanding of something—it is true that the compartmentalized parts of the mind, such as the syntax-understanding part of the brain discussed above, are merely manipulating symbols (‘dog’=noun, ‘brown’=adjective, and so on). However, what I think Searle fails to look at is that understanding as we see it arises out of the entire system and all inputs into it.

While my objection may initially seem like a form of the systems reply as discussed and replied to by Searle, it is distinctly different. The systems reply, as Searle presents it, claims that while the individual (ES) does not understand Chinese, the entire system does. Searle argues against this by claiming that even if ES internalized the entire system, ES still would not understand, and therefore the entire system would not understand (Searle, 72-73). What I am arguing is different: understanding lies in the various communications and connections within the system, as well as in outside influences that are interpreted through its components (like, for example, light interpreted through the vision center of the brain). There is no way that all inputs into the system can be internalized into, say, the syntax-understanding part of the brain, because the inputs exist outside of the system and because the system relies so heavily on connections between components.

Computer programming as it stands today may only be able to realize something like the example demonstrated in Searle’s Chinese Room—that is, it may only be able to produce computers and programs that run a single input → symbol manipulation → output chain. Regardless, I think that Searle’s Chinese Room example fails as an argument against strong AI because of the way the example represents understanding.

Rather than seeing understanding as arising from a multitude of different functions, the isolation of ES in the Chinese Room seems to suggest viewing understanding as based in individual components (in his example, ES in the room). For Searle, the fact that ES does not understand Chinese, despite output that looks as if he does, is indicative of a failing of strong AI. However, I think Searle’s example is only indicative of trying to get at understanding by looking at the mind piecemeal rather than as a whole—analogous to trying to derive an understanding of the sentence “the dog is brown” based solely on the interpretation of the sentence by the syntax-understanding part of the brain, rather than trying to get at it from the whole system of the mind.


References

Searle, J. R. (1980). Minds, brains, and programs. The Behavioral and Brain Sciences 3, 417-424.

Intentionality

An essay. It’s on intentionality, which is basically the “aboutness” or directedness of mental states (the way a thought, desire, or perception is about or directed at something). Intentional mental states can include thirst, a desire for something specific, a thought that prompts an action, or a simple propositional thought. I think that’s all the premises you need; most of it is explained in the essay. I’m arguing against Searle’s position again. ‘Cause I want to.

In his chapter on intentionality, Searle puts forth two differing accounts of how the contents of intentional states are determined and of what properties of these intentional states constitute their having the contents they do. Of the two—externalism and internalism—he argues for internalism, the view that intentional contents result solely from what is inside our heads.

The idea of internalism basically states that the features that constitute intentional states exist entirely in our minds, or, as Searle puts it, “entirely between our ears.” This is in contrast to externalism, which says that intentional content is constituted at least in part by the external world—that is, that it is caused by relations between the mind and the external world. Searle argues his point by emphasizing the idea of conditions, or, more specifically, conditions of satisfaction. Conditions of satisfaction are conditions that allow for mind-world “fit”: in the case of desires, the conditions of satisfaction are satisfied when the world (reality) comes to fit the content of the intentional state, and in the case of beliefs or convictions (for example), the conditions are satisfied when the intentional state fits reality.

Searle asserts that these conditions are entirely represented in the mind and are entirely internal to it. He uses the example of water to demonstrate this. Something is defined as “water,” he says, if it matches the conditions of satisfaction for water that are set up in a person’s mind. In other words, if the external thing in question matches the “checklist” of traits that characterize the condition of “water” for a person, the thing is then deemed to be water. Here is where Searle draws the line between the internal and external influence: it is up to the external world whether or not an object fits these criteria, but it is up to the mind what the criteria are.

By the end, Searle has essentially asserted that the features that enable intentional states to arise are constituted by conditions of satisfaction, the properties of which are set up entirely by the mind itself and are internal to it. In other words, he has argued that (P1) all non-null intentional states have conditions of satisfaction that allow for a mind-world “fit” and that (P2) these conditions of satisfaction are all internal to the mind. Therefore, (C) internalism, the thesis that the features that constitute intentional states exist entirely in our minds, is true.

I believe that it is possible to refute Searle’s second premise, that all conditions of satisfaction are internal to the mind, but in order to do so it is important to break away from arguments based on language or social interaction. I think a stronger argument against internalism comes from the meaning carried by the intentional state itself. My argument keeps the same first premise, that (P1) all non-null intentional states have conditions of satisfaction that allow for a mind-world “fit,” but adds that (P2) some intentional states’ conditions of satisfaction exist independently of the mind’s internal “checklist” and are instead determined by external factors, and therefore concludes that (C) internalism is not a correct account of how the contents of intentional states are determined.

The best way to demonstrate this is with a primitive desire, like thirst. The intentional state of thirst has a very specific set of conditions of satisfaction, and the things that satisfy these conditions are not things the mind can specify on its own. That is, the mind cannot base the conditions for what satisfies thirst on anything that is solely internally constructed. There is a fixed set of things in the world that satisfy the desire of thirst (water, soda, juice, etc.), and the mind cannot create any other things or traits that satisfy it.

If we step away from using language as what assigns meaning to things, we can see that a better way to assign meaning to intentional states—and to argue that there is an external factor in at least some intentional states’ contents—is to rely on the intentional states themselves and on what actually satisfies their conditions of satisfaction. Looking at the intentional state of thirst in this way doesn’t rely on a social or language-based interpretation of the desire. The desire is the same regardless of what it is called, and the conditions of satisfaction are not something the mind can, on its own, determine.

The mind cannot assign, for example, the condition “sunlight satisfies thirst”—it has no control over which specific externally existing objects or states satisfy the desire of thirst. The things that satisfy thirst do not do so because they conform to our internal list of conditions of satisfaction—they do so because they are the only things that satisfy the desire. We can say whatever we want with regard to what satisfies thirst—the basic biological fact is that only certain things actually do satisfy it. This is the external influence. The mind, on its own, cannot “set” these conditions of satisfaction; the things that satisfy thirst are the only things that satisfy the desire, and they exist independently of, and are not dependent on, the mind’s internal “checklist.”