A Response to Searle

Anand Tyagi
Jan 6, 2020

John Searle’s paper “Minds, Brains, and Programs” explores the relation between computers and understanding. Specifically, it addresses whether a computer that appears to understand a language actually understands that language. Searle’s omission of a clear definition of the term “understanding,” however, makes his claim difficult to address and refute. I believe that by providing a slightly more concrete definition of what “understanding” is, we can investigate Searle’s claim more robustly and, potentially, refute it: we can show that computers may indeed be capable of understanding language. Before addressing some of the issues with Searle’s argument, let us first understand what his argument is.

Searle’s thought experiment involves a native speaker of English, who has no knowledge of Chinese, in an isolated room. The individual receives input in Chinese characters and produces output in Chinese by following a given set of rules written in English. In the experiment, the responses that the individual gives in Chinese are “absolutely indistinguishable from those of native Chinese speakers.” Additionally, the individual gives English responses to any questions asked of them in English. The individual can thus give equally good answers in both English and Chinese, but in the case of the Chinese, “[the individual is] simply an instantiation of the computer program.”

With this thought experiment, Searle claims that the appearance of understanding a language does not equate to an actual understanding of language. Furthermore, the computer’s ability to portray understanding does not, in any way, explain human understanding as the computer itself does not actually understand the language.

Searle’s claims are valid in some ways. His claim that the individual in the room does not understand Chinese is intuitively correct: at no point does the individual actually understand the language. However, Searle never really defines what it means for someone, or something, to “understand.” The closest we get to his definition of understanding is when he asks, “what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences?” To which he responds, “The obvious answer is that I know what the former mean, while I haven’t the faintest idea what the latter mean.” From this, we can gather that Searle believes that understanding a language means grasping the meaning of the words directly: there should be no need to translate the input into another language; rather, comprehension should be immediate. However, because he never gives an explicit definition, it is difficult to say what kind of system Searle would accept as one that can understand. From what Searle does say, we know that he draws the line between understanding and non-understanding at mimicry: machines only mimic the act of understanding rather than actually understanding. How, then, could we create an artificial system that passes Searle’s definition of understanding? We could not. Since any artificial system would, by Searle’s lights, merely be mimicking human understanding (the system itself not being human), we would never be able to make a system that satisfies his definition. This is where I disagree with Searle: I believe it would be possible to make a system that understands. But we first have to change our definition of what it means to “understand.”

Searle’s argument deflects any proposed solution because he conveniently leaves out a formal definition of what understanding is. As a result, I will present a more concrete idea of what human understanding consists of. Let us say that a human understanding of language consists of an immediate relation and connection to the ideas conveyed through a linguistic input. Understanding a language thus has less to do with processing the physical words than with reacting immediately to the ideas that the sentences, or any input, convey. With this proposed definition of understanding, we can now consider an alternative, but similar, thought experiment to Searle’s Chinese room experiment and the Robot Reply mentioned in the paper.

First, let us consider a human individual traveling to another country where no one but them knows English. While traveling, this individual uses a phrase book that translates any English phrase into the foreign language, and vice versa. This is not unlike the individual in the room in Searle’s thought experiment — the traveler knows nothing about the language and is simply following a rule book of sorts. Over time, however, the traveler starts to remember words like “thanks” and “hello” and no longer has to constantly refer to the phrase book. It is clear that the traveler still does not understand the language, but, over time, the traveler memorizes the entire book and is able to respond appropriately in every situation and to any sentence spoken to them. Would this mean that the individual now understands the language? Simply memorizing the rule book and responding appropriately at all times would presumably not make the cut for Searle. However, one aspect of the traveler’s grasp of the language that is separate from the rule book is the traveler’s ability to simply respond to events or actions — not spoken or written sentences — in the foreign language: for example, using a swear word or expletive in the foreign language when stubbing their toe. While this is a rather simple example, if we took this reactive view of the entire language, we could say that the individual understands the language, as they are now associating a feeling or state of being with the language. We can stretch this further to say that every spoken or written phrase the traveler responds to is in fact a reactive response, rather than a translation the traveler has to make within their head.

This thought experiment holds many similarities to Searle’s: it starts with the individual not understanding any of the foreign language and with the use of a rule book to translate from English to the foreign language, and vice versa. Where it diverges from Searle’s experiment is when the traveler starts to associate the language with direct feelings, rather than with a word or phrase in English. Additionally, at the end of this experiment, the individual can be said to understand the language. What I want to address is the point at which we can make the claim that the individual understands the language. I believe this occurs when the individual stops translating phrases into English and instead reacts to any statement said to them in the foreign language. Under Searle’s own criterion of knowing what the words mean directly, this would constitute understanding. If this is the case, then I believe we could easily make an artificial system that does exactly this without violating Searle’s beliefs.

If Searle’s main issue is with the program being written by a human, and thus lacking intentionality, we could, hypothetically, run a computer program that randomly generates sets of characters until it produces a program able to execute the behavior described in the thought experiment above. With this, we could effectively prevent any intentionality from being transferred to the program by a human. Now, with a program that acts as described above, we would have a program that behaves like the traveler rather than merely mimicking them: it learns all of the rules of the rule book and slowly commits the phrases to memory. Given that this is a thought experiment, imagine we could replicate the senses of the human body using mechanical sensors that the program could use. This is similar to building a robot like the one mentioned in the Robot Reply from Yale. Since the program is meant to act like the traveler, it would be able to use the sensors to develop reactions to events that do not require a specific linguistic input. Additionally, let us add that the program could physically rewire itself so that some inputs trigger an immediate response. After enough time, this program would become similar to the traveler at the end of the experiment: we would have a program that is able to use language to react to both events and language, and thus possess an understanding that would fall under Searle’s definition.
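As a purely illustrative sketch of that generate-and-test idea, consider the Python loop below. The respond() check, the echo behavior, and the attempt cap are hypothetical stand-ins of my own; a random search like this would, of course, almost never stumble onto a working program, so the sketch shows only the shape of the procedure, not its feasibility.

```python
import random
import string

# Toy, bounded illustration of the "program generated by chance" idea:
# keep producing random character strings, treat each one as candidate
# Python source, and stop if one happens to define a respond() function
# with the behavior we want. The echo check is a hypothetical stand-in
# for the far richer behavior the thought experiment imagines.

def behaves_as_desired(source: str) -> bool:
    """Return True if the candidate source defines respond() and it echoes input."""
    namespace = {}
    try:
        exec(source, namespace)      # almost every random string fails right here
        return namespace["respond"]("ni hao") == "ni hao"
    except Exception:
        return False

for attempt in range(1, 10_001):
    candidate = "".join(random.choices(string.printable, k=40))
    if behaves_as_desired(candidate):
        print(f"found a working program after {attempt} attempts")
        break
else:
    print("no working program found in this (finite) run")
```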

Now, Searle would likely give a response similar to his rebuttal to the Robot Reply, which is that “the addition of such ‘perceptual’ […] capacities adds nothing by way of understanding.” Additionally, he makes the point that while the manner of input is more diverse, the actual computation from the linguistic input to the output does not require actual understanding of the language. To this, I pose the point that the program I proposed in the thought experiment above is not, in fact, computing a response to the input. The point of mentioning the traveler swearing in the foreign language when stubbing their toe was to highlight that this reaction is immediate and does not require a calculated response. In my proposed computer program, since the program can physically change its circuitry to wire in immediate responses, a sensor being hit and sending a signal that elicits an immediate response is equivalent to a nerve sending a signal to the brain that elicits an immediate response when one stubs their toe. Thus, this sort of program, if we could construct it, could potentially refute Searle’s claim.

Now, let us consider a program that learns using reinforcement learning. With reinforcement learning, a program takes risks and presents potentially incorrect solutions, which are either “rewarded” or “punished.” In this manner, it is able to “learn” the best approach and the correct solution for a given input. If we have a program that uses reinforcement learning while talking to a human, it would be much like a child trying to learn a language; it would make mistakes and gradually correct them over time. However, would Searle consider this understanding? Searle would probably respond by stating that the program is, in fact, not understanding the language but rather emulating understanding. To this, I respond with a question: is the child understanding the language? The child is learning how to build models of what is grammatically correct and what is not through trial and error, much like the program. The only real difference is that one was programmed by a human, while the other was programmed by chance (according to the scientific perspective). To account for this, I once again propose we create the program by randomly generating text until we produce one that executes a reinforcement learning algorithm. Thus, with reinforcement learning, we could potentially create a program that behaves just as a human does, and thus claim that it understands the language as well as any other human, or at the very least, as well as any child.
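To make the reward-and-punishment loop concrete, here is a minimal sketch of a reinforcement-style learner in Python. The prompts, the candidate responses, and the idea of an “appropriate” reply judged by a simulated interlocutor are all hypothetical stand-ins of my own, not anything from Searle’s paper; the point is only the trial-and-error shape of the learning.

```python
import random

# A toy reward-driven learner: it is repeatedly shown a prompt, guesses a
# response, and is "rewarded" or "punished" depending on whether a simulated
# interlocutor accepts the guess. The phrases below are purely illustrative.

PROMPTS = ["hello", "thank you", "goodbye"]
RESPONSES = ["hi there", "you're welcome", "see you later"]
APPROPRIATE = {"hello": "hi there",
               "thank you": "you're welcome",
               "goodbye": "see you later"}  # what the interlocutor accepts

# value[prompt][response] estimates how well each response has worked so far
value = {p: {r: 0.0 for r in RESPONSES} for p in PROMPTS}
EPSILON, LEARNING_RATE = 0.1, 0.5

def choose(prompt):
    """Mostly exploit the best-valued response, sometimes explore a random one."""
    if random.random() < EPSILON:
        return random.choice(RESPONSES)
    return max(value[prompt], key=value[prompt].get)

for step in range(2000):
    prompt = random.choice(PROMPTS)
    guess = choose(prompt)
    reward = 1.0 if guess == APPROPRIATE[prompt] else -1.0  # reward or punish
    # Nudge the estimate for (prompt, guess) toward the observed reward
    value[prompt][guess] += LEARNING_RATE * (reward - value[prompt][guess])

# After enough trial and error, the learner's best guess for each prompt
# converges on the response the interlocutor accepts.
for p in PROMPTS:
    print(p, "->", max(value[p], key=value[p].get))
```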

Searle seems to have a rather firm belief that computers cannot, and never will, truly be able to understand language. But while his statements are not exactly false, they are not specific enough either, which makes them less than convincing. While a program in general may not be able to understand language — due to not having intentions, or otherwise — if we go about making that program in a way that removes any indirect intentionality from the human, then we could potentially subvert Searle’s claim. I believe computers will one day be able to understand language, but only if understanding does not rely on the program having something akin to a “soul.” If language, and understanding, relies on a consciousness that is indeed based in a non-physical and potentially spiritual realm, computers may never be able to fully understand language.
