3a. Searle, John R. (1980) Minds, brains, and programs
What's wrong and right about Searle's Chinese room argument that cognition is not computation? (Computation is just rule-based symbol manipulation. Searle can do that without any idea what it means.)
Reading: Searle, John R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3 (3): 417-457
Note: Use Safari or Firefox to view; the link does not work in Chrome.
I believe that Searle’s claim that syntax (aka rule-following) is separate from semantics (“understanding”) is convincing only for short-term exchanges. When thinking of it in the context of the lifelong T2, the argument becomes more difficult to sustain. A system that could remain indistinguishable from a human for a lifetime would need to keep up with how humans converse (new slang, words, contexts, references, emojis, etc.), and language evolves too quickly for any one pre-set list of rules to anticipate future input. And the rule-book can’t just endlessly update. From my understanding, to keep up, the system would either need constant external updates from humans (which means it’s not autonomous, which goes against the foundation of the test) or it would have to rely on the ability to invent and revise rules itself, which would be more like learning than passive rule-following, which Searle wanted to deny…
You make a very interesting point about extending Searle's claim beyond short-term exchanges. I agree that in order to remain indistinguishable from a human over decades, a system could not rely on a single set of rules, but rather would require something like a learning system (to handle new slang, cultural references, evolving context, etc.). However, Searle would argue that even if the system can revise and create rules through a learning-like process, that still does not guarantee understanding. His Chinese Room thought experiment is designed to show that no matter how sophisticated the rule manipulation gets, the process is still syntax without semantics. Making the process dynamically updated rather than pre-set does not add a level of meaning to the symbols.
Elle, of course passing T2 requires the capacity to learn, update, remember, and integrate, whether across minutes or decades. That's part of cognitive capacity, isn't it? Conversations are not just rote local verbal exchanges of static words. And that's true of sensorimotor (T3) capacities too. It's also already true within a 1-hour exchange with ("cheat'n") ChatGPT today!
I strongly recommend extended exchanges with GPT for everyone, so you get a realistic sense of how much T2-testing calls for. Since you are not using the paid version, use your imagination when thinking of what a version with memory across individual day-to-day interactions would be like.
Emily, you are right that none of this would have any impact on Searle's Argument or the syntax/semantic distinction. Learning and memory are essential parts of T2 (and T3) capacity. And T2 candidates' internal mechanisms would be executing computations (symbol manipulations), including updating the recipe as part of the recipe -- if C=C were true...
***EVERYBODY PLEASE NOTE: I REDUCED THE MINIMUM NUMBER OF SKYWRITINGS. BUT THE READINGS ARE **ALL** RELEVANT TO AN OVERALL UNDERSTANDING OF THE COURSE. SO, EVEN IF YOU DO NOT DO A SKYWRITING ON ALL OF THEM, AT LEAST FEED EACH READING YOU DO NOT READ TO CHATGPT AND ASK IT FOR A SUMMARY, SO YOU KNOW WHAT THE READING SAID — OTHERWISE YOU WILL NOT HAVE A COMPLETE GRASP OF THE COURSE TO INTEGRATE AND INTERCONNECT FOR THE FINAL EXAM.***
Searle claims that since computers only manipulate symbols (computation) without explaining how those symbols get meaning, they cannot genuinely think. But I think this sets a double standard. Humans also can’t fully explain how meaning or consciousness arises in our own cognition; sometimes we just experience it. So if we demand that AI must provide a complete account of meaning to count as “thinking,” while humans don’t meet the same requirement, the standard is unfairly asymmetrical.
Rachel, what is Searle's argument? All he is pointing out is that, if C=C were true, then a (Chinese) T2-passing recipe would produce (Chinese) understanding in any hardware that was executing the T2 software. That is what would be true, if C=C (computationalism) were true.
Searle’s argument is that even if a program can pass T2, this still doesn’t mean genuine understanding is happening. He agrees that if C=C were true, then a T2-passing program would have understanding. But the whole point of the Chinese Room is to show that passing T2 only shows correct symbol manipulation, not semantics. The person inside can follow the recipe and output the right symbols, yet still not understand Chinese. So Searle’s thought experiment is meant to refute the claim that computation alone (C=C) is sufficient for understanding.
Rachel Y, correct. Can you explain to kid-sib what "Searle's Periscope" is, and what role it plays in his CR Argument?
Searle's Periscope is a thought experiment that demonstrates that computers do not understand in a literal sense (the way we humans do). In this situation, a computer can answer questions in Chinese perfectly. We want to know if this is evidence of an understanding of the Chinese language or if the computer is merely successfully carrying out instructions. To test this, Searle imagines a person, an anglophone who does not understand Chinese, who is given a rulebook. They are asked to answer the same questions given to the computer earlier. The rulebook tells this person how to answer (e.g., this squiggle always follows that squoggle); this allows them to provide the right answers, too. At no point in this process is true understanding of Chinese required. So long as the rulebook is clear, you can apply the rules and get away with properly answering Chinese questions without ever learning the language. By demonstrating that someone can successfully carry out this task without understanding Chinese, we disprove the notion that a task successfully carried out by a computer such as this one indicates understanding in a literal sense.
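To make the rulebook idea above concrete, here is a minimal sketch (in Python; the rule table, the symbol strings, and the answer function are invented for illustration, not taken from Searle's paper) of pure shape-to-shape lookup: the program returns the "right" reply to each input while no part of it has any access to what the symbols mean.

# A toy "Chinese Room rulebook": a lookup table from input symbol strings
# to output symbol strings. The entries below are invented placeholders;
# the point is only that matching is done on the shape of the symbols,
# never on their meaning.

RULEBOOK = {
    "你好吗": "我很好",            # "if you see this squiggle, send back that squoggle"
    "你叫什么名字": "我叫王明",
}

def answer(input_symbols: str) -> str:
    # Look the input shape up in the rulebook; no step here involves
    # knowing what any of these strings mean.
    return RULEBOOK.get(input_symbols, "对不起")  # default reply is just another shape

print(answer("你好吗"))  # prints the "right" answer, with zero understanding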
What caught my attention was Searle’s point that computation is “observer-relative.” In other words, a computer only counts as computing because we interpret its symbols as meaningful. That makes me wonder if AI is really thinking at all, or if it’s just reflecting back the meaning we give it. Searle seems to say this means AI can never truly understand. But I’m not fully convinced: why couldn’t a different kind of system, like silicon instead of neurons, also create real understanding?
Randala, look again at what computation -- and C=C, computationalism -- mean. Then explain Searle's Chinese Room Argument to kid-sib. (But read Week 3b first.)
Searle believes that machines can behave like humans by symbol manipulation, without any actual understanding of meaning. He presents the Chinese Room thought experiment, in which a person inside a room responds in Chinese through rule-based symbol manipulation, appearing as if they understand Chinese when they don’t at all. Like the person inside the room, this shows that machines can behave as if they understand when they don’t. What I found most interesting from this reading was the Robot Reply counterargument. The Robot Reply says that if you “put a computer inside a robot, and … actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating drinking -- anything you like”, the robot must be understanding. Searle rejects this, claiming that the robot is still just doing symbol manipulation, with its inputs and outputs merely coming from different parts of the robot. I’m now left confused about how we can defend T3 given Searle’s reply to the Robot Reply. Couldn’t Searle continue to argue his point against T3 using the same logic?
Annabelle, read 3b and then explain what "Searle's Periscope" is.
Searle’s Chinese Room Argument is a thought experiment designed to show that cognition is not the same as pure computation. The experiment involves a person following rules to manipulate Chinese symbols in a room without understanding them. This experiment is thought of as a periscope, where Searle uses his own understanding to bypass lengthy theoretical arguments. Therefore, since computation is implementation-independent (the same code can run anywhere), one can “look through the periscope” of their own mind and see that the symbols are being manipulated without true meaning. The argument is correct in showing that computation alone is not sufficient for understanding. However, it focuses too much on the man in the room rather than on the room and everything else that makes it up as a whole. Searle’s Periscope shows that a T2 system is not enough for understanding, but it ignores the possibility that computation plus symbol grounding (via sensors, movement, and real-world interaction—T3/T4 systems) could generate meaning.
I just wanted to add something new here, because upon further reflection, my response above feels similar to others. Returning to Searle’s paper, I began wondering: if Searle’s Periscope shows that symbols can be manipulated without true meaning, and if meaning requires computation plus symbol grounding, is there still something more that relates to human consciousness beyond grounding—and even beyond T4? In the Chinese Room, Searle shows that pure computation lacks understanding, and with the Periscope, he argues we can “see” that a T2 system is insufficient. But even if T3 or T4 systems ground symbols through real-world interaction, I wonder whether human consciousness involves something uniquely human that science cannot fully capture. I do understand that T3 may be the highest level we can empirically reach, but could there be a dimension of consciousness beyond grounding? And how would we possibly figure that out? If ever?
Rachel, I appreciated reading your skywriting and would like to contribute something: for kid-sib, symbol grounding links abstract symbols to sensory experience and action so they become meaningful. From my understanding, Searle argues that symbols gain meaning through intentionality, not simply computation + grounding. In his reply to the “Robot Reply,” he claims that sensory and motor input/output doesn’t produce understanding, because it doesn’t produce intentionality if it only manipulates symbols formally (p. 7). Thus, Searle doesn’t ignore grounding but sees it as insufficient without intentionality. I believe intentionality links computation to symbol semantics in humans, but the question is how semantic meaning could be built into a formal symbol system so that the system itself, not just a third party, understands (i.e., taking the Chinese Room experiment, system = Searle, third party = a fluent Chinese speaker reading Searle’s written responses).
After reading What's Wrong and Right about the CRA, I have to correct my own skywriting. As mentioned in 3b, "intentionality" is a weasel-word. To me it made lots of sense because, for example, I understood Searle's intentionality the same way I understood not being able to learn a language while sleeping. I guess in the latter it refers to "paying attention" and the importance of paying attention to any stimulus. In the CRA example, Searle is "paying attention" to the stimuli, i.e. the rules he is following, but that doesn't make him understand, and I am unsure what would qualify as "intentional".
Finally, I want to add that I did not understand Searle to be denying that computation has ANYTHING to do with our cognition; I thought his argument stopped at "computation is not all there is to cognition".
Rachel H, first of all, Searle's Periscope (implementation-independence) only works on computationalism (C=C) because only computation is implementation-independent, so, by memorizing and executing the Chinese T2-passing recipe, Searle could report (in English) that he still doesn't understand Chinese. Memorizing and executing the code is crucial, because that's how Searle "becomes" the system: then there's nothing else to point to as "the system" other than Searle.
Grounding means sensorimotor capacity in the world. Searle cannot become the T3 robot, because sensorimotor capacities are not implementation-independent computation, so there's nothing Searle can do to become the robot, and hence he has no Periscope on whether the T3 robot understands.
I don't know what you mean by a "dimension" of "consciousness" (which is just a WW for feeling). Turing sensibly says his method only works for observable evidence (T2/T3 doing-capacity, and perhaps also T4 for observable internal doings); but that's the end of the road, because feeling is unobservable to anyone but the feeler. That's the O-MP -- and it leads to the HP, because each of us knows that we feel, but CogSci can't explain how or why. See the discussion of evolution in the skywriting on 3b.
Emmanuelle, Searle concluded from his Argument that he had shown that cognition is not computation at all; but all he had shown was that cognition is not all computation.
Yet that was enough to refute C=C, and C=C is what Computationalism claims -- not that "C = part-C" (which is true, but leaves Cognitive Science with most of its work still to do: T3, and maybe some T4).
And, yes, "intentionality" is a WW, and a particularly weaselly one, conflating "intending" with "meaning". He didn't need all that fuss: all he wanted to show was that becoming the implementation of a purely computational recipe for passing the Chinese T2 did not make him understand (hence it would not do it for any other implementation of that same code). Everyone knows what it feels like to understand (or not understand) a language: that's all Searle needed to show that C=C is wrong.
For an ironic twist (on the fact that Searle did not fully understand his own Argument), see the "Intentional Fallacy" in Google, Google Scholar or Wikipedia -- or just ask ChatGPT...
The core of Searle's argument is that AI lacks understanding and intentionality, which proves that human cognition cannot be reduced to computation. I had difficulty fully accepting this argument, as "understanding" and "intentionality" were never effectively defined.
On the other hand, one section of the argument I found quite compelling was that computation alone is insufficient to give rise to cognition, specifically that actual human mental phenomena might be dependent on actual physical/chemical properties of actual human brains. I enjoyed the anecdote that Annabelle also pointed out from “The Robot Reply”: that a computer capable of “perceiving, walking, moving about, hammering nails, eating drinking…would…have genuine understanding and other mental states,” which, intended as a counterargument, actually supports Searle’s ideas.
I think you make a strong point about the ambiguity in how Searle uses “understanding” and “intentionality,” and I agree that the “Robot Reply” ironically strengthens his view that embodiment matters. This also makes me wonder: if embodiment and causal powers are key, how do we decide which features are essential for understanding and which are just incidental? Would memory, perception, and action be enough?
I think that Searle's belief that simply running the right program only simulates true understanding is correct, and the Chinese Room Argument demonstrates this quite well. There is no true understanding of the symbols while manipulating them with a guidebook. I was the same when taking my first statistics classes in CEGEP, simply using the formula without understanding what the formula meant, and I got an A despite this. However, Searle’s idea that this programming and structure is completely irrelevant to the mind if it’s not biological is something I disagree with. If we reach the passing of the T3 Turing Test, where the LLM interacts with the real world in meaningful ways, then programming can safely be said to be part of the process of creating a mind, by combining it with sensation and experiences.
It is an interesting thought that a fully functioning T3 robot's program could help it create a ‘mind’ by combining its experiences and sensations. As a computer science student I find it difficult to get behind this, just because I know some of the mechanics of machine learning. An ML system is made of layers of units that take in numbers, do operations on the numbers, and pass the results to the next layer. So when a system ‘learns’, it adjusts the ‘weights’ of the connections between these units so that it predicts better the next time. This process is repeated until the system gets good at whatever task it is trying to complete. Theoretically, when a T3 system takes in information from its environment, it would adjust its weights based on that experience so it can make predictions about how to better interact with that environment later. However, the T3 machine would have no understanding of the environment or the things in it. All it sees is the symbols being passed to it that are making it adjust its weights (also symbols), which really just boils down to the Chinese Room thought experiment: it is just taking in symbols and outputting symbols, with no actual meaning being assigned to the symbols.
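To make the weight-adjustment point concrete, here is a minimal sketch (plain Python; a single unit rather than full layers for brevity, with made-up toy inputs, targets, and learning rate, not anyone's actual model) of error-driven learning: every step is arithmetic on numbers, and nothing in the update assigns those numbers any meaning.

# Toy sketch of "learning = adjusting weights": a single linear unit
# trained by error-driven updates on made-up numbers. Every operation
# is symbol (number) manipulation; no meaning is attached anywhere.

inputs  = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]  # toy "sensor" readings
targets = [1.0, 0.0, 1.0]                       # toy "correct" outputs
weights = [0.1, -0.2]                           # connection strengths
lr = 0.1                                        # learning rate

for epoch in range(200):
    for x, t in zip(inputs, targets):
        y = sum(w * xi for w, xi in zip(weights, x))  # weighted sum (one unit)
        error = t - y                                  # prediction error
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]  # nudge weights to reduce error

print(weights)  # the numbers have changed; nothing has been "understood"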
Yes, I agree that computation can be part of cognition without being all of it. I think that what Searle was saying was that a machine could “think” if it has the same causal properties as the brain, whether it is biological (but different from us), artificial (made by us), or something else. According to Searle, intentionality does not arise from the sequence of the programming itself but from beyond it (even in the brain). We would not have to reach T4 and recreate a brain to have a potential candidate that can think. Even though Searle does not mention T3 robots here, I do think that sensorimotor robots could eventually come close to “thinking”. They could ground symbols in the real world (which is missing from T2 pen-pals) and maybe, from there, “cognition” would emerge if the robot is made of physical stuff that has the right causal properties.
What I find most interesting in Searle’s argument is how it flips the Turing Test on its head. Instead of asking “Can the system fool an outsider into thinking it understands?”, Searle asks “What does it feel like inside the system?”, to which he answers: nothing. It shows that behavioral success doesn’t guarantee comprehension. This makes me wonder about our own learning: when we memorize material for an exam without grasping concepts, aren’t we basically a mini Chinese Room? Passing the test isn’t the same as understanding, and that gap seems central to both humans and AI.
(Follow-up comment) As I read through the posts, I realized that this is just a different version of the problem of other minds. Searle could “look inside” the Chinese Room because he knew from his own perspective that he didn’t understand Chinese, even though his outputs fooled outsiders. With machines, we can’t do that; we only ever see their behavior. Even if a robot grounds words in sensation, we’d still be stuck asking: Does it really understand, or just act as if it does? That’s the same leap we make with people, animals, plants, and even sometimes ourselves. We infer understanding from behavior, but can’t directly access true objective experience.
I actually understood this paper as arguing that this is not an instantiation of the Problem of Other Minds. The question here is not how one can know that another is thinking, but rather whether the object in front of us understands at all, or whether it possesses the intentionality necessary for that understanding.
Rather, it suggested that the lack of intentionality implicit in the design of a computer program entails that it cannot logically be conscious and thinking. Indeed, what Searle seems to argue is that understanding entails more than simple symbol manipulation through an algorithm, which is what a computer does. Rather, understanding comes partly from being able to grant meaning to these symbols, or at least understanding that they can bear a form of meaning.
Sofia, you are correct in saying that the aim of Searle’s thought experiment was not to assess the other minds problem – that is, whether others have thoughts given that we can only observe their behaviour. However, the fact that Searle used his own first-person introspection rather than that of a machine to manipulate symbols following a specific algorithm demonstrates that there is some slippage in his reasoning. He concludes that cognition is not computation yet disregards that his subjective sense of “not understanding” does not guarantee that the entire system also lacks understanding. Therefore, computation alone may not be sufficient for cognition, but it still may be part of it.
I enjoyed reading Searle's work more than the two previous writings. It was easier to understand, and I think this is because he uses simpler language. I prefer Searle's comparison of strong vs. weak AI to Turing's T1, T2 and so forth, but as I've read through the threads of previous comments, combining both Searle and Turing develops a more advanced understanding. I think Searle highlights a limitation of Turing's test; the test checks whether a machine's answers can seem human, but Searle explains that thinking requires the brain's causal powers, which seems to be much more than clever programming.
“As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements”
I found this line interesting because it highlights his claim that syntax alone, aka formal symbol manipulation, can never amount to real understanding. Though I do agree with Searle’s claim that computers don’t understand in the literal sense, my question is: even if we accept that the system doesn’t “understand” in a human sense, could there be a more minimal or functional kind of understanding that emerges from consistent performance across contexts? Do we risk defining understanding too narrowly if we only allow it to apply to human-style cognition?
I like your point about "minimal" or "functional" knowledge, because this is where the debate about Searle's Chinese Room really starts. Searle demonstrates that he can follow rules for manipulating Chinese symbols without ever knowing what they mean, and that is so; syntax is not semantics. The video makes the same point: Searle claims that just because someone runs a program doesn't mean they have a mind. Yet the twist here is: if a system can reliably and correctly respond to all questions in all contexts, including in real conversations, doesn't this at least appear much the same as understanding, even though it might not precisely be human understanding? Maybe it is reasonable to consider performance and not subjective experience. Harnad and others have argued that real cognition must be situated in the real world, by grounding symbols in sensorimotor experience; otherwise they are meaningless. So Searle is right that symbol manipulation alone is not understanding at all. But your question is a good one: perhaps there are degrees of understanding applicable to a machine that are useful, although they may not be what we mean by the term "understanding."
I also wanted to respond to your comment because, as I was reading Searle’s paper, I kept thinking about the phenomenon of talking dogs, of which I have seen multiple videos on social media. To contextualize: in such videos, you can see dogs communicating with their owners through buttons that shout a word or expression as they press them. The owner may ask a question verbally or via the buttons, and the dog answers or hails their humans by pressing the buttons, sometimes even making complete sentences. Here, although we can agree that these talking dogs are able to communicate with humans by learning that each button can lead to various situations or results, and manipulating them, I think it would be harder for us to claim that these dogs know or speak English, in the same way that neither Searle nor the computer does with Chinese. So I wonder if your point about a different kind of understanding, different from the human kind, a “more minimal or functional kind”, could apply here and thus expand our understanding of “understanding”.
Maëla, I like your dog example a lot — it really brings out the difference. But with the dog, even if it doesn’t know English, its button presses seem connected to real feelings and needs, not just symbol manipulation. That makes me think the dog actually does have some kind of genuine understanding, even if it looks different from ours. And Shireen, I think that Searle would probably say that computers can’t “sort of” understand — for him it’s all or nothing, and machines just "aren’t in that business". He actually uses the example of himself understanding English fully, French a little, German even less, and Chinese not at all. The point is that there are degrees when it comes to people, but computers fall on the other side entirely — more like a car or an adding machine, which don’t understand anything at all.
Searle's Chinese Room Argument prompts me to think about the different ways humans learn, notably rote learning. In the same way as a computer (and the English speaker in the Chinese Room), students sometimes memorize large volumes of material to correctly reproduce answers on their exams without actually understanding the concepts (I have been guilty of it). Humans sometimes mirror Searle's argument that computers produce appropriate outputs without semantic understanding. But how does our brain distinguish between memorizing and inherently understanding the information? Perhaps exploring the distinction might not just clarify how we learn but could illuminate a critical step in the emergence of consciousness: how does one transition from syntactic to semantic processing? If we can pinpoint where understanding “switches on,” maybe we could be closer to solving the hard problem and designing machines that do more than just simulate comprehension.
Referring back to the Tuesday, Sept 16 class: are Searle's Chinese Room Argument and the difference between memorization and understanding really about explicit vs. implicit cognition?
Rote learning seems to be a special case, in which the computational agent (in this case, the student) is both understanding and not understanding the symbols which it manipulates through the algorithms.
If the student memorizes the phrase "computation is the manipulation of symbols through the use of an algorithm" and understands each of the symbols individually (e.g., they know what the individual words mean), but does not understand what the phrase entails, or is not able to apply the phrase's meaning to a situation they are evaluating, then it seems that they are stuck in an odd in-between. A kind of partial understanding...
What I find critical in Searle’s Chinese Room argument is how we define “understanding”. It reminds me of a film about someone in a Nazi camp pretending to know Polish and thus avoiding persecution. At first, he only manipulated rules and symbols he invented, with no real grasp of the language. But over time, through repeated use, he actually started to understand and command ‘Polish’. This seems very similar to the Chinese Room case: even if the person starts out with no understanding, could genuine understanding eventually emerge through practice? If so, Searle’s claim that computation can never lead to understanding may be too strong.
Such an interesting insight! I really like the point that emerges from what you've said, namely that practicing simulating a behavioral output might actually lead to an understanding of said output. So I suppose that if we put it in the perspective of 'understanding feels like something', which was discussed in class today, then could it be that feelings, in artificial intelligence, emerge as a result of repeated output (i.e. learning)? But it seems like grounding the feeling is paramount for any sort of learning to happen, and understanding to emerge: say the prisoner you mentioned was deprived of any sensory information, such that he could not encode the context in which he pretends to know Polish (he just does, for the sake of argument); I presume it would be much more difficult for him to get a sense of why he's pretending. Can we assume such things about AI, namely that they're divested of much, if not all, sensory information? It depends on our definition of sensory information, I suppose...
Rachel, your point aligns with the comment I wanted to add about language learning and the Chinese Room case. One common myth about linguists is that they know many languages (like translators), when in fact they study languages' phonetics, phonology, syntax, semantics, etc. Just like the person in the Chinese Room, linguists do not need to be fluent in the language studied, since they only look at its rules. Although it would be natural for a linguist studying a certain language to want to learn how to speak it out of interest, and even to learn some vocabulary or pronunciations, fully understanding that language requires much more learning and practice. Therefore, I think the difference between the movie character you mentioned who learned Polish and the person in the Chinese Room is that the former had an adequate environment to interact with and eventually understand the language, whereas the latter only followed rules about the writing system of the language.
Searle (1980) argues that computer programs simulate rather than duplicate thinking, rejecting the strong AI claim that a properly programmed computer could have a mind. His Chinese Room thought experiment shows that a person with no knowledge of Chinese could follow rules to produce fluent responses without understanding a word. Likewise, programs only manipulate symbols syntactically and lack the brain's causal powers that generate intentionality—the mental states directed towards something, like desires and beliefs, that give thoughts meaning. Thus, Searle maintains that understanding thinking requires looking strictly at the biology of the brain itself, whereas Harnad (2001) suggests that we can also consider robots with senses that can ground symbols in real life to understand cognition.
What heavily shifted my perspective is Searle’s note that “[n]o one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything?”. I would argue that, whether or not his case that artificial intentionality cannot be produced unless the causal mechanisms of the brain can be replicated is true, his argument about simulations is very sound. Previously, I was for the argument that a simulation of the human brain may be a plausible means by which we could understand cognition (intentional thinking leading to memory retention, learning, etc.). However, Searle emphasizes the idea that simulation is not duplication; a program simulating the brain would only be manipulating symbols exclusively meaningful to the programmer and the interpreter.
I agree that Searle’s point that “simulation isn’t duplication” carries real weight. He keeps coming back to the idea in his article that a program only has “a syntax but no semantics.” In other words, you can follow symbol rules and get the right input/output match (like in the Chinese Room), but that doesn’t mean there’s any real understanding. And even a perfect brain-simulation of synaptic firings (done with water pipes) still seems to lack the brain's "causal powers" that are important to intentionality. So his objection stands: a program isn't necessarily a mind.
Where I’m less convinced is the move from “program alone isn’t enough” to “therefore no computational system could ever understand.” Searle’s replies try to neutralize every upgrade (systems, robot, brain-simulator), but each upgrade also changes the causal story, like embedding, perception/action loops, learning, and error-driven repair, introducing the very worldly coupling that gives symbols content. If a system were causally isomorphic to the brain at the appropriate grain and nested in dense interactions, it's not clear we'd want to continue calling it a "mere simulation." Searle thinks the brain’s unique powers come only from biology, while many others think those powers come from patterns or functions that could exist in different kinds of systems.
So what Searle gets right is that syntax by itself won’t ever equal understanding, and behavior by itself (like in the Turing Test) can mislead us. What he gets wrong is pushing the fire/rain analogy too far, since that treats cognition as purely material instead of organizational. If the right causal organization can be built, even in silicon, then what looks like “simulation” might actually be replication. The key issue is figuring out which causal features really matter.
I want to bring up a different perspective I have on “understanding”. Searle’s Chinese Room assumes flawless rule-following, but in real human understanding, our grasp often becomes apparent through the mistakes we make (like mishearing a word or misapplying a concept) and then repairing/refining our mistakes into the “right outputs” through interaction. To me, error reveals that we are not just navigating syntax or rules but also semantics. Understanding and learning involve our flexible capacity to make mistakes, reinterpret, and recover. Perhaps the true mark of cognition isn’t perfect rule execution but resilient handling of the mistakes we make. Flawless performance indicates rigid symbol manipulation and doesn’t necessarily equate to understanding. Considering that humans rarely perform without error, maybe looking at imperfection and the repair of imperfections may be a better test for genuine understanding than flawless symbol manipulation.
Searle’s “Chinese Room” offers a compelling argument that programs can manipulate syntax without ever achieving semantics. The fact that he insists that cognition is tied to the biological substrate of the brain highlights an important challenge for AI research. Yet I think the thought experiment overlooks how much of human understanding is not private but relational. We rarely construct meaning in isolation; it emerges through our ongoing interactions with others and the world. It also depends on much more than just spoken words or phrases, as facial expressions, mannerisms, tone and much more come into play.
This raises the possibility that “understanding” is not simply a matter of internal causal powers but of participating in networks of communication. A machine sealed inside a room may never understand, but a system embedded in rich, dynamic exchanges might begin to approximate something closer to it. If meaning is at least partly emergent from interaction, then perhaps the more appropriate question is not “can machines think?” but “can machines participate?”.
I agree with you from a societal point of view, as we often use contextual clues and the movements accompanying language to interpret it. For that to be possible, we still need to understand the semantics of the majority of words by themselves and to understand what other people are saying. A computer akin to the one in the CRA that has someone answering it still doesn't understand what has been said to it; it will only use another rule from its set to answer the question. The CRA does discredit C=C, which entails that a computer cannot understand solely by being in a computational state. It would need to somehow be able to integrate the semantics of those interactions and not solely "plug" them into a recipe to produce an answer, in my opinion.
A lot of previous comments touch upon Searle’s use of ‘understanding’, especially how he argues, through his Chinese Room argument, that a strong-AI program based on a formal system cannot ‘understand’. How Searle uses the term ‘understanding’ struck me as very similar to how we talk about the hard problem and what it means for a machine to feel the way we feel. I can’t help but feel these are two closely connected topics: would figuring out whether and how an artificial machine may or may not ‘understand’ help bridge toward the question of feeling in these machines? Are the questions surrounding both also somewhere outside our realm of explanation? To borrow from class, are we out of degrees of explanatory freedom to fully explore machine ‘understanding’ at a certain point?
What stands out to me in Searle’s critique is the way it shifts focus from appearances to what’s happening inside the system. Turing’s test is about whether a machine can imitate human behavior convincingly, but Searle argues that imitation isn’t the same as genuine understanding. The Chinese Room shows that symbols can be manipulated flawlessly without anyone inside grasping their meaning. What’s unsettling is how close this feels to our own learning. When we memorize for an exam, we often work with symbols (words, formulas, etc.) without fully connecting them to deeper concepts. It raises the question: how much of human understanding is truly comprehension, and how much is just clever symbol use? That uncertainty seems just as important for thinking about ourselves as it is for thinking about AI.
Searle’s arguments to disprove that strong AI explains human cognition evoked a thought about the problem of other minds discussed in class today. Searle, using the Chinese Room argument, supposes that T2 can be passed using computation by following a certain set of rules. Wouldn’t this mean that, from one’s own perspective, it is possible that everyone else other than the self is a T3 robot? This would mean that everyone around us could potentially use computation with a very large set of rules to react to any situation, without cognition. This further makes me question whether it is possible that nothing in the world has meaning until we decide to give it meaning, because, as we know, T3 robots do not get any understanding out of the process of computing, although they might seem like they do (just like in the Chinese Room argument). Therefore, coming back to the problem of other minds, this would mean that it is a possibility that the self gives meaning to all.
“the computer program is simply irrelevant to my understanding of the story” A computer might answer questions or translate languages perfectly, but it doesn’t know what it’s doing. The one thing I don't know if I agree with about the argument is that maybe the ‘system as a whole’ does understand, and not just the person in the room. Modern AI like neural networks doesn’t just follow rules; it learns patterns in ways that might actually resemble a kind of understanding.
Searle mentions that he, being a certain biological organism with a certain biological structure, is “causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality.” Many of these causal qualities fall under the “cognition” umbrella, though Searle spends most of the paper arguing, via the Chinese Room Argument, that computer programs lack “understanding”. However, his main goal, as stated at the beginning of the paper, concerns the consequences of intentionality in organisms vs. computer programs. He argues that computer programs can’t be intentional on the basis that they do not understand in the CRA; but can someone be intentional about something without understanding it? And where would intentionality fall in the T0-T5 classification?
Searle is implying that when a person, in the context of his Chinese Room argument, follows rules to move symbols and characters around, they are only doing syntax (manipulating shapes and patterns). But to really understand, a person needs semantics (the meaning behind those characters). The point he tries to make is that just following rules doesn’t give you meaning. However, I have a slight pushback to this argument: in his example, from my understanding, the person is located in a closed system where meaning can never appear, because nothing connects the symbols to real life. But outside that setup, people can learn and adapt. For example, a child learns the word “dog” by hearing it while actually seeing and playing with dogs. That is, in my opinion, a big part of what allows symbols to take on meaning rather than staying just empty shapes. So his Chinese Room example makes sense in its own context and to prove a point; however, it falls short when it comes to a real-life setup.
Unless I am misunderstanding what you are saying, I believe that is what Searle is arguing. Strong AI is just like that man in the box. Unlike us, who can see the dog and then associate it with the symbol, strong AI only has those symbols to manipulate before giving you an output. The argument goes that it can never understand "dog", only how to manipulate the word "dog" in a sentence according to its program rules. So you are right to say that outside the box you are able to learn, but strong AI is unable to leave this box. So it only has those empty shapes to work with.
In “Minds, Brains, and Programs,” John Searle challenges the premise of “strong AI”: the belief that running the right computer program could by itself produce real understanding or consciousness. Using his famous “Chinese Room” thought experiment, he argues that symbol manipulation (syntax) does not equate to comprehension (semantics). Even if a person or machine gives perfect answers in Chinese, they might still understand nothing. Searle critiques several defenses of strong AI, such as the systems reply, the robot reply, and the brain simulator reply, insisting that none add genuine intentionality. He concludes that intentionality comes from the brain’s actual physical and biological processes, not from abstract computer programs, and that real understanding would require recreating those processes themselves rather than just imitating their appearance.
The core, as I understand it, is that a machine purely using and interpreting symbols is not enough for understanding; similarly, when I copy what someone else is saying in another language, it doesn't mean I understand the language. However, when Searle mentions that a man-made machine could still think if we could replicate all the inner causes of the human brain, I struggle to follow and to differentiate it from the brain simulator reply. I understand that a man replicating neuron activations with water pipes in order to produce Chinese output does not thereby understand Chinese. But if we were to "produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours", why does Searle believe that this one works while the former didn't? Is it just because the former is a simulation that it does not work? If so, why the distinction between this supposed biological copy and a robot simulation?
In the combination reply, Searle asserts that if a robot’s behavior were explained by the fact that a man was simply manipulating formal symbols, then we would never regard this robot as a genuinely thinking being, but rather as a “mechanical dummy.” My question concerns how this extends to machines that pass the Turing test. In my opinion, a robot that fits the qualities described in the reply (a brain-shaped computer, synapses firing, behavior indistinguishable from that of a human, etc.) would be a convincing example of strong AI (and would presumably pass the Turing test). Therefore, if robots built with brain-like structures and humanlike behavior could convincingly pass the Turing test, why does Searle still deny that they have real understanding? I feel like the answer has something to do with the “man inside,” but some clarification would be greatly appreciated.
I get why Searle’s Chinese Room feels convincing — if all you’re doing is moving symbols around, it doesn’t seem like there’s any real understanding going on. At the same time, I’m not sure his argument works if we think about a system that could actually learn and adapt over time. To pass T2 long-term, it would have to keep up with new slang, contexts, and references, which feels different from just following a fixed rulebook. I’m still not sure, because even if the system can adapt like that, does that mean it understands, or is it just good at mimicking?
The Chinese Room argument is a compelling argument against computers being able to think or feel. Indeed, if a computer is simply providing an output from an input based on set rules, it seems that the computer is not really understanding or having thoughts about this process of applying rules. However, I think that from this argument we may only claim that computers do not think in the same way humans think. I think there might be some wiggle room to say that computers do feel and think, but differently. Just as more complex animals have more complex thoughts, more complex computers could have more complex thoughts, but still completely different from those of humans and nonhuman animals. In this case, we might ask Nagel’s question, but for computers: What is it like to be a computer?
I can see why you'd say computers could be like animals with different kinds of thoughts, but I think Searle would respond by pointing out that the Chinese Room shows a running program is just moving symbols around without truly understanding. So I think even if a computer got more complex, it would still just be manipulating symbols with no real understanding. That’s why Searle argues we can’t just say “computers think differently,” because from his view they don’t think at all. But it made me wonder: if we ever did create a machine with the same causal powers as the brain, what would actually have to change in that system for us to say it’s moved from mimicking to genuinely understanding?
Searle’s Minds, Brains, and Programs doesn’t reject the Turing Test but instead contests computational sufficiency — the claim that cognition is computation (C=C). In the Chinese Room, the T2-passing program still lacks any deeper understanding. That internal evidence shows that syntax is not semantics, nor can syntax generate semantics.
This refines Harnad’s point from 2b: T2 behavior is incomplete without T3 grounding. Searle adds that because computation lacks the brain's causal mechanisms, it is unable to ground symbols. Even the Robot Reply recognizes this by including sensors and motors, but Searle's point still stands: meaning hasn't been produced if the inputs/outputs are just more symbols.
I wonder: what sort of causal organization might bridge this gap? It's an empirical question, really. It may be possible to investigate whether grounding can overcome Searle's distinction between syntax and semantics if a system with actual sensory contact is able to alter its internal states in ways that mimic brain dynamics.
Searle's Chinese Room Argument suggests that machines can behave as if they understand language through symbol manipulation; however, they lack genuine understanding and consciousness. Searle reinforces this through his reply to the Robot Reply, in which he clearly separates genuine understanding from the perception and movement of the robot, which seem to be just more symbol manipulation. I think Searle makes an interesting critique regarding intentionality, in which meaning is key. This is relevant to AI and LLMs because they often produce complex information but lack genuine intentionality and conscious experience. This aligns with the hard problem, since there is a clear gap between objective processing and subjective awareness. Thus, Searle's critique of AI suggests that computation cannot really account for consciousness, because it only explains the 'easy problems' such as perception and response to stimuli (Chinese symbols). His argument shows that cognition is not all computation; it also involves genuine understanding and conscious experience.
I understand Searle’s point that simply giving a program a body and sensors doesn’t automatically lead to understanding, but I still wonder how a robot could ever become genuinely grounded in the world. If it’s interacting with real objects, learning from feedback, and forming internal representations based on experience, wouldn’t that start to give its symbols some kind of meaning beyond syntax? Or would it always just be processing patterns according to pre-programmed steps, without awareness? I’m curious where, if ever, the line would be between a robot that merely responds to stimuli and one that actually understands what those interactions mean.