3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument?
Reading: Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press.
Searle's Chinese Room Argument showed a fatal flaw in computationalism (the idea that mental states are just computational states) and helped usher in the era of situated robotics and symbol grounding (although Searle himself thought neuroscience was the only correct way to understand the mind).
This paper is a response to Searle's Chinese Room Argument in which Professor Harnad, while largely agreeing with the CRA, restates the core tenets of the "Strong AI" position that it refutes. He refines the first tenet from "the mind is a computer program" into a statement incorporating the idea that the mind is not only the "right" computer program but one that is also executed dynamically and in an embodied way. The second tenet is that the "brain is irrelevant" to understanding; Professor Harnad clarifies this to mean that mental states must not depend on any single, specific physical machine executing them, while the symbols must still be grounded in real experiences. Lastly, he highlights that stating the Turing Test is decisive of "understanding" is misleading: passing or failing the TT does not entail the presence or absence of a mind, but, because of the lack of a defined structure (as expressed in the second tenet), the TT is the best tool we have to evaluate the functional aspect of the mind.
Jesse C: See this slide from Week 3 (Searle). It replaces Searle's wooly words (in quotes) with what they meant:
1. Computationalism (“Strong AI”)
By "Strong AI" Searle means C=C computationalism
2. Cognition is [just] computation (“the mind is [just] a computer program”)
[What is computation? And what is the difference between computation and computationalism? "Mind" is just a weasel-word for thinking (i.e., cognition)]
3. Computation is implementation-independent (“the brain is irrelevant”)
[What does "implementation-independent" mean?]
[But the symbol-manipulations do have to be implemented, whether by a computer, a brain, or a person with a paper and pencil. (Computation is not just a recipe floating in the sky.)]
4. The Turing Test is Decisive
[But remember that Searle is only talking about T2.]
***EVERYBODY PLEASE NOTE: I REDUCED THE MINIMUM NUMBER OF SKYWRITINGS. BUT THE READINGS ARE **ALL** RELEVANT TO AN OVERALL UNDERSTANDING OF THE COURSE. SO, EVEN IF YOU DO NOT DO A SKYWRITING ON ALL OF THEM, AT LEAST FEED EACH READING YOU DO NOT READ TO CHATGPT AND ASK IT FOR A SUMMARY, SO YOU KNOW WHAT THE READING SAID — OTHERWISE YOU WILL NOT HAVE A COMPLETE GRASP OF THE COURSE TO INTEGRATE AND INTERCONNECT FOR THE FINAL EXAM.***
In reading Harnad's paper (as well as watching the video), what I found particularly interesting is how he highlights why Searle is right and wrong simultaneously. It is true that simply running a program or algorithm cannot somehow give a system a mind; after all, symbols must be grounded rather than being empty representations. However, Harnad shows that Searle also threw out too much in claiming only neuroscience could explain minds. He didn't consider the possibility that cognition might be captured in a hybrid manner, part computational and part sensorimotor. This is where he introduces symbol grounding and robotics. If a system can ground its symbols in experience in the physical world, through perception, so that it can act on those symbols, then perhaps the "room" is no longer empty. I found this very compelling because the goal becomes less about proving Searle wrong and more about actually building systems that could understand in a meaningful sense. For example, the Turing Test is not sufficient on its own, but symbol grounding in the world supplies a missing element that strengthens the test.
Lorena, symbol-manipulations (computations) have to be implemented (executed) by some sort of physical hardware. Whether they have to be "grounded" is another question, and that depends on the difference between T2 and T3.
What do you think "symbol grounding" is, and what is the difference between T2 and T3? (I think you do understand all of this.)
("Mind is a WW")
I appreciate your reply! My understanding is that symbol grounding is ensuring that the symbols a system manipulates are tied to some form of real-world experience instead of just being meaningless symbols floating around. If a computer just manipulates the characters that constitute the word "apple," it does not know what an apple is. If it can touch an apple, see an apple, or even do something to an apple, then the word "apple" can be associated with something real and not just other symbols. That is the grounding.
In T2 (the pen-pal test), the machine engages in the test only by using language in conversation, like a text exchange. In T3 (the full robotic test), the machine must also perceive and physically manipulate the world the same way humans do. This distinction is important because language is not removed from experience. We do not describe the world around us independently of it; our words are influenced by, and take on meaning from, the actions we can engage in and our physical senses. Therefore, I believe the grounding of the symbols themselves is the reason why T3 is a more valid test of "mind" than T2.
Lorena, the algorithm for understanding what an "apple" is will not be manipulating the letters of "apple" but something more like words in the definition of an apple.
But you are right that words alone (symbols) are not enough: A T3-passer needs to view and manipulate apples too (and first).
The general argument Dr. Harnad puts forth in this article is, in my opinion, quite intuitive. I agree with Lorena and Jesse in that the argument avoids the oversimplified extremes of pure computationalism or the complete dismissal of computation altogether. In acknowledging Searle's critique that computation alone can't account for genuine understanding, and then reframing the conversation with the Symbol Grounding Problem (that symbols can't just float, but instead need to be linked to perception/action to avoid remaining "empty"), the idea of computation embedded in grounded, embodied systems moves the CRA debate toward something constructive with practical applications. The only question I have is regarding the "hybrids" mentioned in the work: do these exist yet in the modern day? E.g., a Tesla robot that relies upon LLMs while interacting with the physical world in real time, using sensorimotor data (e.g., cameras, touch sensors, movement) and computation (e.g., language processing)? Or are these too primitive to be considered hybrids yet, and why?
Hi Elle! I think you prompt a compelling reflection about hybrids. In the text, the word “hybrid” is not formally defined, but is mentioned as a model that can perform computations while also having a sensorimotor system, allowing it to interact with its environment. Assuming this definition is correct, I think hybrids do exist, such as the Tesla machine you mentioned. Indeed, the self-driving Tesla can gather data through its various sensors and perform computations to determine its next output, such as stopping when it encounters a red light. I do not think, however, that the Tesla’s sensorimotor system solves the symbol grounding problem or that the Tesla can think, given that the sensors can only “translate” the world into data (squiggles and squoggles), which does not guarantee that the Tesla “understands” what the data represent.
Elle, yes, Elon Musk is just trying to make more money (and worse), not to pass the TT!
Symbol grounding starts at human-language scale. Pure computation is just syntax (Searle's "squiggles and squoggles") and doesn't need grounding.
But, by definition, grounding has to be bottom-up, starting with a baby robot -- not top-down, starting from an LLM's Big Gulp of words. (Can you kid-sib that?)
Cendrine, "hybrid" in this context means (equivalently) symbolic/sensorimotor, or computational/robotic -- but the "dynamic" does not refer to the hardware implementation, which is necessary even for just computation. (Can you relate this kid-sibly to the confusion about what is a machine vs. what is a Turing Machine?)
"Dynamic", in this context, refers to the property of any machine to produce an output from an input. By definition, a machine is a causal system, which means that it can take an input (for example, an intake of fuel or a question) and generate an output (ex. propelling a car or providing an answer). A Turing machine is thus a dynamic machine, because it can generate an input from a specific output, but it also does something more, which is symbol manipulation. That is, the Turing machine can follow a set of rules (ex. if you detect "1", erase it and write "0") to manipulate the symbols (ex. "0" and "1") into a desired output. However, the physical structure of the machine is not what gives it this property. Indeed, the Turing machine’s system could be physical, with wheels and cogs for example, just as it could be digital. Thus, its dynamical property lies in its rules (software) rather than its system (hardware).
As others mentioned, it is interesting to see how Searle's Chinese Room Argument depicts both the possibilities and limits of conscious understanding. One aspect that adds a layer to the discussion is Harnad's focus on implementation-independence. This is the idea that the physical system executing a computation doesn't matter (e.g., different types of machines could implement the same program and, in principle, have the same mental states). The Chinese Room exploits this claim: if pure computational execution were sufficient for understanding, then Searle, executing the program himself, should understand Chinese, yet he does not. Harnad ties this into the argument about the weakness of computationalism and clarifies that the CRA does not deny that computation can contribute to cognition, only that it cannot operate in isolation. This opens the door for the hybrid approach where computation is embodied with sensorimotor grounding!
Emily, convince me, kid-sibly, that you are not conflating the need for physical implementation of all computation, and the need for sensorimotor grounding for TT-scale cognition (including language understanding)...
Yes, the issue is not about whether computation requires a physical substrate but rather whether that substrate alone is sufficient for cognition. The physical implementation of computation is a trivial point: it is a basic fact that computation must be carried out in some medium, such as a person with a paper and pencil. Cognition is different: for real understanding, sensorimotor grounding emerges as a further requirement. This requirement applies to cognition but not to computation in general.
I think what makes Searle's Chinese Room so lasting is that it shows why computation by itself can't be the whole story: symbol shuffling doesn't amount to understanding. But at the same time, I like how Emily brought in the idea of hybrids. Harnad points out that the flaw is in pure computationalism, not in computation being part of cognition. That's why symbol grounding and sensorimotor embodiment feel so important: they let us imagine a system where computation is integrated into a dynamic, situated body.
I was quite interested in the discussion of the limits of Searle's CRA in the context of the different levels of the Turing Test (T2/T3/T4).
I understand that T2 cannot understand; that is the premise of the CRA. Searle shows that, with the same recipe as the T2-passing computer, he can turn squiggles into squoggles (Chinese output) by manipulating the symbols, without an ounce of understanding. And the output would be indistinguishable from a native Chinese speaker's output; ergo, TT passed without understanding.
But as soon as we introduce T3 and sensorimotor capacities (some notion of grounding the senses of the computer/robot), the CRA can no longer disprove that T3 has understanding. This is because Searle can no longer implement the entirety of the T3 system, as it is no longer just a set of instructions but also a sensorimotor system that Searle cannot possess (being restricted to his own human sensorimotor system). Why might sensorimotor grounding matter? Well, we don't know; perhaps implementing the recipe is somehow grounded in the T3 by a sensorimotor experience that leads to understanding (e.g., writing with a pen leading to the encoding of a motor memory of translation). Regardless of whether the sensorimotor experience the T3 has leads to some kind of understanding, the point is that Searle has no way of becoming the T3 and therefore cannot deny that it understands, because he cannot get into the same computational state as the T3 the way he did with the T2.
Pippa, very good. Now explain "Searle's Periscope" to kid-sib (and why it would only have penetrated the Other-Minds barrier if C=C had been true). Anything else would be blocked by Cartesian uncertainty (otherwise known as scientific underdetermination), just as the theories of electromagnetism and gravitation are. (But I hope you aren't conflating symbol-grounding with the need for some form of physical implementation of computation -- needed to avoid a Cheshire-Cat's-smile effect: the recipe hanging in the sky!)
Alright, I will give it a go in kid-sib. Searle's Periscope is like saying that if a computer really understands something, then by following the same rules that it does, you could understand what it understands.
Now, if C=C is true, then Searle's experience manipulating the Chinese symbols would feel like his experience answering a question in his own language (English), which would prove that all our minds are doing is symbol manipulation. But alas, cognition seems to be something more than computation because, as Searle noted, implementing the recipe did not feel like understanding the language.
Searle's thought experiment is effective in breaking through the other-minds problem in that it allows him to "become" the system by executing its programme as the computer would. It demonstrates that the task of answering in Chinese can be successfully carried out without proper understanding of the language, so long as the rules provided (i.e., the programme) are accurate. The strength of this thought experiment lies in the fact that we humans know what it feels like to understand: if we can successfully carry out the CR task without that particular feeling, then the feeling is not fundamental to the process that led to success.
What intrigued me from Harnad's discussion is his framing of the TT as the empirical threshold between computation and cognition. Even if a system is embodied or hybrid, the TT represents the closest we can get to confirming human-like cognitive abilities because it tests for functional equivalence in performance. This highlights that, beyond grounding symbols or sensorimotor capacities, there is a measurable point where a system's behavior indicates cognitive competence rather than just computation. How might this threshold inform the design of future AI systems that aim to genuinely "understand" rather than just simulate understanding? If, of course, this is even possible, considering the hard problem and the connection between understanding and what it feels like to understand…
Sannah, the Hard Problem does not rule out grounding; it just shows that grounding is not enough to solve the HP, just the EP. (And that's the reason the HP is hard.)
There's no empirical "threshold" between computation and cognition. It's just that if C=C is wrong, then cognition can't be just computation. There's a continuum of doing-capacity from the vegetative level in a clam or a jellyfish (or even lower) to human cognitive capacity. And sentience probably evolves much earlier in that evolutionary sequence -- possibly earlier than clams or jellyfish -- though probably not in organisms that don't even have a nervous system. But there's no certainty there either, because of the O-MP (which is even more uncertain than the truth of the theory of gravitation or electromagnetism). Why?
What I understand from the article is that reverse engineering means trying to build a system that can do everything the human mind can do, not by copying biological hardware but by replicating its functional capacities. The TT (especially T3 and T4) is the benchmark for success: if a system can match human performance across linguistic, sensorimotor, and cognitive domains, then we've 'reverse engineered' cognition. This approach is performance-based. It doesn't mean we've recreated the same mechanisms as the brain, only that we've built a working model with equivalent capacities. Harnad contrasts this with AI that produces clever tools, since reverse engineering is about cognitive modeling, which explains how human cognition works by building a system that can do what we do.
Rachel, correct; just a few i's to dot and t's to cross. What is being reverse-engineered is not performance but performance-capacity. Why is that important, and how is it related to T-Testing?
T4 (internal performance capacity) is different from T2/T3: some of it is just vegetative, for sustaining life, rather than specifically cognitive. But it is also possible that some vegetative capacities are essential to cognitive capacities, such that T2/T3 can't be successfully passed without them. A simple way to put this is that internal chemical T4 functions might turn out to be necessary for successfully producing external T2/T3 doing-capacities.
It is also not out of the question (even though no one has a clue of a clue how or why) that sentience, i.e. feeling, is one of these prerequisites, except that, unlike other T4 functions, feeling itself is not observable directly (except by the feeler). Its only symptom might be as something missing in T4, or as something in T4 without which T2/T3 cannot be passed.
So T4 is a dark horse, and this feeling factor may be something that evolves near the jellyfish level! (This will arise again in Week 7 on evolution, Week 10 on the HP, and Week 11 on nonhuman sentience.)
{From the reading: "[...] the force of his CRA depends completely on understanding's being a conscious mental state [...]"}
I see “intentionality” and “conscious” as WWs, but I don’t think it’s necessarily wrong for the CRA to hinge on “understanding” being a conscious state, IF we clearly define what we mean by “conscious.” Using “conscious” as if it already explains something is indeed replacing one mystery with another (Harnad), but can’t we use it to frame cognition research? Because I feel like we are going in circles by requiring an understanding of how it works before we can use it, whereas with the right “temporary definition” it could help direct how we study cognition and what we focus on.
E.g., by saying “understanding is a conscious mental state” we require understanding to be something more than pure computation; maybe what we are actually doing is requiring the presence of "feeling".
Emmanuelle, how about "the state of understanding what "apple" refers to is a state that it feels like something to be in".
No WWs, no circularity. And we all understand what it feels like to be in a state that it feels like something to be in.
I noticed while reading the comments that Dr. Harnad often highlights the difference between the physical/dynamical system that is needed to implement computation and the sensorimotor aspect of T3 that allows symbol grounding. I would like to try to define that difference (please let me know if it is accurate and kid-sibly). On the one hand, the physical implementation of computation is anything in which computation occurs, whether it is a human body, a laptop, or a pen and paper. I think it is also what Searle refers to in his second tenet, the implementation-independence of computation. On the other hand, sensorimotor grounding is the bottom-up processing that allows the use of external stimuli to interpret otherwise meaningless symbols, and that therefore allows actual cognition, such as language understanding. This aligns with the point I made in Reading 3a, where I explain how Searle in the Chinese Room does not have the necessary interactions and environment to successfully learn and understand Chinese, compared to the movie character who did learn Polish by actively interacting with and using the language.
Anne-Sophie, I think you may be on the right track, but the sensorimotor interaction is between the speaker and the things in the world that his words refer to: "apples" and apples. It is not just the interaction between speakers and speakers, or between their words.
I’ve gathered that Searle drew an important line between T2 and T3, using the CRA to falsify computationalism only when examined through T2. But why this line? It seems to me more a matter of degree than of kind. One could, in principle, memorize the “Chinese rulebook” and pass T2 that way. But it would be impossible to memorize an equivalent rulebook for T3 — one that specifies every bodily action in response to hearing and speaking Chinese. Those sensorimotor rules would be endless, and therefore could not be “memorized” without understanding.
Revisiting T2, Searle claims (in the video lecture, 8:49) that the systems reply can be “easily handled by having Searle memorize all of the symbol manipulation rules.” But is this really believable? Even the T2 rulebook would contain enough rules to make Searle Turing-indistinguishable from a native Chinese speaker for the entire lifetime of the interlocutor. Memorizing such a rulebook — with hundreds of thousands or millions of entries — seems impossible to me, and so perhaps the systems reply has more force than Searle admits.
And even if it were possible, once Searle internalizes the rules and “becomes” the system, isn’t understanding now embedded in that system through the rules he has mastered? How can the mere location of the rules — on a page or in memory — decide whether understanding exists? If the system functions equivalently, why deny that it understands, simply because Searle himself does not feel it?
Jesse M, the line is between:
(1) passing T2, by being able to discuss anything any average human can discuss with any other human (in Chinese), for a lifetime, by doing only what a Turing Machine can do (what is that?)
and
(2) passing T3, by being able to do anything an average human can do in the world, for a lifetime.
It is because, in computation, the physical hardware that is doing the computing is irrelevant that Searle can become the hardware that is doing what the Chinese T2-passing computer is doing: passing T2. And so Searle can truthfully report that he is not understanding Chinese.
You've missed or forgotten about Searle's Periscope (what is that?)
Searle cannot become the T3-passing system, because a sensorimotor robot is not a hardware independent symbol-manipulator. A robot is a dynamic, physical system. So Searle cannot "become it" and report anything.
If you think the CRA is wrong because no one could memorize and execute so much code, imagine it bit by bit. Start by training someone (who does not know what tic-tac-toe is) to memorize the recipe for playing tic-tac-toe, but coded in hexadecimal. Once he can play tic-tac-toe with anyone via an interface that translates the hex onto a screen in another room and displays it as X's and O's, ask the hex player whether he has any idea what the code means.
Now does the hex player not know, but the "system" knows? Is it embedded somewhere inside the hex player?
Welcome to the Matrix...
But, yes, it is crucial that understanding is not just something you do: it feels like something. And that's why Searle's Periscope works (or would work, if there really were a purely computational recipe that could pass T2). If C=C is wrong, there isn't one.
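To picture the hex-player scenario above concretely, here is a minimal sketch (Python, hypothetical; the ASCII-to-hex encoding and the two sample recipe entries are invented for illustration) of what the memorized "recipe" amounts to: a lookup table from opaque hex strings to opaque hex strings. Executing it requires no inkling that the strings encode tic-tac-toe boards and moves.

```python
# Minimal sketch (hypothetical, for illustration): the hex player's "recipe" is
# just a lookup table from opaque input codes to opaque output codes. Executing
# it requires no knowledge that the codes encode tic-tac-toe boards and moves.

def encode(board):
    """Translator on the OTHER side of the interface: board string -> hex code."""
    return board.encode("ascii").hex()

def decode(code):
    """Translator: hex code -> board string shown as X's and O's on the screen."""
    return bytes.fromhex(code).decode("ascii")

# A tiny fragment of the recipe; a real one would cover every reachable position.
RECIPE = {
    encode("X........"): encode("X...O...."),  # the hex player just matches squiggles
    encode("X...O.X.."): encode("X...O.XO."),
}

def hex_player(input_code):
    # Pure symbol manipulation: look up one meaningless string, emit another.
    return RECIPE[input_code]

seen = encode("X...O.X..")           # what the hex player actually sees
print(seen)                          # '582e2e2e4f2e582e2e' -- no X's or O's in sight
print(decode(hex_player(seen)))      # the screen in the other room: 'X...O.XO.'
```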
"Searle thought that the CRA had invalidated the Turing Test as an indicator of mental states. But we always knew that the TT was fallible; like the CRA, it is not a proof."
I find this passage in Professor Harnad's article very interesting, and it had me thinking that Searle's Chinese Room and the Turing Test don't actually prove anything about minds, but they're still really useful. They're useful because they force us to ask the key questions. The Chinese Room makes us wonder whether just manipulating symbols ever actually creates real meaning. The Turing Test makes us ask: if something acts like it understands (and fools some people into thinking it does), is that enough to say it really does? These aren't answers, but they shape how people think about the problem. Without them, scientists might not even agree on what the real question is, let alone what valid answers are. That's why they matter: not as final proofs, but as tools that guide how we think about minds and machines.
Jad, but don't forget (lifelong) Turing Indistinguishability. That would be pretty strong evidence.
Jad, I agree with you that the CRA and the Turing Test work more like tools for thinking than actual proofs. But something from Professor Harnad’s lectures keeps standing out to me, his point that Searle’s real target is the idea of implementation-independence. If mental states really were independent of the material they’re built from, then any system running the same program would have the same mental life. That’s exactly what Searle’s Periscope challenges. So I’m wondering if focusing only on behavior (like the Turing Test does) or on symbol manipulation (like the CRA does) misses what really matters. Maybe the key factor is how a system is physically connected to and interacting with the world, not just whether it passes tests or runs the right program.
While reading this, I had a hard time trying to convince myself of the argument that Searle's Periscope refutes computationalism with the help of the other-minds problem. With further thought, I realized I had a difficult time even accepting the other-minds problem to begin with, since it cannot be proven empirically that we cannot directly access another person's mental state; for me, this becomes a matter of intuition. My thought, and perhaps this is misguided, was: what if we could, as Searle's Periscope suggests, peek into another person's mental state by being in the same computational state as them, but we can't consciously know that we are sharing the exact same mental state as them?
Lucy m, "proven empirically"? You can prove, on pain of logical contradiction, in maths. But there's no proof in science -- only overwhelming observable evidence (but that's enough!). And the only other certainty (for each feeler) is that they are feeling, whilst they are feeling. (A later memory: not so sure.) But others? Other humans are similar enough to us that it's hard to imagine they don't feel; and our mirror neurons make us feel like we know that others feel. (And at least Searle would have been less confident that C=C was wrong if memorizing and executing the Chinese T2-passing recipe actually did make him understand Chinese!)
Searle's Periscope is a term coined to describe Searle's argument against computationalism, tying the physical implementation and the computational program together. The view is that if the computational program is implementation-independent, and the program will behave the same way no matter how it is instantiated, then any property of the program that is present (or absent) in one implementation will be present (or absent) in all implementations. Furthermore, if mental states are just implementations of a computational program, it follows that mental states present (or absent) in one machine will be present (or absent) in all machines running the same program.
Additionally: "The CRA would not work against a non-computational T2-passing system" -- what on earth would a non-computational system even look like?
Emma, spot on -- except that every physical system is a non-computational system (except when it's implementing a computer, executing an algorithm -- or it's hybrid, like a Tesla, or a T3).
Harnad points out that the CRA only undermines pure computation and systems that just manipulate symbols with no grounding. This means that there is a possibility that cognition might require something more: embodiment, sensory interaction with the world, or even neural-like dynamics. That's where the idea of a "non-computational T2-passer" comes in, which at first sounds paradoxical because the Turing Test is based on symbol manipulation. But again it suggests that intelligence might not be about computation alone, but about how a system is connected to its environment. And, as of right now, non-living systems do not connect with their environment the way living beings do, through sensorimotor experience.
I like how you tied the “non-computational T2-passer” to embodiment and grounding. I think what’s tricky is that Turing’s test itself doesn’t tell us what kind of mechanisms are at work, only that the behavior is indistinguishable. So in theory, a system could pass T2 without being purely computational, if it had the right sensorimotor grounding or neural-like processes driving it. But that also makes me wonder: does that mean the TT might not be as “implementation-independent” as people assume? Maybe Searle’s mistake was treating the TT and computationalism as the same thing when in reality, passing the test could involve a lot more than symbol manipulation. I agree with you that living systems still have a kind of connection to the environment that current machines just don’t and that might be the missing piece in these debates.
The duck analogy helped me better understand the limitations of Searle's Chinese Room Argument. A D3 duck is a reverse-engineered duck that is functionally equivalent to a real duck (it waddles like a duck, swims, quacks, etc.). A D4 duck is indistinguishable inside and out. T2 is the mind's version of a more macrofunctional D3 duck (almost D2), while T3/T4 is more like D4, as there needs to be sensorimotor interaction with the real world (T3) and structural similarity (T4). The CRA can only be used against implementation-independent computation (the D3/T2 scenario), because the person in the Chinese Room is merely functionally equivalent to a Chinese speaker. Searle can follow the rules perfectly to produce replies indistinguishable from a Chinese speaker's without any understanding, showing that symbol manipulation alone is not enough to explain understanding. But the CRA cannot be used against T3/T4 or D4 scenarios, because Searle cannot become the inside dynamics and structures of a Chinese person by just following rules. If we were able to successfully reverse engineer a D4 duck, no one would deny we have a "complete understanding of how a real duck works." The same logic follows for passing T3/T4, and the CRA cannot be used against these.
The reformulation in this reading helped me better understand Searle's argument and where it falls short: the conclusion that cognition can't be computation at all cannot be drawn from the argument that a T2-passing program does not exhibit conscious understanding, if computationalism is true. However, I have difficulty with another aspect of Searle's CRA: he is basing his conclusion about not understanding Chinese (despite manipulating the symbols correctly) on his own introspection, were he really in the CRA scenario. That, in itself, requires a certain level of cognition (which he disagrees is computation and can be modelled), no? That would mean that, in order to understand cognition, we would then have to model the "cognitive process" of the whole system which Searle is (metaphorically) a part of, which would also involve modelling the cognitive process inside Searle's mind that lets him do the symbol manipulation, the memorization of symbols, the introspection, etc. Wouldn't that lead to an infinite regress?
Hi Nicole! I agree that the reading helps disentangle the flaws in Searle's CRA. The point you highlight, about him inserting himself into the system as evidence that the entire system does not "understand" because he himself does not understand Chinese, is what Professor Harnad calls Searle's Periscope. And this is where Searle exploits one of the principles of computation: implementation-independence -- the idea that the same computation can be implemented using any substrate. However, this does not show that cognition cannot be computational at all. Instead of relying on Searle's introspection, perhaps we should focus on the whole system's capacity for grounding (i.e., to avoid the regress). T3, a robot with sensorimotor experiences, shows us that symbols can be grounded in meaning through interactions with the external world -- revealing that meaning requires more than mere symbol manipulation.
I struggle with the point of implementation-independence in computationalism, for reasons which were not necessarily mentioned in Harnad's response to Searle's Chinese Room Argument. Let us, as an example, consider neuronal plasticity and Hebb's law, according to which "what fires together wires together", suggesting that an increase in synaptic efficacy arises from a presynaptic cell's persistent and repeated stimulation of a postsynaptic cell. The brain (the "hardware") modifies itself as a result of the implementation of the software (cognitive processes and states), which in turn are implemented differently because of the modifications in the hardware's make-up. It therefore seems that cognition is not implementation-independent: it modifies the hardware it uses so as to make it easier to carry out certain cognitive actions.
Sofia, I thought your point about plasticity was really interesting. Harnad says computationalism depends on implementation-independence, the idea that different physical systems can realize the same computational state. Your Hebbian example makes it seem like the brain's hardware matters more than that, since it reshapes itself through experience. But couldn't neurons still be swapped out for some other kind of substrate, as long as the same adaptive structure was preserved? After all, artificial systems like neural networks also adjust their "connections." So maybe the real question is whether that kind of plasticity is functionally equivalent, or whether the brain's version involves something fundamentally different.
I see how plasticity challenges implementation-independence, since the brain's hardware changes with experience, which blurs the line between software and hardware in that sense. But what if this adaptive rewiring is just another layer of computation, one that continuously updates its own architecture? That would mean implementation-independence needs to cover not just static structures but self-modifying systems. Then could an artificial system replicate that kind of self-organizing plasticity and the subjective aspect that arises from it? Neural networks adapt their connections, yes, but would that create anything remotely similar to genuine understanding, or just better pattern matching? Reading this thread, I found myself wondering whether plasticity alone is enough, or whether there is something else that gives cognition its grounding.
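As a concrete illustration of the "connection adjustment" discussed in this thread, here is a minimal sketch (Python/NumPy, with hypothetical values, not from the reading) of a single Hebbian update step: each weight change is proportional to the product of pre- and postsynaptic activity, so the activity running through the "hardware" literally reshapes it.

```python
import numpy as np

# Minimal sketch (hypothetical values, for illustration only): one Hebbian
# update step. "What fires together wires together": each weight change is
# proportional to the product of presynaptic and postsynaptic activity,
# so the connections (the "hardware") are reshaped by the activity they carry.

eta = 0.1                          # learning rate (illustrative value)
x = np.array([1.0, 0.0, 1.0])      # presynaptic activity (3 input units)
y = np.array([1.0, 0.2])           # postsynaptic activity (2 output units)
W = np.zeros((2, 3))               # synaptic weights before learning

W += eta * np.outer(y, x)          # Hebbian rule: delta_W[i, j] = eta * y[i] * x[j]
print(W)                           # weights grow only between co-active units
```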
I thought one of the most interesting parts was Harnad's idea of "Searle's Periscope." Normally, we can't directly access another being's conscious states: the classic other-minds problem. But if mental states really were just computational states, then by running the exact same program ourselves we should be able to know whether those states are present. The fact that Searle can run the Chinese program without actually understanding Chinese shows that no such periscope exists. I think this makes the CRA less about symbol manipulation and more about the nature of conscious experience.
Harnad comments on Searle by pointing out that syntax alone is not enough for meaning. Instead, symbols need to be grounded in real-world experiences through perception and action. He argues that passing a "pen-pal" style Turing Test (T2) is not the same as truly embodied performance, as per the T3 test. This really stood out to me, as it showcases how important it is that cognition connects back to our world, not just to more symbols.
On the other hand, one thing I think Harnad doesn't emphasize enough is the social side of grounding. Beyond being about an agent that interacts with objects, it is also about interacting with other people. From an early age, humans learn meaning through joint attention, feedback, and shared emotional cues. In that case, maybe the missing ingredient for AI isn't just embodiment but also participation in socially rich environments. A robot might be able to ground symbols much faster if it could engage in shared experiences with humans.
Essentially, Searle's Periscope says that if thinking is all computation, then we should be able to put ourselves in the same state as anything else and get a peek at what's going on inside its mind. However, there is no way for us to become another entity. Thus, computationalism has to grapple with either giving up the claim that the computational state is part of the mind at all, or conceding that being in a computational state doesn't mean you are in a mental state; neither option is wanted by computationalism. A hybrid model, capable of using senses to "see" what's out there and also do symbol manipulation, works better (although Searle disagreed, saying computation isn't part of the mind at all). Which raises a question: if a T3 robot with sensorimotor capabilities is better than a simulated T2 one, which senses matter? For example, if a theoretical T3 robot existed but at some point an accident took away its sight (or worse, all its senses), does that mean its "understanding" capacities go with it as well? I feel that's a bit much, so is having had senses before enough, even if you don't have them now?
The point that piqued my interest was that yes, the CRA invalidated the computationalism argument and pure C=C, but it did not invalidate any use of computation in cognition, as Searle had hoped it would. This only makes sense, given the logical need for a computation-like process to decipher language, accompanied by the understanding of semantics. I also found the second tenet, the implementation-independence of computation, to be weak. The second tenet itself says that computations can run on any hardware. If you also assume that mental states just are computations, then it necessarily follows that mental states are substrate-independent, which is put in question by the CRA. Ultimately, I do believe the CRA is unrealistic, mainly because a human like the one Searle describes would be highly unlikely to have the memory capacity to "speak" Chinese through computation without a semblance of semantics helping him store that information. That seems close to the "Big Gulp" of GPT in terms of superhuman memory.
What stands out to me about Searle’s CRA is that it really only works against T2. At that level, he can run the recipe and prove that passing doesn’t equal understanding. But once you move to T3 with sensorimotor grounding, he can’t replicate the whole system, so he loses his “periscope” into the other mind. That’s why I think Harnad’s point is interesting: if C=C were true, Searle’s trick would have broken the barrier. Since it didn’t, the CRA ends up showing the limits of pure computation, not of cognition itself.
In the response to Searle, the idea of functionalism is brought up. Functionalism assumes that structure matters only insofar as it supports function. In the duck analogy, two webbed appendages are required for the reverse-engineered duck to walk and swim like a real one. If we connect this notion of functionalism to our discussion of computation and mental states, how can we reconcile it with the assertion that computational states are implementation-independent? I tend to fall more into the functionalist camp, since certain “hardwares” are necessary for the execution of certain “softwares.” How do these two notions relate, if at all?
I like how Harnad takes Searle's point and doesn't necessarily counterargue; instead he redirects it. Downstream from assuming computational sufficiency, Searle's hand-run program should have produced understanding, but it didn't. It feels like more than a thought experiment, more like a stress test for what we mean by 'explanation'. I keep wondering whether "implementation-independence" was ever a coherent idea to begin with: can meaning really survive being abstracted away from its causality? Harnad's reformulation of the three tenets makes that question unavoidable. The first two feel intuitive, but the third, that Turing-indistinguishability is the strongest test we have, was interesting to think about. It doesn't prove cognition, but it marks where science has to stop guessing and start building. What stands out to me now is how that 'threshold' makes the Turing Test feel less like a finish line and more like a limit on what testing can ever show. It makes me wonder if understanding could ever be something you can't fake the way cheatGPT does; if there could be a kind of system where meaning can't be bypassed for the behavior to work at all, where meaning isn't scaffolding but part of the mechanisms holding the structure together. I'm not sure what that would look like, but I like that Harnad leaves space for that question instead of pretending it's solved.
In his paper, Harnad notes “the soft underbelly of computationalism” as the central target of the CRA: the tenet of implementation-independence. Computationalism holds that mental states are merely hardware-independent implementations of computer programs. Searle's argument exploits this claim by highlighting that if the computational state alone guarantees understanding, then any implementation, including a non-Chinese-speaking person (Searle) executing the program, must also understand. This reliance on implementation-independence is the key idea that allows for what Harnad terms "Searle's Periscope," a hypothetical way to bypass the “other-minds” problem by checking for mental states if they are truly just computational states. Importantly because of this, as Harnad points out, the ultimate success of Searle’s Periscope and the CRA rests completely on the notion that understanding must be a conscious mental state.
“It is only T2 (not T3 or T4 REFS) that is vulnerable to the CRA, and even that only for the special case of an implementation-independent, purely computational candidate.” I found this passage interesting, as it really shows the flaw in Searle's CRA. In fact, Harnad refers to the different levels of the TT here. T2 is the language level, T3 is the robotic level, where the system has sensorimotor capacities, and T4 is the neurological level, where the system should be able to replicate the neural and biological processes of the brain.
Harnad says that the CRA is only applicable to T2, which is implementation-independent, since it is only at this level that computational systems manipulate symbols without symbol grounding in the real world. However, the CRA fails at T3 and T4, where the system is no longer simply manipulating symbols but uses its sensorimotor capacities to manipulate symbols in a meaningful way that shows understanding. Thus, cognition must have a part for symbol manipulation (computation) and a part for genuine understanding that is most likely grounded in the real world and our experiences. This made me think about the role of experience in children's lives. In fact, their language is based on their understanding of things, grounded in their sensory and emotional interactions. For instance, they recognize colors, animals or food because they interact with them. Thus, their language comes from sensory experiences as well as symbol manipulation of words to form sentences and ideas.
Dr. Harnad says that Searle's Chinese Room Argument would only work to invalidate the claim that a computation-only machine indistinguishable from a human in a text-based-only interaction is able to associate the content words in its input and output with referents in the real world. However, the text-based human-indistinguishability level (T2) is not solely achievable by a 'purely computational candidate'; therefore, the CRA is not sufficient to apply to all cases of T2-passing machines. This led me to the observation that you can elicit a similar local firing pattern of neurons across different humans yet produce different qualia for each human (at least by self-report), because everyone's schema/construct of the world is grounded in their own unique life experience. On the other hand, running the same computation would always lead to the same output across different computation-only machines, and that is what empirically separates a machine that can grasp meaning from one that can't.