Wednesday, August 27, 2025

1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20

Reading: Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20. In Dedrick, D. (Eds.), Cognition, Computation, and Pylyshyn. MIT Press.

Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed 5 decades earlier.

Instructions for commenting: Quote the passage on which you are commenting (use italics, indent). Comments can also be on the comments of others. Make sure you first edit your comment in another text processor, because if you do it directly in the blogger window you may lose it and have to write it all over again.

58 comments:

  1. The computational mind hypothesis argues that it could explain “how the mind works”, depending on whether the mind is computational or not. This statement also hinges on the assumption that cognition is solely computational, and it leads to the same question Zenon posited regarding image theorists, whom he accused of “[masquerading] non-explanations as explanations” and “[deferring] the explanatory debt” by creating more functional questions. How is computation implemented? We might understand that the mind is computational, but how does the software decide what computation to utilize in a given situation? Could it be hypothesized that cognition drives the implementation of computation?

    Replies
    1. Julien, I don't know what the "computational mind" hypothesis is, but if you mean "computationalism" -- the hypothesis that cognition (thinking) is just computation (C=C) -- then all that's needed is a computer that is implementing the right software: the software that can produce the capacity to pass the Turing Test (indistinguishably from any of us)!

      BTW, what is computation? And what is the Turing Test?

    2. I realized later on that the computational mind hypothesis is not very clear; I was referencing the statement in reading 1b that the mind could be understood under the assumption that it is computational. From what I understand, computation is the use of algorithms and symbols to solve problems, the algorithms being the software. The symbols are used solely based on their form and not their meaning, and are manipulated according to a set of rules to yield a result, the set of rules being the algorithm. A Turing machine is a machine that can compute a task from a set of instructions indicating the atomic operations it has to perform. A universal Turing machine, in theory, can compute anything with respect to its instructions, assuming it is programmed in accordance with U. The input of the Turing machine has to be finite and computable to yield a result, but if given enough tape, it can simulate any computer algorithm.

    3. Julien, any particular algorithm (symbol manipulation recipe) can be executed by a Turing Machine. A UTM (Universal TM), like a digital computer, can be reconfigured to execute any algorithm (by inputting and storing the algorithm (software) as part of its input).
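      As an illustrative sketch only (not from the reading; the rule-table format and the unary-addition example are assumptions chosen for demonstration), here is roughly what such rule-following symbol manipulation looks like: the machine reads a symbol, consults a finite table of rules, writes, and moves, with no regard to what the symbols mean.

```python
def run_tm(tape, rules, state="start", blank="_"):
    """Run a tiny Turing machine: rules map (state, symbol) -> (state, write, move)."""
    cells = list(tape) + [blank]
    head = 0
    while state != "halt":
        state, write, move = rules[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells).strip(blank)

# Toy rule table for unary addition ("11+111" -> "11111"):
# turn '+' into '1', then erase the final '1'.
rules = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "+"): ("start", "1", "R"),
    ("start", "_"): ("erase", "_", "L"),
    ("erase", "1"): ("halt",  "_", "L"),
}

print(run_tm("11+111", rules))   # prints 11111
```

      Feeding the same machinery a different rule table reconfigures it to execute a different algorithm, which is the sense in which a UTM (or a digital computer) is general-purpose.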

      The important question for cognitive science is not whether the brain is a digital computer (it’s not) but whether the causal mechanism that produces cognitive capacity is just computing (symbol manipulation): C=C. That's what Searle's Chinese Room Argument tries to refute. It is not clear whether Turing believed C=C, nor whether he believed that T2 could be passed by computation alone. The question arises in this decade if ChatGPT has passed T2: has it? can it? Why, or why not? (Ask GPT, but you'll have to define T2 as well as C=C to GPT.)

    4. ***EVERYBODY PLEASE NOTE: I REDUCED THE MINIMUM NUMBER OF SKYWRITINGS. BUT THE READINGS ARE **ALL** RELEVANT TO AN OVERALL UNDERSTANDING OF THE COURSE. SO, EVEN IF YOU DO NOT DO A SKYWRITING ON ALL OF THEM, AT LEAST FEED EACH READING YOU DO NOT READ TO CHATGPT AND ASK IT FOR A SUMMARY, SO YOU KNOW WHAT THE READING SAID — OTHERWISE YOU WILL NOT HAVE A COMPLETE GRASP OF THE COURSE TO INTEGRATE AND INTERCONNECT FOR THE FINAL EXAM.***

  2. "The gist of the Turing Test is that on the day we will have been able to put together a system that can do everything a human being can do … will have come up with at least one viable explanation of cognition."
    The current capabilities of LLMs seem to falsify this statement in the text. Like the behaviorists, our understanding of ‘how’ models of this scale perform their actions ends up revolving around assessing their outputs or finding adjacent metrics to observe the different layers or weights (akin to neurons in the brain) that activate when certain results are produced. Let’s bring this back to the simple example of the computation ‘1 + 1’, where symbols—here the numeral 1 and the +—are manipulated through an algorithm that will always, no matter the machine that performs the operation, give the sum of the numbers. As we know, we still do not have a satisfying explanation of the internal process that occurs when we perform this calculation, but observing the inner workings of an LLM—looking at which weights are ‘lighting up’ and their connections with one another—is not enough to determine that one part is responsible for a certain portion of the calculation (or even whether calculations themselves are split up into ‘parts’ in the LLM’s process of computation).

    Replies
    1. Lucy, as there are 2 Lucies this year, could you please put a “j” or an “m” at the beginning or the end of each of your skies so I can credit it to the right Lucy?

      Yes, if GPT can pass T2 (total indistinguishability in verbal capacity) that does not mean the internal mechanism with which it succeeds is the only way to pass T2, or the right way.

      But the criterion for passing T2 remains only verbal indistinguishability. So a pass on that is a pass. So any successful T2-passing mechanism is a potential mechanism for passing T2.

      (Except if it is cheating, such as if it is connected online to a real human that is helping it to pass — or if it is connected to a superhuman database, such as GPT’s “Big Gulp,” which it is using as crib notes in order to pass.)

      If you want indistinguishability in more than just verbal capacity (e.g., indistinguishability in both verbal and sensorimotor capacity [T3] — or in both of those plus indistinguishability in all observable and measurable internal brain processes [T4]) — then that is another way a T2-passing mechanism would be the wrong mechanism.

      The T2 to T4 hierarchy is discussed in Week 2b.

    2. It’s interesting to mention that there is a right or wrong way of passing the Turing test. I understand, as you mentioned here and in class, that the “big gulp” (which essentially connects an LLM to an unfathomably large database) is a cheat. Would this mean, then, that the machine is not thinking, but rather curating what information should be presented when seeing a prompt? I wonder, then: is that process of choosing what information should be shown to the user still considered thinking, even though there is a distinction between how it computes and how humans’ internal mechanisms compute our answers (T4)?

  3. "It turns out they are not physical states! They are computational states."
    What is a computational state? Infinite regress and homuncularity seem persistent, yet I believe these states could be defined. Oftentimes, computation is thought of as a process or a function, per the functional view: you put something in, it is transformed, and delivered. What about the evolution of a computation? Instead of the ever-so-simple sum or difference, let’s take a real-world computation: observing the clouds, say. Here, there is an initial biological process, which is the transduction of light energy into electrical energy, then activation of the areas responsible for seeing in the occipital lobe. But the brain does not stop working there (hopefully): where does it go afterwards? Surely another computation, another thought, most likely connected to the initial one. Computations in and of themselves are strings of functions (or only one function), such as the quadratic formula comprising multiple, simpler mathematical operations. Yet, I wonder if sequences of computations, and how these sequences elongate and evolve, could tell us about the nature of cognition, and more specifically, the acquisition of meaning.

    Replies
    1. Camille, a computational state is the state of a physical system that is executing a computation (i.e., doing symbol manipulation by following rules [algorithms]), e.g., a computer.

      Turing was one of several logicians who formalized what computation is. All the formalizations turned out to be equivalent, but Turing’s was the simplest one: the “Turing Machine” (weeks 2a and 2b). Have a look at that.

      Part of the definition of computation is that it is “implementation-independent”: The same computation can be done by countless different kinds of physical systems. (Today this is called the hardware/software distinction and “hardware-independence”: the same computations [recipes] can be executed by countless different hardwares.) The recipe is independent of the hardware (though of course it has to be executed by some physical hardware or other).

      A computation is not a static state. It is a series of states, occurring in a finite-state machine (a computer) that can pass from one state to another, according to the algorithm (software) it is executing. So it is executed as a dynamic series of states (hence a “process”).
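      A minimal sketch of that idea (illustrative only; Euclid's greatest-common-divisor recipe is just a stand-in example of an algorithm): each pass through the loop is one state transition, and any hardware that executes the same recipe passes through the same series of states.

```python
def gcd_trace(a, b):
    """Euclid's recipe, recorded as the series of (a, b) states it passes through."""
    states = [(a, b)]            # the initial state
    while b != 0:
        a, b = b, a % b          # one rule application = one state transition
        states.append((a, b))
    return states                # the whole dynamic series of states

print(gcd_trace(48, 18))         # [(48, 18), (18, 12), (12, 6), (6, 0)]
```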

      The rest of what you mentioned is not about computation but about cognition. We don’t yet know what cognition is, and computation is just one candidate for what it might be.

      And right now, what has to be clearly understood is what computation is.

  4. Pylyshyn says that our mind’s work is computational in the sense that symbols, rather than pictures in our head, are manipulated by rules. However, Searle’s Chinese Room idea points out that someone who can simply follow rules very well can act like someone who understands when they don’t actually understand anything. Our Prof says computation is an important aspect, but we also need to ensure symbols are grounded with sensorimotor capabilities that make words meaningful. I’d be interested to see how we could conduct the robotic Turing Test suggested in the reading, since we can't hide the robot behind an email or a room.

    Replies
    1. Annabelle, we’ll be discussing the sensorimotor grounding of symbols in weeks 5, 6, and 8.

      Symbols can be grounded through category learning: The names of content-words (like “cat” and “mat” and “cataclysm” and “materialize”) can be connected to their referents in the world through learning to recognize, identify and manipulate them by detecting their distinguishing sensorimotor features.

      Once you have grounded enough categories, their names can then be combined in (true or false) subject/predicate propositions in natural language.
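      As a toy illustration (my own simplification, not the course's model of category learning), direct grounding can be caricatured as finding the features that all members of a category share and all non-members lack, after which the grounded category name can figure in a simple true/false proposition:

```python
def distinguishing_features(members, non_members):
    """Features shared by every member and absent from every non-member."""
    shared = set.intersection(*members)
    return {f for f in shared if all(f not in other for other in non_members)}

# Invented feature sets standing in for sensorimotor interaction:
cats     = [{"furry", "whiskers", "meows", "four_legs"},
            {"furry", "whiskers", "meows", "four_legs", "black"}]
non_cats = [{"furry", "barks", "four_legs"},
            {"feathers", "two_legs"}]

cat_features = distinguishing_features(cats, non_cats)
print(cat_features)                        # e.g. {'whiskers', 'meows'}

# Once "cat" is grounded, its name can enter a proposition:
def is_a(thing_features, category_features):
    return category_features <= thing_features    # "this thing is a cat"

print(is_a({"furry", "whiskers", "meows", "four_legs", "orange"}, cat_features))  # True
```

      Real category learning is of course far richer than shared-feature intersection; the sketch only illustrates the idea of grounding a name in detected distinguishing features.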

      In mathematics, the symbols do not need to be grounded; they just need to be manipulated (if we have the right recipe [algorithm]).

      Computation is spectacularly powerful: Not only can it do everything mathematicians and logicians can do in the formal world of symbols [this is called the “Weak Church/Turing Thesis”], but computation can also be used to model and simulate just about anything in the real world of physical objects [the “Strong Church/Turing Thesis”].

      But natural language (of which computation is a component) is even more powerful [Katz’s “Effability Thesis”].

      Are ChatGPT’s content-words grounded? How does it do what it can do?

  5. In class we defined cognition as something that “goes on in the head when we are thinking”. In Harnad’s 2009 article it is stated that, in an attempt to form “an impenetrable boundary between the cognitive and the noncognitive”, Zenon classified imagery as non-explanatory and therefore noncognitive (p. 6). However, if it is possible for imagery to have some form of behavioral capacity, such as visual rotation, then should it not be classified as cognitive, since the imagery must be present for the possibility of rotation?

    Replies
    1. Erilyn, good point, and you are right, and we will be discussing this soon. But the Shepard mental rotation (same/different judgment) happens too fast to say the participants are actually doing the mental rotation. What they're doing is reporting whether the rotated image is or is not the same as the original image. They are doing a same/different judgment. We infer that something is going on in their brains that is like rotation, but is it "mental" if they do not feel that they are doing it? In contrast, when we are doing mental "long division" we do feel we are doing it in our heads, just as when we are doing it with pencil and paper.

  6. During class, I wondered what Professor Harnad meant when he said that today’s AI may pass the Turing Test, but that it is “cheating”. Now, I think he is referring to Searle’s argument that a model (such as AI), even though it may have learned all the rules of language which would allow it to string words together into an infinite number of thoughts, may simply apply those rules without understanding the meaning behind the words. This circles back to the symbol-grounding problem, which asks whether (and how) a model such as AI can connect a symbol, e.g. a word, to its physical form.

    Replies
    1. Cendrine, no, Searle's Argument is not that C=C is cheating, but that it is wrong! (The one that is cheating is ChatGPT. How? See my other replies; the answer's there, and it's about the "Big Gulp.")

  7. Following the Symbol Grounding Problem, computation is “rule-based symbol manipulation,” where the symbols are only “arbitrary in their shape” and the rules operate on these shapes, not on the meanings (Harnad). But if grounding does depend on direct sensory experience, then what about abstract ideas (like “justice,” “infinity,” or “truth”)? These aren’t tangible objects you can ground a symbol to, like you could a cat or an armchair. I’m not sure if the same symbol grounding problem applies to these abstract concepts, or if perhaps a different mechanism is at work.

    Replies
    1. Elle, you are right to wonder how “abstract” words are grounded. In week 8b we will learn that not all content-words need to be grounded directly; in fact most don’t. Words can also be grounded indirectly, through language, combining already grounded words to define or describe new, ungrounded words and the things they refer to. (This is part of the power of propositions.)

  8. Elle, I think your point is really thought-provoking, though I don’t interpret the Symbol Grounding Problem to mean that symbols must be tied only to tangible objects. The challenge is connecting symbols to meaning in the real world — and that connection doesn’t just come from physical tangibility, but from subjective experience built on our sensorimotor capacities. For example, my understanding of “infinity” isn’t grounded in a physical object but in my personal history of engaging with the idea. That relationship — my experience of “infinity-ness” — is what grounds the symbol in my mind.

    Replies
    1. Jesse, content-words are not grounded in “meanings” (a WW): they are grounded in whatever things they refer to: “Cats” refers to cats; “catastrophe” refers to catastrophes, and “catalysis” refers to catalysis. Look them up in a dictionary. Their definitions are made up of content-words too, and the only way you can understand them is if you already know what those defining-words refer to. Grounding is inherited through language. (Now look up — or ask GPT — what “content-word” refers to.) Notice that individual content-words like “cat” or “catalyze” only have referents. It is true/false, subject/predicate propositions that have "meaning" (a WW). This will become clearer as we go on…

      And grounding is not just a felt experience; it is also the result of learning to detect the features of the members of the category (“kind of thing”) that the category-name refers to: the features of the members of the category that distinguish them from the members of other categories (with other names).

  9. "The gist of the Turing Test is that on the day we will have been able to put together a system that can do everything a human being can do, indistinguishably from the way a human being does it, we will have come up with at least one viable explanation of cognition."

    Computation is the manipulation of symbols following rules (algorithms). Symbols do not carry meaning or value; human beings interpret the symbols and make sense of them. Nevertheless, computing can take place in different physical systems (hardware) because it is symbol manipulation. Mental states are said to be computational states. Therefore, a system that follows the same algorithm to do everything a human being can do should explain cognition (see excerpt above). According to Searle, the TT can be passed by symbol manipulators with the same algorithm responsible for mental states, but the “meaning”, “understanding” or interpretability found in human cognition is lacking. Does it really do it indistinguishably from the way we do? If we want to explain what happens in the head of a human being when thinking, the answer cannot be found solely in computation; something is missing in these systems that keeps them from explaining cognition.

    please let me know if it is kid-sibly friendly!

    Replies
    1. Sophie, your comment was kid-sibly enough. But you said that ““meaning”, “understanding” or interpretability found in human cognition is lacking” in computation. Fair enough. But they’re not lacking in language. Why not? How do they get there? Kid-sib wants to know… Once you get to the part of the course that tries to answer that, your challenge will be to explain it to kid-sib.

  10. I found this paper helpful as it presented a historical account of different perspectives of computing. In particular, the strongest takeaway was that computation alone cannot explain cognition. In fact, this debate over whether cognition is computation misses the point unless it also describes how cognition delivers behavioral competence. So, while we start with symbolic computation, Harnad specifically challenges Pylyshyn's exclusion of these "dynamical functions such as internal analogs of spatial or other sensorimotor dynamics" and "real parallel distributed neural nets" from "cognitive" status, arguing that if they generate behavioral capacities, they are undeniably cognitive. We then get a critique (Searle), before landing on a synthesis. In essence, Searle's critique showed that "cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all)."

    In particular, the symbol grounding problem stood out to me as, like other students, I was curious about the grounding of abstract concepts like "love" and "peace." This was where the "categorization" part of this class came in, as this week's YouTube lecture discussed "chunking," using fruits as an example. It seems like the initial grounding for abstract concepts would trace back to sensorimotor experience. However, the manipulation of these higher-level "chunks" might involve more symbolic processing, which leads back to Harnad's argument of hybridity.

    (As I read more, I was reminded of a digital humanities class I took, where I learned to perform topic modelling of different literary themes with R. While it is not exactly the same, I think it could fall under mediated symbol-grounding, where "the link between the symbol and its referent is made by the brain of the user"?)

    Replies
    1. Audrey, your reflection on the symbol grounding problem stood out to me. I also find myself wondering about the extent to which abstract concepts are grounded in the same way as concrete ones, and how much of their meaning comes from experience versus culture and language. Harnad’s hybrid argument leaves me questioning whether, in the context of AI, symbolic processing always needs grounding first, or whether it could operate independently once enough structured categories are established. Although the paper makes clear that both grounding mechanisms and symbolic processing are necessary, I’m left wondering about the balance between the two. How much grounding is sufficient to support higher-level symbolic reasoning, and how much symbolic processing is needed to extend beyond sensorimotor categories into more abstract concepts?

    2. Audrey and Emily, good points, and good questions. See the Reply to Jesse above.

      (In general, please always read the Comments, and especially the Replies, in the skywriting thread before posting yours.)

      The answer is that enough direct grounding (through sensorimotor category learning) is needed so that the rest of the potential categories that exist (there are an infinite number) can all (in principle) be learned via indirect grounding through language alone.

      But this is true only as long as there is a dictionary (or encyclopedia or textbook or LLM or someone who already knows) to define or describe or explain (kid-sibly!) the features of the members of the new category that distinguish them from the members of other categories with which they could be confused.

      (This should remind you of the kid-sibly definition of “information” last week, with the vegan sandwich machine…)

      There is another important pre-condition for indirect grounding through words to work: definitions (or descriptions, or explanations) are true/false subject/predicate propositions (e.g., “an ‘apple’ is a round, red fruit”):

      To learn the category to which “apple” refers from just this definition, you have to already know what the content-words “round,” “red,” and “fruit” refer to: those words have to be already grounded (whether directly or indirectly) for the learner to be able to learn the new category indirectly from words (propositions) alone.

      Those prior content-words are the predicates of the proposition that defines the new content-word (”apple”). (This sample definition is a very weak definition, though: apples can also be green or yellow: verbal definitions reduce uncertainty, but only in mathematics and logic (and in stipulated or socially agreed definitions) do they reduce uncertainty to zero.)
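      A toy sketch of that precondition (illustrative only; the feature labels and the little lexicon are made up): a new content-word can be grounded indirectly only if every word in its definition is already grounded, in which case it inherits their grounding.

```python
directly_grounded = {                     # learned through sensorimotor categorization
    "round": {"round_shape"},
    "red":   {"red_surface"},
    "fruit": {"grows_on_plant", "edible"},
}

def ground_indirectly(defining_words, lexicon):
    """A new word inherits grounding only if all its defining words are grounded."""
    if not all(w in lexicon for w in defining_words):
        raise ValueError("definition uses ungrounded words")
    features = set()
    for w in defining_words:
        features |= lexicon[w]
    return features

lexicon = dict(directly_grounded)
# "An 'apple' is a round, red fruit."
lexicon["apple"] = ground_indirectly(["round", "red", "fruit"], lexicon)
print(lexicon["apple"])   # {'round_shape', 'red_surface', 'grows_on_plant', 'edible'}
```

      As noted above, such verbal definitions usually reduce uncertainty without eliminating it; the toy feature union is only meant to show the inheritance step.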

      More about this in the language weeks, 5, 6, 8, and 9.

      See: Vincent-Lamarre, P., Blondin Massé, A., Lopes, M., Lord, M., Marcotte, O., & Harnad, S. (2016). The Latent Structure of Dictionaries. Topics in Cognitive Science 8(3): 625–659.

  11. This comment has been removed by the author.

  12. I am interested as to how the content we have been discussing connects to more physical studies of the brain. We have seen through the Church/Turing Thesis that the physical details of the hardware are irrelevant, but do studies in neuroscience help us to ground the computational theories we have looked into? I’m curious as to why neurological research has not come into the conversation, especially in symbol-grounding. Are these problems truly independent?

    Replies
    1. I thought exactly the same thing as you when doing reading 1a, since I love studying psychology and neuroscience. I think Harnad (2005) mentioned that the neurological components of computation/cognition are, according to Fodor, irrelevant if they only consist of the hardware implementing a computational system or if they can be simulated computationally. Although I understand the concept of implementation independence for computation, I do not agree with the statement, because there are so many things we do not know about the brain, and I believe there are things a computational device could not do.

  13. “The root of the problem is the symbol-grounding problem”

    Past work has largely focused on the foundational question of "what constitutes computation?" – often framed in opposition to cognition. Yet, such work has failed to address the “in-between”, the underlying mechanisms that explain how and why humans are capable of cognitive tasks. Searle’s Chinese Room Argument further reinforces that computation does not rely on meaning but rather on the manipulation of arbitrary symbols based on an explicit set of rules (i.e., algorithms). However, as the professor notes, there remains a gap concerning what bridges computation and cognition – that is, how meaning arises in the first place. This suggests that computation may never truly “become” cognition, as that would require machines to be grounded in meaning (or in this case, sensorimotor experiences). This makes me question why the mechanisms of grounding have yet to be explored. How can they be studied? What are the limitations of the symbol-grounding problem?

    Replies
    1. Hi Grace, I really like how you make room for the in-between space here.

      I am also curious about grounding mechanisms and am looking forward to discussing the symbol grounding problem.

      A question I've been asking myself is how sure we can be that the shape and semantics of symbols are separable. Are we saying that they are for computers but not for us (or that we don't know yet for the latter hence the symbol grounding problem)?

      When I see “5” I know that the shape of this symbol doesn’t have to do with the quantity of five people standing in a room with me but I also know it refers to this quantity. We have many symbols corresponding to it “5, five, V,” etc. What about languages without numbers like Pirahã?

      Isn’t it circular in some way to say that the shape of a symbol is unrelated to the shape of the physical thing we assign it to? Mustn’t a symbol’s referent and form be somehow coupled, whether through our body or our use of materials as a conduit?

  14. So computation - the manipulation of arbitrary symbols by a certain set of syntactic rules (Harnad) - is not useful unless it is semantically interpretable (meaning you can derive a meaningful interpretation from it). This means that an interpretation of a sentence relies on our interpretation of the words (symbols) being manipulated by the compositional semantic rules (algorithm/rules) of a language. The individual symbols may have a meaning attached to them, but this meaning is not part of the symbols and thus is not technically part of computation. So is it just our lack of understanding of how the words connect to their meanings that is a problem for cognition as computation (in this context)? And what does it mean if the computation generates something semantically uninterpretable, i.e., not useful?

    Replies
    1. Emma, according to Harnad, computation itself is not sufficient to explain cognition, as symbols need to be grounded in the world through sensorimotor processes (like touch or sight) to derive meaning from them. That said, you are mixing up two things based on the questions you asked. Humans do understand the meaning of words we know by interacting with their real-life referents. The issue is that computation inherently cannot ground symbols in the real world, as it operates strictly on syntax without connecting them to the things they are meant to refer to. Moreover, if a computation generates something that is semantically uninterpretable (or not useful, as you say), it means that its output has no purpose in explaining the internal causal mechanisms that allow us to understand things, which is primarily what cognitive science is trying to answer.

    2. Gabriel, thank you for your reply. I think I understand it better now; computation can't connect words to their meanings (or ground them in the world through sensorimotor processes). So it’s not a problem of how we get computation to do this - it’s a problem that computation by nature can’t overcome. Also, thank you for the clarification on what semantically uninterpretable outputs mean.

  15. I think what I struggled with understanding in this article the most was ‘dynamic processes’. What I understand now is that a dynamic process is an ongoing physical process in the brain that does not rely on symbol manipulation rules, whereas a computational process follows rules based on symbols step by step. As the article points out, understanding how these two processes work together to ground symbols and give them meaning is important in understanding how to possibly overcome the homunculus problem: the idea that when asked ‘how does someone imagine something’, the answer is ‘a little man in their head sees it’, leading to an infinite regress. In my view, an AI’s way of ‘grounding’ is through statistics, as machine learning systems take data and then make predictions based on the statistical regularities of the data in our world. This feels similar to how humans learn and find meaning in words. For example, if I’ve only seen cats with whiskers, I would assume that all cats have whiskers, and this would shape the meaning of ‘cats’ and ‘whiskers’ for me. To me, what separates the two is that statistical regularities do not provide the same grounding as real-world sensorimotor experiences (dynamic processes), which are uniquely human.

  16. "Naming things is naming kinds (such as birds and chairs), not just associating responses to unique, identically recurring individual stimuli" (Harnad)

    This line caught my attention because it is where the reading highlights why behaviourism alone can’t explain how we learn words. Children don’t just pair sounds with single objects; they figure out the shared features that define a category, like what makes something a “dog” even when dogs differ in size, color, or shape. That process goes beyond rote association and demands internal mechanisms for categorization. This connects to Hebb’s critique that behaviourism skips over the “how” of learning, and to Chomsky’s point about universal grammar showing that input alone is insufficient. It also helps explain why mental imagery or pure computation falls short: both risk circularity, relying on an inner “little man” to interpret symbols. Cognition, then, might need a more hybrid understanding: partly computational but also dependent on the dynamic embodied processes that give symbols meaning.

  17. I wonder if a robotic Turing Test could ever fully answer the question of where meaning comes from in our brains, even if robots "experienced" sensorimotor interactions. It seems there are certain kinds of experiences a robot may never have because, as we know, a problem for robots and AI is that, even with grounding in sensorimotor data, they cannot truly “feel”, which leaves a gap between machine cognition and human cognition. For example, when you asked us to recall our 3rd grade teacher’s name, I remembered mine not just because she was an elementary school teacher, but because of how she made me feel with her empathy and kindness in a difficult situation, which shaped the way I categorized her in my mind. In contrast, I can’t recall the name of my 4th grade teacher because I never had any truly personal experiences with her. I think this suggests that meaning for humans is tied not only to perception and categorization, but also to subjective and emotional experiences. Does subjective experience play a large role in grounding meaning and could it be as important as sensorimotor interaction when trying to answer the question about where meaning comes from?

    Replies
    1. Hi Sannah, I really appreciate you tying feeling into this conversation. I have been asking myself, related to the easy and hard problems of cognitive science, if we can be sure that feeling and thinking are not the same phenomena. As we've been saying, we know what it feels like to know something, what it feels like to think.

      Does the hard problem exist only if we begin by explaining the processes of thinking, by solving the easy problem? How would these problems and endeavors of cognitive science shift if we treated thinking as a form of feeling, and instead of defining cognition as thinking, defined it as feeling, of which thinking may be a subform?

  18. “Beware of the easy answers: rote memorization and association. The fact that our brains keep unfailingly delivering our answers to us on a platter tends to make us blind (neurologists would call it “anosognosic”) to the fact that there is something fundamental there that still needs to be accounted for.”
    To me, this passage points out how it’s easy and intuitive to think of learning as just memorizing or linking things together. Sure, our brains can come up with answers quickly, like the example we talked about in class about remembering a 3rd grade teacher’s name, but that doesn’t mean the process behind it is simple. This passage reminds us that the real challenge is figuring out what’s happening inside our brain that makes these quick answers possible. The point is that “easy answers” hide the complexity, and our job is to uncover the hidden processes that actually power our thoughts.

  19. Chomsky’s notion of ‘poverty of stimulus’ says that while children are learning language, they simply do not receive enough feedback and data from their environment to account for all the linguistic rules they internalize, suggesting an existing bank of knowledge from birth. From the reading, we know that this applies to syntax and vocabulary, but I wonder if it could apply to other concepts like math. For example, could a baby understand that a greater volume of mush in their bowl means they get to eat more? And does this imply an understanding of greater/less than? (Perhaps this is a little off track from the main takeaways of the reading, but a thought that intrigued me regardless)

  20. “Searle thought the culprit was not only the insufficiency of computation, but the insufficiency of the Turing Test itself; he thought the only way out was to abandon both and turn instead to studying the dynamics of the brain. […] We cannot prejudge what proportion of the TT-passing robot’s internal structures and processes will be computational and what proportion dynamic. We can just be sure that they cannot all be computational, all the way down.”

    Another approach to studying cognition is studying the dynamics of the brain. The key to understanding dynamics may be to study the cognition of individuals with “non-normative” cognition, such as Dissociative Identity Disorder (DID) patients (i.e., subject a DID patient to several cognitive tasks and evaluate the variability in responses for each identity – what is consistent and what is variable). Because individuals with DID think differently depending on the identity that is “in control”, their cognitive processes change. Understanding how that shift in cognition/thinking patterns affects their responses may help us understand which processes are dynamic and sub-cognitive, and which are entirely computational. Is there any reason why this angle of study wouldn’t bridge the knowledge gap?

  21. Pylyshyn suggests that the mind and cognition act as software, whilst the brain acts as hardware – suggesting that “the physical details of the hardware are irrelevant”. So long as there are the sufficient parts of the hardware to carry the computational activities of the software, the cognitive activities may carry on. An issue with this seems to be that it cannot adequately account for the influence of cognition on neurobiology (e.g. neuronal pathway reinforcement through repeated action, behaviour or thought pattern), as software does not generally have an impact on the topology of the hardware. This presents a problem for the computational view of cognition, as it suggests that this view needs to be modified to account for a greater relationship between cognition and neurobiology.

    Replies
    1. Hi Sofia, I appreciate the insight your comment has on the relationship between the dynamic and formal aspects of thinking. Does this relate to Zenon's Cognitive Impenetrability for you, since cognition, and knowing about it, can change the neurobiology of the thinker?

      It interests me how the dichotomies of dynamic/formal and analog/digital, and the image they draw of a physical world, relate to the symbol grounding problem and the struggle to grasp the seemingly transitory processes from what is tangible and visible and that which is liminal or invisible yet felt to us.

  22. “Computation is rule-based symbol manipulation… useful only if it is semantically interpretable.”
    If meaning comes from grounding, then maybe LLMs aren’t “cheating” so much as missing a body. Instead of just adding more text, what if they could sense and act in the world, linking words to real experiences? The question then is: what’s the simplest loop (see, act, name, test) that could turn symbols into grounded categories? That feels like a clearer research challenge than arguing in circles.
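    One possible reading of that see/act/name/test loop, as a purely illustrative sketch (the features, the corrective feedback, and the update rule are all assumptions): a learner names what it senses, tests the name against feedback, and revises the category when it is wrong.

```python
def grounding_loop(experiences, categories):
    """experiences: (sensed_features, correct_name) pairs coming from the world."""
    for features, correct_name in experiences:           # SEE / ACT: sample the world
        guess = max(categories,                           # NAME: best-matching category
                    key=lambda c: len(categories[c] & features),
                    default=None)
        if guess != correct_name:                         # TEST: corrective feedback
            categories.setdefault(correct_name, set())
            categories[correct_name] |= features          # revise the category's features
    return categories

world = [({"furry", "meows"}, "cat"),
         ({"furry", "barks"}, "dog"),
         ({"furry", "meows", "whiskers"}, "cat")]
print(grounding_loop(world, {}))   # {'cat': {...}, 'dog': {...}}
```

    (Whether such a loop, without real sensorimotor transduction, would count as grounding is exactly the question at issue.)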

  23. This comment has been removed by the author.

  24. “The gist of the Turing Test is that on the day we will have been able to put together a system that can do everything a human being can do, indistinguishably from the way a human being does it, we will have come up with at least one viable explanation of cognition.” This passage raises a question: does mimicking human behavior truly equate to understanding how cognition works?
    Consider this: sometimes when we combine two things (”inputs”) the result isn’t their sum, but something more. Think of it as 2 + 2 equaling 5, not due to error, but because the interaction between the elements produces an emergent effect, a chain reaction. We might refer to these effects as "synergies", even though that’s a weasel word. In cognitive science it is those “synergies” (interactions) that we seem to be interested in and want to explain. Assuming, for the sake of argument, that cognition depends on such complex, dynamic synergies, then I fail to understand how programming a machine could lead us to a genuine explanation of cognition. I think this relates to, and I agree with, Searle’s “refutation” (or at least skepticism) about the cited passage.

  25. “Searle thought the culprit was not only the insufficiency of computation, but the insufficiency of the Turing Test itself;”

    Searle’s thought experiment certainly made me skeptical of the Turing Test (TT) as an explanation for cognition, but in the process it did not convince me that computation cannot explain cognition. It seems fairly clear from Searle’s argument that the TT can be passed without properly accounting for the internal processes of cognition, but then, if the TT is not an adequate explanation for cognition, how can passing it also disprove the ability of computation to explain cognition? If, as Searle seems to suggest, the complexities of cognition are simply outside the scope of the TT, then wouldn’t passing it (regardless of the internal functions involved) be insufficient to prove or disprove any explanation of cognition? How can it disprove both the TT and C=C at the same time? Could there not be a more complex explanation, not yet understood, involving computation and some biological machinery to account for the symbol-grounding problem?

  26. Some believed the brain was simply a computer that followed rules, like a recipe. "So why not do it all with symbols and rules instead?" (Harnad, 2009). Pylyshyn adopted that stance, stating that cognition could be explained without pictures or sloppy brain details. But Searle's Chinese Room showed that pure rule-following symbol manipulation is not the same as true understanding. The Pylyshyn video makes clear why this was debated: can images in the head be replaced with code? Harnad argues that real thinking involves both symbols and sensorimotor anchorage. Turing had already guessed that it wouldn't work like that.

  27. "The only way to do this, in my view, is if cognitive science hunkers down and sets its mind and methods on scaling up to the Turing Test, for all of our behavioral capacities"

    I think this is the part that sticks with me because I am unsure of how one would even approach this. If you were to create a robot analogue that took in the information from what is happening around it and then acted accordingly, how would you even know it is able to do so by itself, automatically? I would tend to agree that it is not computational all the way down, but I am unsure whether even an upgraded Turing-type test would be able to tell. It could also just be part of the hard problem of Chalmers that we discussed in class, and we can never truly explain why something is happening or what it "feels" like.

  28. Computation alone does not explain how computations have acquired meaning, which is why we cannot equate computation with cognition. We can manipulate symbols following a rule (carry out computations) even if we don't understand what the meaning is, as illustrated by Searle's Chinese Room argument: computations create interpretable results, but whether the results are interpretable depends on the user; interpretation is not generated by the system of computations itself. “How can the symbols in a symbol system be connected to the things in the world that they are ever-so-systematically interpretable as being about?” Right now, nonliving things like robots don't have cognition, because they are only able to carry out processes due to computations and don't have the sensorimotor experiences that would allow them to do more than carry out processes based on symbol manipulation. Until we can build a system that can incorporate its own sensorimotor interactions and experiences, we cannot argue that computation is cognition.

  29. “Zenon, in rightly resisting the functional question-begging of imagery theorists in favor of goods-delivering computational explanation, went a bit too far, first denying that noncomputational structures and processes could occur and explain at all, and then, when that proved untenable, denying that, if they did, they were ‘cognitive.’”

    From what I understand, Pylyshyn’s strength was that he pushed cognitive science toward real explanations based on rules and computations, instead of vague appeals to mental images. But he may have set limits that were too strict by denying the role of noncomputational processes. For example, tasks like mental rotation suggest that our minds rely on continuous, analog processes that can’t be reduced to simple symbol manipulation. If these processes help us perform actual cognitive tasks, then excluding them seems like ignoring part of the evidence. To me, this shows that computation is crucial, but it has to work alongside dynamic processes in order to fully explain how people think and solve problems.

  30. 'But if symbols have meanings, yet their meanings are not in the symbol system itself, what is the connection between the symbols and what they mean?'

    I think this passage perfectly illustrates the symbol grounding problem, which is used to show that computation alone is insufficient for cognition. In fact, symbols don't have meanings in themselves, which means that cognition cannot be explained by computation alone, since symbol systems depend on an external source for interpretation (which is usually us). However, I wonder how (if it is possible at all) AI systems could be designed to overcome this limitation in order to achieve genuine understanding, rather than simply symbol manipulation.

  31. This comment has been removed by the author.

  32. “The gist of the Turing Test is that on the day we can put together a system that can do everything a human can do, indistinguishably, we’ll have come up with at least one viable explanation of cognition.”
    Harnad redefined “explanation”: from asking why cognition works to showing how it can work in practice, conceptually shifting from speculation to construction. The following warning still stands: at the end of the day, performance isn’t explanation. Behaviorist interpretations mistook outputs for causes, and computation faces the same challenge if passing T2 or T3 becomes the end goal.
    If cognition is both computational and sensorimotor, the mystery is how those layers integrate causally. Grounding ascribes meaning to symbols through direct experience; once grounded, language can generalize and complexify those meanings. Testing this integration would mean comparing a purely computational T2 system to one coupled with sensory feedback, to see whether linguistic coherence is improved by grounded interactions. The challenge, then, is not just to build machines that act like us, but to design an architecture that makes that behavior necessary rather than accidental.

  33. This discussion on computation has genuinely changed how I approach the idea of intelligence, specifically when I think about the mind as a piece of software. I started wondering: if our thought processes are essentially algorithms, why do individuals rely on completely different internal strategies to complete the same task?

    For instance, when we were asked in class to recall an elementary teacher, many people began with a visual memory before retrieving the name. But for individuals who are blind or who have a condition like aphantasia, that visual step is impossible. They must use alternative pathways, maybe auditory cues, emotional connections, or descriptive language, to reach the same outcome as everyone else.

    These different approaches challenge the concept of "Strong AI" from the reading, which suggests that the thinking program should be separate from the physical hardware. The variation we see proves that the physical system is crucial. For our internal symbols, like a name, to have real meaning, they must be "grounded" in some sort of sensory experience. Since each person’s sensorimotor reality is unique, the cognitive steps required to sort and identify that person (essentially categorization) must also be unique.

    Therefore, the supposed mental "algorithm" is not a universal script; it’s a flexible, personalized pathway based on the unique structure and inputs of our body. The mind appears to be a hybrid system where our physical reality shapes our function, making these variations in our thinking a reflection of our mind's uniqueness.

  34. In the paper, the question of how we can perform actions that do not have a computational role is raised. An example of an action that, according to the article, supposedly lacks a computational rule–but that we still have the ability to carry out–is learning from experience. However, I beg to differ. I believe that reinforcement learning, or the process of encoding decisions toward the optimal result, is essential to learning from experience.
    For instance, take the following case study: You are stranded on a deserted island with nothing to eat. To save yourself from starvation, you decide to harvest the native mushrooms, but having little botanical knowledge, you don’t know which ones are poisonous. Here is where I believe computation plays a role. You could harvest one of each mushroom and ingest a small amount to test for harmful effects. If you can eat the mushroom without becoming sick, then it is safe; however, if ingesting the mushroom causes illness, you can conclude it is poisonous.
    Computation can be boiled down to symbol manipulation, where a state is represented by a set of symbols, and algorithms allow us to traverse between these states. In this case study, the symbols or states can be defined as “1” if the mushroom is safe and “0” if it is poisonous. The algorithm that transitions between states can be thought of as an ingestion-to-illness process, where the outcome (1 or 0) depends on whether illness occurs after ingestion. Therefore, I argue that even actions like learning from experience inherently involve computational processes, as they rely on updating internal states based on feedback to achieve better outcomes.
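    A minimal sketch of that case study as symbol manipulation (illustrative only; the mushroom kinds and the feedback function are invented): each kind gets an internal state, updated from the outcome of a test bite.

```python
import random

def got_sick(kind):
    """Stand-in for the world's feedback after a small test bite."""
    return random.random() < 0.3      # assumption: roughly 30% of kinds are poisonous

def learn_from_experience(kinds):
    states = {kind: None for kind in kinds}          # None = untested
    for kind in kinds:
        states[kind] = 0 if got_sick(kind) else 1    # 1 = safe, 0 = poisonous
    return states

print(learn_from_experience(["red-cap", "brown-cap", "spotted"]))
# e.g. {'red-cap': 1, 'brown-cap': 0, 'spotted': 1}
```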

  35. I found it very interesting that despite his strong views on the limitations of computation in replicating cognition, Harnad still argues a robot could eventually mediate the connection between its internal symbols and the external world by itself. It made me wonder how that would actually work. If a robotic system is still an inherently symbolic system, how could it ever “ground” itself in the world instead of relying on human interpretation? Would achieving this require building a system that develops and learns the way a child does, by forming connections through experience and interaction? And even then, wouldn’t it still just be manipulating symbols according to rules? I’m curious whether a robot could ever really interpret the world autonomously and see meaning in things rather than just simulating understanding.

  36. “The link between the symbol and its referent is made by the brain of the user”

    Meaning arises out of learned associations between a symbol and a referent. We can associate a content word like ‘dog’ with its referent: a dog in the material world, when we see and touch one. We learn what distinguishes a dog from other things when touching/seeing a dog generates feedback that can answer yes/no questions about what a dog is and is not. Through this process, we learn distinguishing features that are unique to the object, turning it into a semantically interpretable category which can be used to derive meaning for other content words.

  37. ***EVERYBODY PLEASE NOTE: I REDUCED THE MINIMUM NUMBER OF SKYWRITINGS. BUT THE READINGS ARE **ALL** RELEVANT TO AN OVERALL UNDERSTANDING OF THE COURSE. SO, EVEN IF YOU DO NOT DO A SKYWRITING ON ALL OF THEM, AT LEAST FEED EACH READING YOU DO NOT READ TO CHATGPT AND ASK IT FOR A SUMMARY, SO YOU KNOW WHAT THE READING SAID — OTHERWISE YOU WILL NOT HAVE A COMPLETE GRASP OF THE COURSE TO INTEGRATE AND INTERCONNECT FOR THE FINAL EXAM.***

