Wednesday, August 27, 2025

10c. Harnad, S. (2012) Alan Turing and the "hard" and "easy" problem of cognition: doing and feeling

Reading: Harnad, S. (2012) Alan Turing and the "hard" and "easy" problem of cognition: doing and feeling. [in special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue

Instructions for commenting: Quote the passage on which you are commenting (use italics, indent). Comments can also be on the comments of others. Make sure you first edit your comment in another text processor, because if you do it directly in the blogger window you may lose it and have to write it all over again.

21 comments:

  1. ***EVERYBODY PLEASE NOTE: I REDUCED THE MINIMUM NUMBER OF SKYWRITINGS. BUT THE READINGS ARE **ALL** RELEVANT TO AN OVERALL UNDERSTANDING OF THE COURSE. SO, EVEN IF YOU DO NOT DO A SKYWRITING ON ALL OF THEM, AT LEAST FEED EACH READING YOU DO NOT READ TO CHATGPT AND ASK IT FOR A SUMMARY, SO YOU KNOW WHAT THE READING SAID — OTHERWISE YOU WILL NOT HAVE A COMPLETE GRASP OF THE COURSE TO INTEGRATE AND INTERCONNECT FOR THE FINAL EXAM.***

  2. In this essay, Harnad argues that Turing's goal with the Turing Test was to explain how we can do what we can do. In other words, he was interested in the causal mechanisms that allow human beings to "cognize," while being aware that generating observable capacities is not equivalent to feeling. Thus, Turing was trying to solve the "Easy Problem," knowing that sensorimotor capacities were essential to it. With Searle, we learn that computation alone cannot explain cognition, because it feels like something to understand, and that particular feeling was not found when manipulating symbols on the basis of their shape. How and why do we feel that way? How and why do we feel at all? Solving the "Hard Problem" would mean explaining how and why we feel. Inspired by Descartes' "Cogito," Harnad says that we can doubt everything; the only thing we know with certainty is that "what it feels like right now is what it feels like right now."

  3. I believe this paper is a good summary and overview of the material covered during the semester. It is helpful to group and link everything from computation through categorization to the hard problem in one essay, letting us test our understanding in "one flow". My main takeaway is how interconnected the concepts are. Indeed, that interconnectedness is the essential argumentative flow of the essay: the pursuit of a scientific explanation of human doing (the TT) revealed the insufficiency of computation alone (via Searle's CRA), necessitating sensorimotor grounding (categorization via direct grounding); yet even this successful explanation of doing (the easy problem) does not solve the mystery of feeling (the hard problem). My only criticism is that the paper does not cover the topic of language, but then again, maybe that is because language is just part of the easy problem, not the hard problem.

  4. “Turing’s Test solves the easy problem — explaining what we can do — but it leaves untouched the hard problem of why doing is accompanied by feeling.”

    This passage is interesting to me because it points to something unique about humans: we don’t just do things — we feel complex feelings. Turing argued that if a machine can do everything we can, we should treat it as intelligent, but he never said this would explain feeling or that passing the Turing Test means the machine is conscious. This misunderstanding highlights that there is some kind of consciousness or “thing” in us that gives rise to feeling, which sensorimotor abilities alone can’t produce. I also wonder how God and faith fit into this. If AI can eventually do everything we can do but never shows evidence of truly feeling, that might point to a Creator who gave humans and animals this capacity, rather than us being able to create it ourselves.

    Replies
    1. I agree that passing the Turing Test never settles the question of feeling, but what your point made me think about is a different challenge. Harnad argues that even if we built a perfect T3 robot, we still have no method for detecting feeling in anything except ourselves. So the gap is not only about what causes feeling. It is also about how we could ever know if another system feels.
      Because of that, I think the God question cannot be answered just by looking for “evidence” of feeling in AI. If feeling is always private, the mystery remains open for both machines and humans.

    2. I really enjoy both of your takes. I think this paper really focuses in on how behavioural capacity does not guarantee subjective experience. Harnad's argument pushes us to recognize that explaining cognition is not the same as explaining consciousness, something he homed in on in the first few weeks and that we struggled to grasp. This paper seems to tie everything together, emphasizing that scientific progress on doing may never bridge the gap surrounding feeling itself.

  5. Harnad uses Descartes’ famous idea of "Cogito" (that I cannot doubt I am thinking while I am doing it) to support Searle’s argument against the Turing Test. He points out that we know we are conscious not because of how we behave, but because of how we feel inside.

    The paper suggests that the only thing we can be 100% sure of is our own internal feeling, yet most science is entirely based on observing external behavior. Even in the case of the TT, we are observing output/external behavior. This leads to a discouraging realization: if the only true proof of consciousness is private (as Descartes holds), then a public science of the mind is impossible. As soon as we try to measure consciousness by what a robot does, we stop studying the feeling itself and study only the programming and the output. We are trying to use an output to measure an "inside" experience.

    Replies
    1. Yes, if the only thing we can ever be certain of is our own subjective feeling, as Descartes insists, then the entire scientific study of the mind seems doomed and pointless from the start. Harnad fully acknowledges this paradox, but I don't think he uses Descartes' Cogito to support Searle's attack on the Turing Test. Instead, I think he uses it to clarify what the Turing Test can and cannot do.
      Science is built on public, observable evidence, and conscious feeling, by definition, is private. Turing understood this long before consciousness became a mainstream philosophical problem: we can study what organisms do, but never directly what they feel. Where I diverge slightly from your reading is that Harnad isn't siding with Searle to undermine the TT; he's showing that Searle's critique only succeeds when it targets the wrong thing. If the TT were meant to detect consciousness, Searle would be right: no behavioral test could ever reveal what it 'feels like' to be a system. But I don't think that was ever Turing's goal.

  6. I think this may have been one of my favorite reads thus far, as it provides a nice overview of the key points of this course – from the Turing Test to Searle's CRA to the Symbol Grounding Problem. I think one of the most interesting lines in the paper was the following: “The successful TT passing model may not turn out to be purely computational; it may be both computational and dynamic; but it is still only generating and explaining our doing capacity. It may or may not feel.” Even a perfectly passing T3 robot, with symbols grounded in sensorimotor experience, language ability, human-like behaviour, etc., may be void inside: no feelings, experiences, or consciousness. And this is at the very crux of possible future ethical dilemmas society may face with advanced AI models. If we do create robots that pass all T3 tests (but we remain uncertain whether they possess consciousness), is shutting down such a system murder, or merely turning off a machine? On what basis can we make the consciousness determination if it is never observable from the outside…?


  7. The easy problem is why and how we do what we do, while the hard problem is why/how we feel what we feel. Is there a distinct boundary between doing and feeling? There is a difference between our external behaviours and our inner feelings, but what about our thoughts? If I’m making a plan for dinner in my head maybe that’s more ‘doing’, but what if I’m having an intense emotional reaction to something and all I can think is “#@$%&” (I’m not exactly sure how to phrase this in actual language)? Is that doing or feeling? And would a T3 robot do this?

    Replies
    1. Emma, I had the same question about whether doing and feeling are entirely distinct! As you mention, boundaries seem clear when considering doing as something observable or measurable like external behaviours, actions, or even cognitive functions. However, the lines become blurred when we consider feeling, especially in cases where thinking is not just a cognitive process but also feels like something (i.e., how it feels like something to think). Whether or not a T3 robot could feel remains undetermined, though improbable. As Harnad (2012) notes, the Turing Test was designed to address the easy problem—explaining how and why organisms do what they do. A T3 robot would likely achieve functional grounding and do whatever humans can do, but it would still be unable to feel as humans do.

    2. I'd like to say that I think Harnad was outlining a firm boundary between doing and feeling when he discusses Searle's Chinese Room: Searle can do all the right symbol manipulations to pass the test, but he knows definitively that he does not feel understanding of Chinese. So even when the "doing" is perfectly correct, the feeling is still not there. The dinner-plan example you gave, Emma, was interesting, but I think Harnad could still say it's part of the easy problem, because it is about decision making, which is ultimately something we do, even if it's internal. The emotional reaction might feel like something, but the cognitive process of planning itself is more of a doing capacity, in my opinion. I also think it's almost unanswerable whether the T3 robot would feel; maybe it's exactly as Descartes suggests, that feeling is in a completely different category from doing.

  8. This piece weaves together many of the concepts we’ve covered this year, showing how they interact and build on one another. Harnad draws on the Turing Test, the easy and hard problems of consciousness, computation, the symbol-grounding problem, and Searle’s Chinese Room argument to illustrate the crucial distinction at the heart of cognitive science: explaining how we do what we do versus explaining why and how we feel. As we have rigorously discussed throughout the course, the Turing Test represents a methodological shift, proposing that we explain cognition by designing a system that can act indistinguishably from a human. Although this approach allows us to evaluate a model's capacity to do, a computational system passing the test still lacks genuine understanding. Connecting symbols to their referents requires that symbols be grounded in sensorimotor experience (this connection of symbols to their referents is then kept in our heads), through perceiving, categorizing, manipulating, and interacting with the world. We are thus left with the conclusion that although the easy problem is solvable, the hard problem of consciousness remains.

    Replies
    1. I really like how you traced the connections among Turing, Searle, grounding, and the easy/hard problem and I think that’s exactly what Prof Harnad is doing in this piece. What the reading adds, though, is his sharper claim that even once we integrate grounding and robotics into a Turing-scale model, we’ve still only explained the doing. Prof Harnad argues that the Turing Test was never meant to solve feeling, only to define the limits of scientific explanation. So while the easy problem may be solvable in principle, the reading pushes further: grounding and successful TT-performance don’t bring us any closer to explaining why any of it is felt rather than merely performed.

  9. This reading succinctly explains the hard and easy problems and ties together many of the ideas covered in the course. Thinking is not something that can be observed externally; it happens internally and is difficult to measure. The easy problem is explaining how and why we do what we do. Turing thought it could be solved using the Turing Test: building a model that can do everything a real human can do. Once this is modeled, we would have explained cognition. But the CRA showed that computation alone is not enough; meaning requires grounding. So a true TT-passing system requires sensorimotor capabilities. Yet even if we solve the easy problem, we still have not solved the hard problem of why we feel. Only a person themselves can know whether feeling is happening at all. So solving function using the TT does not explain feeling.

  10. Turing’s real contribution was not a claim about what minds are, but a claim about how we should study them. Turing basically replaced the question “what is thinking made of” with “what capacities does thinking let us exercise.” Harnad extends this by pointing out that this shift lets cognitive science avoid guessing about hidden inner mechanisms and focus instead on building systems that actually work. What struck me is that this methodological pivot already shapes everything the field sees as a legitimate answer. By making “doing” the criterion for explanation, we may unintentionally be building a science where feeling is excluded not because it is mysterious, but because our starting method silently filters it.

  11. 10.c. Harnad splits cognition into “doing” and “feeling,” arguing that Turing solved only the easy problem: explaining our capacities. He’s convincing that passing the Turing Test can never guarantee or explain feeling, and that symbol grounding + robotics still don’t touch the hard problem. But saying the hard problem is “perhaps insoluble” feels like surrender. If we stop at pessimism, we risk freezing progress. Maybe understanding feeling requires new theories, not just expanding Turing, nor dismissing consciousness as causally irrelevant. Sometimes pushing past “insoluble” is how science moves.

  12. Echoing other peers, I enjoyed that this paper brought up concepts that we were introduced to all the way at the beginning of this course. After all, we’re once again looking at the Turing Test and confronting the argument that cognition cannot be reduced to mere formal symbol manipulation. At least, not entirely.

    In this reading, Harnad emphasizes the distinction between the “easy” problem (doing) and the “hard” problem (feeling). Naturally, because our current models are not grounded through sensorimotor experience, they cannot go beyond simple textual exchanges. This irony reminded me of a paper I read for another class on AI, where we touched on Moravec's paradox, which contends that what we think is hard (like complex calculations) is easy for machines, while what is hard for machines is what we consider “easy” (like the perception and mobility of a one-year-old). This distinction reveals that these limitations are not just material. Specifically, there is a fundamental lack of the cogito, a subjective understanding, that underscores a deeper epistemological difference between humans and machines.

  13. As we’ve also been discussing during class, this paper on Turing really clarified the difference between the easy problem of cognition (which is about doing) and the hard problem (which is about feeling or consciousness). The easy problem is trying to explain our performance capacity, how we can do what we can do, which is the functional focus of the Turing Test. But just successfully explaining the doing doesn't touch the feeling. As Harnad points out, even if scientists could design a perfect machine that acts indistinguishably from a human (a successful TT model), that model would only be generating and explaining our doing capacity. The successful system may or may not feel. The problem of consciousness is the much harder question of explaining how and why we feel at all. The feeling part is unique because consciousness is a matter of subjective experience, and even a system that perfectly mimics behaviour (doing) might still be lacking that feeling.

  14. This paper provides the rationale behind the claim that the T3 test is the correct Turing Test, and that a (proper) T2-passing robot would actually need to pass T3 first. Professor Harnad refers back to Searle’s Chinese Room Argument to drive home the point that a T2-passing robot without sensorimotor capabilities could not ground the words it exchanges in the referents those words refer to. Therefore, a T2-passing robot could not understand and think like organisms do, which would make it an underdetermined model of cognition. However, T4 is overdetermined, given that many functions in the brain modeled by T4 are not necessary for cognition, which makes the T4 test not the ideal test either. The conclusion one could come to is that Alan Turing designed the Turing Test to be passed by a T3 robot, which would model cognition in its entirety. Nevertheless, we know that this wasn’t Turing’s intention for the Turing Test. In reality, he meant that explaining doing power via the Turing Test and its passing model would be the closest we could get to modeling cognition, as there doesn’t exist any test that can explain how or why we feel: in explaining doing power, we’ve run out of degrees of freedom to pursue in explaining feeling.

  15. TT is fundamentally an assessment of a system's doing capacity (EP), requiring T3-indistinguishability in every functional and cognitive respect, including complex behaviors like planning and a sense of time. However, the TT cannot verify feeling because the certainty of conscious experience is inherently first-person, as demonstrated by Searle’s Chinese Room, where he performed the doing without the accompanying understanding or feeling. This inability to test for consciousness confines cognitive science to the reverse-engineering of doing, which is the closest it can scientifically come to addressing the OMP, while leaving the HP unaddressed, because indistinguishable behavior (T3) does not guarantee indistinguishable, or even present, subjective experience.

