10b. Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem
Reading: Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem.
Instructions for commenting: Quote the passage on which you are commenting (use italics, indent). Comments can also be on the comments of others. Make sure you first edit your comment in another text processor, because if you do it directly in the blogger window you may lose it and have to write it all over again.
***EVERYBODY PLEASE NOTE: I REDUCED THE MINIMUM NUMBER OF SKYWRITINGS. BUT THE READINGS ARE **ALL** RELEVANT TO AN OVERALL UNDERSTANDING OF THE COURSE. SO, EVEN IF YOU DO NOT DO A SKYWRITING ON ALL OF THEM, AT LEAST FEED EACH READING YOU DO NOT READ TO CHATGPT AND ASK IT FOR A SUMMARY, SO YOU KNOW WHAT THE READING SAID — OTHERWISE YOU WILL NOT HAVE A COMPLETE GRASP OF THE COURSE TO INTEGRATE AND INTERCONNECT FOR THE FINAL EXAM.***
Professor Harnad's critique of Dennett and Chalmers points to the flaws and circular reasoning in their arguments. Dennett proposes explaining consciousness by distancing oneself from subjective, potentially misleading experience and supplanting it with so-called 'objective' measurements. This essentially demotes feelings, which makes no sense in an attempt to tackle the hard problem, which seeks to explain how and why we feel. On the other hand, Harnad argues that Chalmers' philosophical zombies, albeit an interesting analogy for the hard problem, do not help close the explanatory gap between neural information processing and feeling.
So, where do we go from here? I for one believe they are all wrong in the first place in trying to tackle a problem that does not exist. But to Professor Harnad I ask: what is the nature of feelings? Are they immanent in the brain, or do they transcend it?
Hi Camille, yes, this valuable commentary on the debate between Daniel Dennett and Chalmers makes a very important distinction. Harnad reframes the central issue of consciousness, arguing that it is not about cognition, representation, or knowledge access, but about feeling itself. Heterophenomenology does not explain feeling; it only predicts behaviours and reports. The zombie argument, meanwhile, is misguided because it focuses on the hypothetical possibility of physical duplicates without feelings, neglecting the deeper question of why we aren't zombies. I think that rather than there being a clear "where do we go from here," the upshot is that causal-functional explanations work for every natural phenomenon except feeling.
I find Harnad's refutation of Chalmers' philosophical zombies enlightening. Since the goal is to find a causal explanation of feeling, just as we have done for many other phenomena, why would we treat feeling differently? His example of how insensible it would be to suggest that "a molecule-for-molecule identical moon could fail to have gravity" really clarified that for me. On the "C Team," where Professor Harnad positions himself, the appropriate course of action, mirroring what he believes Turing should have concluded, is to focus research on performance capacity and functional modelling. This seems to be an alternative (second-best option), since he believes feeling is not causally/functionally explainable. But my question is: isn't the goal of functional modelling to reverse-engineer to the point of "replicating" a sentient system? Wouldn't that tell us a lot about sentience and feelings, and their functional explanation?
I agree that Harnad’s “moon without gravity” analogy clarifies why he rejects Chalmers-style zombies. But what’s striking is that, despite Dennett placing him in Type-B/B Team, Harnad puts himself in Team C: he accepts that feelings are real but denies we have any causal explanation for them. Functional modeling can replicate every capacity of a sentient system, but for Harnad that still won’t explain why those functions are felt rather than performed “zombily.” Reverse-engineering tells us how doing works, not why doing is accompanied by feeling.
Hello Sannah, I agree with your comment. The reverse-engineering method does not answer why sentient systems feel; it only explains how they execute their actions systematically. To replicate a sentient system perfectly, an explanation of why we feel, that is, a solution to the hard problem, would be required. We could make a T3 "zombie" undergo nociception by pulling the required strings in the brain, effectively reverse-engineering nociception, which is correlated with pain. That said, it would not answer Harnad's question: why do we feel? Additionally, this process would resemble Dennett's heterophenomenology, since it only studies the surface correlation between behaviours and feelings, which Harnad considers a faulty practice. Anyhow, Harnad's alignment with Team C, which holds that there is no causal explanation for feelings, indicates, in my opinion, that reverse-engineering would in fact not be enough to obtain a causal explanation of feelings.
“The mind/body problem is really the feeling/function problem.”
This line helped me understand Professor Harnad’s main point: he thinks most philosophers talk about “consciousness” in a way that mixes too many things together (thinking, understanding, reporting, knowing) when the real mystery is simply why anything feels like something at all.
I like the clarity of this framing. It reminds me that the mind/body problem is not about whether we can explain behaviour (we can) or whether we can build systems that act intelligently (we already do). The real issue is explaining why doing anything needs to be accompanied by feeling pain, pleasure, colour, taste, fear, anything.
At the same time, I find the conclusion a bit unsettling. If feeling has no clear functional role, then even a perfect Turing-scale model would still leave the essential part unexplained. It makes me wonder whether science can ever fully explain feeling, or if it will always remain the one part of nature we can only experience, not demonstrate.
For me, Professor Harnad’s message is that solving cognition is not the same as solving sentience and that the hardest question is not “How does the mind think?” but “Why does thinking feel like anything in the first place?”
By reworking the mind/body problem into a feeling/function problem, we start asking how and why there are experiences when there are cognitive processes. Function is everything a system does: its behavioural, causal, computational operations. Feeling is what it is like for the system to be performing those operations. While we can explain and replicate all of function, feeling does not follow from function; this is why we cannot reverse-engineer feeling the way we can function. Chalmers' zombie is an example of this: the zombie is functionally identical to a human but has no conscious experience. So the question that naturally arises is: what is the difference between the zombie and a person? In this sense, solving cognition doesn't solve feeling.
In this paper, Dr. Harnad argues that neuroscience is fundamentally unable to explain how or why physical processes feel like anything at all. In other words, the hard problem is “real” but cannot be solved through functional explanations; neuroscience can predict and correlate, but it can’t explain. I’m curious, though, about what would actually “count” as a successful explanation of feeling. I agree there is no current “how/why” explanation, but I’m left wondering what the criteria for meeting this standard would look like. Is the problem framed in such a way that no answer could ever really satisfy it?
I'm also wondering this. Since the suggestion that feeling might be a byproduct of other functions is not considered a good explanation, I am curious what would be considered a proper one. I also think that the existence of "zombies" could serve as a helpful pointer toward a solution to the hard problem in the distant future. For example, I believe ChatGPT is a "zombie," since it passes T2 and is most likely not feeling anything as it performs its functions. I know the paper says that what matters is why we are not zombies and do have feelings. Could discovering what these zombies cannot do, and we can, help explain the hard problem? The causal answer may come in part from these zombies. So I am wondering whether this is plausible, or whether it would just be a case of discovering yet another correlate of feeling?