10b. Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem
Reading: Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem.
Instructions for commenting: Quote the passage on which you are commenting (use italics, indent). Comments can also be on the comments of others. Make sure you first edit your comment in another text processor, because if you do it directly in the blogger window you may lose it and have to write it all over again.
***EVERYBODY PLEASE NOTE: I REDUCED THE MINIMUM NUMBER OF SKYWRITINGS. BUT THE READINGS ARE **ALL** RELEVANT TO AN OVERALL UNDERSTANDING OF THE COURSE. SO, EVEN IF YOU DO NOT DO A SKYWRITING ON ALL OF THEM, AT LEAST FEED EACH READING YOU DO NOT READ TO CHATGPT AND ASK IT FOR A SUMMARY, SO YOU KNOW WHAT THE READING SAID — OTHERWISE YOU WILL NOT HAVE A COMPLETE GRASP OF THE COURSE TO INTEGRATE AND INTERCONNECT FOR THE FINAL EXAM.***
Professor Harnad's critique of Dennett and Chalmers points to the flaws and circular reasoning in their arguments. Dennett advocates explaining consciousness by distancing oneself from subjective, potentially misleading experiences and supplanting them with so-called 'objective' measurements. This essentially demotes feelings, which is nonsensical in an attempt to tackle the hard problem, which seeks to explain how/why we feel. On the other hand, Harnad argues that Chalmers' philosophical zombies, albeit an interesting analogy for said hard problem, do not help close the explanatory gap between neural information processes and feelings.
So, where do we go from here? I for one believe they are all wrong in the first place for trying to tackle a problem that does not exist. But to Professor Harnad I ask: what is the nature of feelings? Are they immanent in the brain, or do they transcend it?
Hi Camille, yes, this valuable commentary on the debate between Daniel Dennett and Chalmers makes a very important distinction. Harnad reframes the central issue of consciousness, arguing that it is not about cognition, representation, or knowledge access but about feeling itself. Heterophenomenology does not explain feeling; it only predicts behaviours and reports. The zombie argument, meanwhile, is misguided because it focuses on the hypothetical possibility of physical duplicates without feelings, neglecting the deeper question of why we aren't zombies. I think that rather than there being a clear "where do we go from here," the result lies more in the fact that causal-functional explanations work for every natural phenomenon except feeling.
I find Harnad's refutation of Chalmers' philosophical zombies enlightening. Since the goal is to find a causal explanation of feelings, just as we have aimed to do with many other phenomena, why would we treat feeling differently? His example of how nonsensical it would be to suggest that "a molecule-for-molecule identical moon could fail to have gravity" really clarified that for me. In the "C Team," where Professor Harnad positions himself, the appropriate course of action, mirroring what he believes Turing should have concluded, is to focus research on performance capacity and functional modelling. This seems to be an alternative (second-best option), since he believes feeling is not causally/functionally explainable. But my question is: isn't the goal of functional modelling to reverse-engineer to the point of "replication" of a sentient system? Wouldn't that tell us a lot about sentience and feelings, and their functional explanation?
I agree that Harnad’s “moon without gravity” analogy clarifies why he rejects Chalmers-style zombies. But what’s striking is that, despite Dennett placing him in Type-B/B Team, Harnad puts himself in Team C: he accepts that feelings are real but denies we have any causal explanation for them. Functional modeling can replicate every capacity of a sentient system, but for Harnad that still won’t explain why those functions are felt rather than performed “zombily.” Reverse-engineering tells us how doing works, not why doing is accompanied by feeling.
Hello Sannah, I agree with your comment. The reverse-engineering method does not answer why sentient systems feel; it only explains how they execute their actions systematically. To replicate a sentient system perfectly, an explanation of why we feel, or an explanation of the hard problem in this instance, is required. We could tell a T3 "zombie" to experience nociception by pulling the strings in the brain required to experience nociception, effectively reverse-engineering the experience of nociception, which is correlated with pain. That said, it would not answer Harnad's question: why do we feel? Additionally, this process would be similar to Dennett's heterophenomenology, due to the surface-level study of the correlation between behaviours and feelings, which is a faulty practice according to Harnad. Anyhow, the alignment of Harnad with Team C, which does not think there is a causal explanation for feelings, indicates, in my opinion, that reverse-engineering would in fact not be enough to obtain a causal explanation of feelings.
“The mind/body problem is really the feeling/function problem.”
This line helped me understand Professor Harnad’s main point: he thinks most philosophers talk about “consciousness” in a way that mixes too many things together (thinking, understanding, reporting, knowing), when the real mystery is simply why anything feels like something at all.
I like the clarity of this framing. It reminds me that the mind/body problem is not about whether we can explain behaviour (we can) or whether we can build systems that act intelligently (we already do). The real issue is explaining why doing anything needs to be accompanied by feeling pain, pleasure, colour, taste, fear, anything.
At the same time, I find the conclusion a bit unsettling. If feeling has no clear functional role, then even a perfect Turing-scale model would still leave the essential part unexplained. It makes me wonder whether science can ever fully explain feeling, or if it will always remain the one part of nature we can only experience, not demonstrate.
For me, Professor Harnad’s message is that solving cognition is not the same as solving sentience and that the hardest question is not “How does the mind think?” but “Why does thinking feel like anything in the first place?”
Professor Harnad's framing really sharpens the issue that explaining doing is not the same as explaining feeling. Your reflection captures this gap well. Even if we model all cognitive functions, the fact that they're accompanied by subjective experience remains unresolved. You're right, this is the core mystery the paper urges us not to ignore.
By reworking the mind/body problem into a feeling/function problem, we start looking at how and why there are experiences when there are cognitive processes. Function is everything a system does: its behavioral, causal, and computational operations. Feeling is what it is like for a system to be performing those operations. While we can explain and replicate all of function, feeling does not follow from function; this is why we cannot reverse-engineer feeling the way we can function. Chalmers’ zombie is an example of this: the zombie is functionally identical to a human but has no conscious experience. So naturally the question that arises is: what is the difference between the zombie and a person? In this sense, solving cognition doesn’t solve feeling.
In this paper, Dr. Harnad argues that neuroscience is fundamentally unable to explain how or why physical processes feel like anything at all. In other words, the hard problem is “real” but cannot be solved through functional explanations; neuroscience can predict and correlate, but it can’t explain. I’m interested though as to what would actually “count” as a successful explanation of feeling. I agree there is no current “how/why” explanation, but I’m left wondering what the criteria to meet this standard would look like. Is the problem framed in a way that no answer could ever really satisfy it?
I'm also wondering this. Since the fact that feeling might be a byproduct of other functions is not considered a good explanation, then I am curious what would be considered a proper explanation. I also think that the concept of "zombies" existing could serve as a helpful indicator of a solution to the hard problem in the distant future. For example, ChatGPT is what I believe to be a "zombie", since it passes T2 and is most likely not feeling as it performs its functions. I know that the paper says that what matters is why we are not zombies and have feelings. Would discovering the intricacies of what these zombies cannot do that we can help to explain the hard problem? The causal answer may come in part from these zombies. So, I am wondering if this is plausible or if this would just be the case of another correlate to feeling discovered?
Hi Elle and Isabelle!
I actually agree with the idea that “zombies” can be useful as a contrast case, especially the way you’re using ChatGPT as an example. It shows how far purely functional systems can go without any evidence of feeling, and yet, as Dr. Harnad argues, this still doesn’t get us any closer to understanding why we feel at all. It just widens the gap. But I think what keeps me stuck is exactly what you’re pointing to. Even if we build a perfect functional zombie, how do we ever know whether it really feels nothing? If I can hide my own feelings (say I’m furious, heartbroken, anxious, but outwardly claim I’m fine), then a zombie could in principle do the same. Outward behaviour, reports, physiology, even perfect Turing-Test performance don’t settle anything. That’s why Dr. Harnad insists that functional accounts only ever give correlates, not causes. We know that we feel because we feel it directly. But we have no access to the inside of a zombie, or even another human, beyond what they do and say. So even if we discovered “what zombies can’t do,” it would still only tell us about function. It wouldn’t explain why our functional capacities are accompanied by an inner life.
So I’m aligned with you: comparing ourselves to “zombies” is helpful, but it only highlights the hard problem; it doesn’t solve it. It just raises the deeper question: what makes feeling emerge at all, and how could we ever prove that something truly lacks it?
A successful explanation of any phenomenon needs a causal functional “how/why”. But the problem with feeling is that feeling is not a function. Professor Harnad’s gravity example made this really clear for me. When we ask “How/why does gravity pull?”, the answer works because pulling is just what gravity is. But feeling is different because feeling is not doing. Every time you try to explain feeling in functional terms (like with Turing test robots), the explanation still works perfectly even without any feeling at all. That’s why the mind/body problem is really a feeling/function problem. From my understanding, because of what feeling is, it’s not really possible to even imagine what a proper causal explanation of “why we feel” could look like. And about the proposed zombie cases, I feel like studying what zombies can or can’t do wouldn’t explain why we feel either. Building zombies doesn’t get us any closer to understanding why we ourselves are not zombies.
Hi Isabelle, I found your points interesting. However, I would argue, for the sake of the ideas in this paper and in this course, that ChatGPT does not make a good candidate for a "zombie" in the relevant sense. While yes, ChatGPT does pass T2 and it does not feel, there is so, so much that we (feeling beings) can do that ChatGPT cannot -- thus there is still potential explanatory space to dismiss its lack of feeling.
I think the point of the thought experiment is considering what if one of these hypothetical zombies could do anything a human can observably do, and even be the exact same compositionally (i.e. T4-passing), but with the only feature lacking being feeling. Harnad argues that this idea seems about as sensible as considering a molecule-for-molecule recreation of the moon without gravity -- and I would certainly agree with this point. It feels pointless for me to consider the possibility of a being exactly alike me in every way, except for its lacking of feeling -- this line of thought necessarily steps outside of the physical world and seems to point toward the metaphysical or supernatural as potential explanations for feeling.
Hi Liam, I take your point, and I agree that ChatGPT isn’t a zombie in the strongest T4 sense Harnad is worried about.
But I think that’s exactly why it’s useful in the present context. When you say there’s still explanatory space to dismiss ChatGPT’s lack of feeling because of everything humans can do that it can’t, I agree. But that explanatory space is doing a different kind of work. It explains why this system doesn’t feel, not how we do.
What I want to add is that, given where we are technologically, ChatGPT is probably the closest thing we currently have to a zombie. If we were to place ChatGPT’s software into a human-like robot or doll, we’d have a system that could pass many social and behavioral tests while remaining a very plausible case of something that lacks feeling altogether. That doesn’t make it a full philosophical zombie, but it makes it a concrete illustration of how far functional capacity can go without touching feeling.
“Heterophenomenology is nothing but good old 3rd-person scientific method applied to the particular phenomena of human (and animal) consciousness… bringing the data of the first person into the fold of objective science.”
I found this passage interesting because it shows exactly how Dennett thinks he can stay strictly third-person while still “capturing” subjective experience. What stood out to me, especially after reading Harnad’s commentary, is that Dennett really does take everything as data: verbal reports, physiological reactions, and even the subject’s insistence that something ineffable is being left out. But Harnad argues that this entire picture still never touches the real issue: explaining how and why any of this functioning is felt rather than just performed.
So although heterophenomenology gives a very comprehensive method for describing the patterns in what people say and do, prof Harnad’s point is that it still only ever explains the “easy problem.” Bringing first-person claims “into the fold of science” doesn’t tell us why there is anything it’s like for those claims to occur. That gap, the feeling/doing gap, is exactly what he says Dennett’s approach can’t reach.
Gabe, I think you're right that Dennett’s strength is the scope of heterophenomenology. He really does take everything as data, even the subject’s insistence that something “ineffable” is missing. I agree that this makes his method powerful descriptively. But what your comment made me realize is that this same inclusiveness exposes the limit: no matter how rich the data set is, it still only tracks what is happening, not why it feels like anything. So I think your point actually sharpens the tension between Dennett’s completeness claim and the feeling/function gap that Prof Harnad highlights.
I agree with both of you that Dennett’s method is impressive in how much it can capture, but what I keep coming back to is something a bit different. Heterophenomenology does not only leave the feeling/doing gap untouched. It also quietly transforms every first-person claim into third-person data before the analysis even begins. Once everything is converted into behaviour, there is no room left to notice whether some parts of experience resist that translation for reasons other than mystery. So the problem may not only be that feeling cannot be explained by function. It may also be that the very method forces experience to look like function, which hides whatever cannot survive that conversion.
Harnad argues that Dennett's theory of consciousness does not address the hard problem of consciousness, the how/why of feeling: namely, what causal mechanisms can produce feeling at all, and what evolutionary adaptive role feeling might have. Dennett claims that consciousness can be studied through heterophenomenology, a third-person method that uses verbal reports, behaviour, and physiological measures to examine individuals’ beliefs about their experiences. For Dennett, feelings are nothing more than their functional correlates, such as the measurable changes that occur when someone feels angry (increased heart rate, tightened facial muscles, brain activity changes). While Harnad suggests that this is relevant to addressing the easy problem (how and why humans can do all the things they do), he emphasises that it fails to explain why any of the processes “felt like anything at all,” such as feeling the “redness” of red. Reflecting on this debate, I still wonder: what would an explanation of feeling even look like?
"We are not interested in whether your toothache was real or psychosomatic, or even if your tooth was hallucinated, nor in the conditions under which these various things may or may not happen or be predicted. We are interested in how/why they feel like anything at all.”
I find this passage particularly illuminating because it elegantly separates the existence or accuracy of an experience from the very phenomenon of feeling. Whether a toothache is real, psychosomatic, or hallucinated is completely beside the point; the hard problem isn’t about verifying content or predicting outcomes. Rather, it’s about why any experience, regardless of its source or correctness, feels like something at all. This distinction helped me truly grasp the concept of qualia and why Dennett’s examples, like change blindness, seem to miss the heart of the matter: it’s not about limits of first-person knowledge, but the mystery of what it is like to experience anything. When I was reading Dennett’s original paper, I thought of placebo effects, and how physiologically someone might not align with their subjective report of pain. Yet even there, that mismatch is irrelevant. The debate isn’t about the validity of first-person reports; it’s about why feeling arises at all, rather than just doing.
Earlier I thought that maybe Dennett's proposition sidestepped the Other Minds Problem, and Harnad's critique confirmed this. Dennett says subjects can report what they feel, so there’s no mystery, but this works only if we deny that there is any private ‘felt’ layer beyond the reports themselves. Harnad pushes back: all the behavioral and physiological correlates still leave untouched the real question of why any of these things are felt from the inside at all. Third-person methods can track every change, prediction, and report, yet none of this explains why those states aren’t occurring without experience. If we take away the ‘private’ part of consciousness just to make consciousness scientifically tractable, are we explaining consciousness, or just avoiding the hardest part?
Something I find interesting is Harnad's insistence that no matter how perfect our functional models become, they will never touch the real question, which is why any of those functions are felt at all. I see why he calls this the point where explanation collapses, but I also wonder if that collapse depends on how we choose to frame the problem. If we treat feeling as completely outside causal structure, then of course nothing can explain it. But if feeling depends on patterns we do not yet understand in complex systems, then maybe the boundary between doing and feeling is not fixed forever.
I really like the way you frame this, because it captures the tension at the heart of Harnad’s argument. His point is powerful precisely because he keeps pressing the gap between doing and feeling: no matter how exhaustive our functional story becomes, we still haven’t explained why any of those functions should be accompanied by subjective experience. But you’re right that this “collapse” hinges on how we define the problem. If we treat feeling as fundamentally outside causal structure, then the hard problem becomes unsolvable by definition. But if feeling emerges from patterns we don’t yet know how to characterize, something like higher-order organization in complex systems, then the boundary Harnad draws might not be metaphysically fixed, only conceptually fixed for now. Your point opens space for a future science that doesn’t reduce feeling to function, but neither treats it as untouchable.
10.b. Harnad argues that both Dennett and Chalmers miss the real issue: explaining how and why anything feels at all. He dismantles heterophenomenology by showing that predicting every functional correlate still never touches the “feeling/doing” gap. Honestly, he’s right that Dennett keeps shifting to behavior and reports, and Chalmers gets lost in zombie sci-fi. But Harnad’s own pessimism, saying the hard problem is insoluble, feels like giving up. Maybe instead of declaring defeat, we need theories that try to bridge feeling and function rather than bracketing one side away.
This reading suggests that the true challenge in the HP is not in explaining vague terms like "consciousness," but in addressing the "aesthesiogenic" question of HOW and WHY we feel.
The reading highlights limitations in current methodologies, such as Dennett’s heterophenomenology. While heterophenomenology is good for documenting and predicting WHAT or WHETHER someone feels by observing verbal reports and physical activity (EP), it only provides correlation, not causation. Identifying where physical processes happen in the brain does not explain how those processes give rise to feelings.
This explanatory gap is encapsulated by the philosophical zombie, an entity that seems identical to a human, but lacks the ability to feel. The Zombic Challenge is the methodological demand that we provide a causal explanation for HOW and WHY we are not zombies.
“The name of the game is not just inferring and describing feelings, but explaining them.”
Harnad (2001) argues that in “The Fantasy of First-Person Science”, Dennett loses sight of the key issue at hand, straying from causal explanations of the hard problem—namely, how and why thinking organisms feel the way they do. In this sense, explaining “what it’s like to feel” does not refer to behavioural or functional accounts of feeling, nor does it rely on empirical predictions about the conditions under which feelings may occur. Correlates of feeling and function may exist, but that is not the point according to Harnad—the point is addressing the deeper question of how and why we can even feel at all.
Harnad reframes the mind/body problem as the "feeling/function problem," arguing that the real explanatory gap isn't about prediction or correlation but about why functional mechanisms are accompanied by subjective experience at all. As he puts it, even with a complete causal account of the neural mechanisms underlying consciousness, "you will still not be able to give even a hint of a hint as to how it is that that mechanism feels at all..." This isn't a matter of methodological limitations or data insufficiency; it's a categorical problem. Harnad distinguishes between explaining that feelings correlate with certain functions (the easy problems) and explaining how/why those functions generate felt experience rather than proceeding zombily. He's essentially asking: given that causal-functional explanations work perfectly well without invoking phenomenology, why should physical processes feel like anything? This question cuts deeper than either Dennett's heterophenomenology or Chalmers' zombie thought experiments because it questions whether functional explanation can ever bridge the doing/feeling distinction.
I love Harnad's realism in this response. He stands up to both Dennett and Chalmers, ending at a form of pessimism that is more accurate than his contemporaries'. Harnad still believes we should continue down this path of pursuing neuroscience, AI, modelling... without the expectation that it will solve the explanatory gap. My question is: why? Just so we can get better at solving 'easy problems'? To me, it seems that, based on Harnad's position, the hard problem either 1. shouldn't be explored at all, or 2. must be explored through non-physical means (via spirituality, religion...?). If we know that the feeling/function relationship isn't explained even when we've run out of all degrees of freedom, then why continue? It seems clear that this isn't the right avenue, and I'm puzzled at how Harnad's pessimistic conclusions still lead to encouragement, not submission. Unless, of course, this is what his submission looks like, and he's trying to get others to stop pretending that they haven't submitted either... is philosophical defeat the context for his psychological encouragement?
I actually read Harnad’s stance a bit differently. I don’t think he’s encouraging neuroscience and AI because he believes they’ll “almost” solve the hard problem. It’s more that solving the easy problems (reverse-engineering cognition, passing T3/T4, mapping causal mechanisms) is still necessary, because that’s the only domain where science can truly operate. The fact that this won’t bridge the explanatory gap doesn’t mean we abandon the project, but that we stay honest and clear about what the project is for.
I don’t think Harnad sees this as submission or defeat, but as clearing away confusion. He is telling us that cognitive science can explain doing but not feeling. That boundary-setting is productive in itself, in my opinion. And I don’t think he’s pointing us toward non-physical solutions either; it’s more that the HP is not currently accessible in scientific terms, and forcing it into a causal framework leads to the kinds of mistakes he sees in Dennett and Chalmers. So I think this encouragement isn’t “keep going, the HP will crack,” but more like “keep going, but stop pretending the HP is part of the same puzzle.”
What I find most interesting about this idea is how radically it reframes what it means for children to learn language. The quote suggests that children are not passive recipients who piece grammar together from the chaos of speech around them; instead, they come equipped with powerful internal constraints that instantly eliminate countless “reasonable” but impossible grammatical options. This makes language acquisition feel less like trial-and-error learning and more like uncovering a pre-structured cognitive system. What fascinates me is how this distinction separates language from general intelligence: even children with profound cognitive impairments can display relatively sophisticated grammar, while neurotypical adults still struggle to consciously articulate the rules they use effortlessly. This asymmetry implies that language is not just another skill but a specialized mental organ that shapes what a child can even consider a possible sentence.
The paper addresses the hard problem (the feeling/function problem), which is the question of how and why physical and functional processes give rise to felt experience. The author reviews Dennett’s use of heterophenomenology, which treats reports and behaviours as third-person data, but ultimately critiques it for never addressing why anything feels like something rather than merely functioning. The paper concludes that while functional and computational explanations can successfully model cognition, no existing scientific framework explains how or why feelings arise from those functions, and that overall "the problem of giving a causal/functional explanation of feeling (explaining how and why we feel) has not been solved, and is insoluble."
In this paper, Harnad clarifies the consciousness debate by arguing that the challenge isn't the traditional mind/body problem, but the feeling/function problem. He argues that computational or functional approaches, like Dennett’s, focus entirely on explaining doing: basically, the mechanics and performance capacities of the brain. However, this strategy leaves out the subjective, feeling aspect of consciousness entirely.
It’s notable that feeling is ‘explanatorily superfluous’ to function. Even if science could perfectly predict every feeling based on neural activity, the mechanism that produces the behavior has its own causal explanation that doesn't need consciousness. This means functionalism never explains why the process is felt at all, only how it works. This failure forms the basis of the Zombic Challenge: explaining how and why we are not just functional duplicates (zombies) that lack subjective experience.
“I am simply denying that my (or your) current interpretation is what is given to the scientist, as a datum to be explained.”
Harnad criticizes Dennett’s heterophenomenological stance for fundamentally failing to capture the causal reality of subjective experience. By converting first-person experience into third-person data (the subject's report), heterophenomenology sacrifices the very thing it aims to explain: the causal origin of feeling, offering only a correlation between brain activity and verbal output. This is evident in Dennett's insistence on treating the subject's self-report as just another piece of data, not a privileged, causal insight. This denial underpins the method's weakness. While the heterophenomenologist can meticulously chart what a subject reports, by refusing to acknowledge the interpretation as a causally potent reality of experience, the method remains locked out of addressing the HP, leaving the fundamental how and why unanswered.
Harnad’s critique of Dennett really zeroes in on the idea that "the mind/body problem is the feeling/function problem". While Dennett’s "heterophenomenology" focuses on third-person data like verbal reports and brain activity, Harnad argues this only solves the "easy" questions of Computation (the how-to-do part) while ignoring the "f-word": feeling. This is crucial for Communication because, in the Turing Test, a robot might categorize and talk perfectly, but it could still be a "zombie" with no inner life.
I think we often mistake "intelligence" for "mind," but the author discusses that Computation is just a complex causal mechanism. He suggests that Categorization and "doing" are functionally explainable, but "being" or feeling is where the explanation stops. If a robot does everything we do but feels nothing, it still can’t be a feeling mind, even if it is built with the traits and skills necessary to possess functional advantages.
I think Harnad would strongly agree with your claim that Computation, Categorization, and Communication can all be functionally explained without ever touching feeling. One place I’d slightly sharpen your point is around the zombie case. It’s not just that a robot that does everything we do wouldn’t count as a feeling mind if it felt nothing, but it’s also that nothing in the functional story would even require feeling in the first place. That’s what makes the feeling/function gap so troubling. Once all the “doing” is explained, feeling appears almost unnecessary causally, yet we know from our own case that it undeniably exists.