4b. Fodor, J. (1999) "Why, why, does everyone go on so about the brain?"
Reading: Fodor, J. (1999) "Why, why, does everyone go on so about the brain?" London Review of Books 21(19): 68-69. Abstract: I once gave a (perfectly awful) cognitive science lecture at a major centre for brain imaging research. The main project there, as best I could tell, was to provide subjects with some or other experimental tasks to do and take pictures of their brains while they did them. The lecture was followed by the usual mildly boozy dinner, over which professional inhibitions relaxed a bit. I kept asking, as politely as I could manage, how the neuroscientists decided which experimental tasks it would be interesting to make brain maps for. I kept getting the impression that they didn’t much care. Their idea was apparently that experimental data are, ipso facto, a good thing; and that experimental data about when and where the brain lights up are, ipso facto, a better thing than most. I guess I must have been unsubtle in pressing my question because, at a pause in the conversation, one of my hosts rounded on me. ‘You think we’re wasting our time, don’t you?’ he asked. I admit, I didn’t know quite what to say. I’ve been wondering about it ever since.
ABSTRACT (Grill-Spector & Weiner, 2014): Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction.
Though I did find Fodor's story humorous, I appreciate his point that simply taking a picture of the brain lighting up never really lets us conclude anything about how thinking works. His concern was that neuroimaging collects data without necessarily being aligned with clear questions. Harnad (2019) was addressing a similar concern: while neuroimaging can identify where and when something happens, it doesn't meaningfully support our understanding of how to produce cognition. In contrast, Grill-Spector & Weiner (2014) demonstrate how careful neuroimaging research can illuminate the "real" structure of "real" cognitive architecture, like how the ventral temporal cortex organizes categories of visual objects. So maybe the problem isn’t brain imaging itself, but whether researchers use it to ask the right cognitive questions. Certainly, we can find value in mapping the brain, but this value is limited. We still need associated theories of computation, symbol grounding, and behavioural work to link brain activity to thought. In short, pictures of “lights” are interesting to look at, but they only become powerful when paired with explanations of what the brain is doing.
Lorena, correlations are just correlations. But they can be useful for causal explanation when they test causal hypotheses. We know the brain causes behavioral and cognitive capacity, but to test a hypothesis we need more substance than that.
***EVERYBODY PLEASE NOTE: I REDUCED THE MINIMUM NUMBER OF SKYWRITINGS. BUT THE READINGS ARE **ALL** RELEVANT TO AN OVERALL UNDERSTANDING OF THE COURSE. SO, EVEN IF YOU DO NOT DO A SKYWRITING ON ALL OF THEM, AT LEAST FEED EACH READING YOU DO NOT READ TO CHATGPT AND ASK IT FOR A SUMMARY, SO YOU KNOW WHAT THE READING SAID — OTHERWISE YOU WILL NOT HAVE A COMPLETE GRASP OF THE COURSE TO INTEGRATE AND INTERCONNECT FOR THE FINAL EXAM.***
I found Fodor’s piece to be very entertaining and thought-provoking. He did a great job of poking at the duality of psychology and philosophy and the seriousness of science. Mixing psychology and philosophy, which we seem to do a lot in class, has me questioning many things. It makes me wonder how much of what we call “explanation” is just clever description, and whether our search for a single “why” of the brain is even the right question. His playful skepticism reminds me that sometimes the most productive move in science is to keep asking better questions rather than rushing to the final answer.
Kaelyn, Fodor’s critique of when/where neural correlations is valid. But he doesn’t give cognitive science much positive advice if it is trying to reverse-engineer the causal mechanisms that produce cognitive capacity. What is Turing’s advice?
I also noticed Fodor’s lack of advice on reverse-engineering the causal mechanisms that produce cognition. He is critiquing the “when/where” localization obsession and makes the point that “It belongs to understanding how the engine in your auto works that the functioning of its carburettor is to aerate the petrol... But why (unless you’re thinking of having it taken out) does it matter where in the engine the carburettor is?”. Essentially, knowing where an activity occurs does not tell us how it works. However, he does not go on to give any guidance on how to proceed with more valuable questions. In contrast, Turing suggests avoiding metaphysical debates such as “can machines really think?”; instead, we should focus on building models that do the cognitive tasks. To clarify, to produce a capacity means to build a system that has that ability (not simply simulate the behaviour).
Emily, yes, the main question in cogsci, as in science and engineering, is to explain how (and why) machines can do what they can do. (The “why” is usually an evolutionary question: what adaptive advantage does the capacity confer?; Week 7).
I understand the "why", but I think I might be losing grasp of the "how". I wonder if it is a difficult question for me to grasp because of the very difficulty that comes with capturing it. Let's consider brain activity during sleep. One school of thought considers a sleeping person "unconscious", but others differ, namely my Sleep professor. K-complexes are significantly more often observed on an EEG when information that is personal (ex: name) or emotionally salient/relevant (as opposed to random information) is mentioned in the presence of someone who is undeniably asleep (brain waves cannot be faked). This is, to some, an indication of sensory gating or information filtering, to preserve the sleep state without neglecting potentially life-saving information (ex: a call for help). The need for some level of environmental processing is crucial to survival and therefore answers the question "why". But how the brain discriminates auditory stimuli based on relevance when someone is - according to one school of thought - unconscious remains unknown. Though we understand the advantages of this capacity, how it comes about remains unclear; thus, brain imaging shows the outcome of the "how" process, not the process itself (the causal link).
I like that Fodor distinguishes the question “whether mental functions are neurally localised in the brain” from the question “where they are neurally localised in the brain”. Neuroimaging techniques have clinical relevance because they show brain activation, but they are correlational, not causal. Rizzolatti (2008) assumes that mental functions are indeed localized in the brain and that physical matter (in this context, mirror neurons) is responsible for higher cognitive functions. But what does that tell us about the “why” and the “how” of cognition? When someone is remembering their 3rd-grade schoolteacher, we can see brain activation, yet we do not know how this memory is retrieved and how it is that we can remember. Cognition goes beyond the brain’s vegetative functions and localisation. Its causal mechanisms are probably linked to dynamic properties of the brain that computation alone cannot reproduce, because they are part of the physical world. Is that it? The next questions we could ask ourselves are “how can we study the causal mechanisms of the brain?” and “can symbol grounding tell us about causality in the brain?”. In this way, we can think about cognition as something that transcends the brain itself… maybe mental functions are embedded in lived experiences. To Fodor’s questions I add: “Are mental functions solely a product of the brain?”.
"Cognition goes beyond the brain’s vegetative functions and localization." Sophie, I disagree. But first, let me say I agree with Fodor, in some way: knowing the localization of certain brain areas is important, but only up to a certain point (e.g., in surgery). I agree that knowing where neural signals appear in the brain is not something science should focus on at the moment (although Fodor's critique is partially misplaced: too much taxpayer money goes to much more useless things, such as the military, and science would benefit tremendously from that money). Schoonover et al. (2021) also show that representational drift happens in the mouse olfactory cortex, and it likely happens in the human brain as well; as such, neural signals change for the same stimulus, and nailing specific neural signals to specific stimuli becomes an obsolete endeavour.
However, in your critique, you say cognition goes beyond the vegetative state of the brain; the brain is not vegetative. The brain is an energy capacitor, constantly strengthening/decaying its synapses as a function of the environment (ergo energy) and its genes (ergo the available molecular mechanisms, the 'hardware'). In that, it is dynamic, and is capable of sustaining an infinite number of neural representations/neural constellations. In order to understand whether cognition goes beyond that, we first need to understand the dynamic properties of the brain. This is why I like Fodor's critique: these properties are much more pertinent to explore, both to better our understanding of diseases and for philosophical inquiries.
Hi, Camille! I did mention that the brain has vegetative functions, but I never said it was vegetative per se. The heart has vegetative functions, such as pumping blood. The brain has similar functions too (Harnad 2019). Do you need to think to keep your balance while walking? Actually, this is what the activation in neuroimaging shows us; fMRI maps the oxygen level in the brain because neurons use nutrients found in the blood to work properly. The brain uses blood when we cognize because neurons use it — which correlates with cognitive functions! — but that is not a cognitive function itself. As you said, if the brain is an energy capacitor and synapses strengthen and decay, then it needs those “vegetative” functions (and it is a whole other story to understand which ones stay and which ones don’t). And yes, if neuronal signals change for the same stimulus, then Fodor is right to ask “whether mental functions are neurally localised in the brain”… where would these representations be? There are other causal mechanisms going on when performing cognitive functions, and I agree that understanding the brain’s dynamic properties will get us closer to explaining “how” and “why” we do it.
Sophie P, what is the difference between being a dynamical system and simulating a dynamical system computationally?
Camille, you say the brain is an “energy capacitor”. Okay. Cogsci starts where you can explain how this energy capacitor can pass T3. (Same question for Sophie P.)
Sophie and Camille, I think you both make good points, but I’m honestly still confused. Sophie, you’re right that brain scans mostly just show where stuff lights up, and Camille, I get the idea of the brain as an energy thing that’s always changing. But then the prof’s question makes me wonder: if a computer can copy those dynamics, is that really the same as actually being the brain? Fodor’s line about it being like “a camera without a hypothesis” kind of makes sense here too. Maybe the only real way to know is if something could actually pass T3, but I’m not sure.
Hey Rena (and Prof Harnad), I thought about the question! Dynamic systems can be simulated by computation; the computer (or machine) has the recipe for the dynamic system. The model is made so it reproduces the capacities of the dynamic system, but with symbol manipulation only. It is also lacking the properties that emerge from the “real” physical dynamic system, which can only appear with a “body”. To pass T3, I believe that the system needs a physical “body” with some kind of sensors and motor capacities. The nervous system, as an “extension” of the brain, can play this role. Through its dynamics, the brain can do what it does. The difference between a brain and a simulation of the brain is that the simulation (computation) needs an external interpreter; the machine cannot interpret the symbols. Thus, I think that there is something else in the brain ― coming from the brain’s dynamic properties ― that associates symbols with semantics (even though they do not look like the referent). Other dynamic systems also have their own way of working that is not solely (or at all!) computation and from which new properties emerge.
Rena, brain activity does cause cognitive capacity, but that correlation does not explain how the brain does it. Reverse-engineering is looking for a causal mechanism, not just a correlation. (I think you understand that.)
Sophie, I think your understanding is moving toward that too. But what a body is needed for is T3. If T2 (which is just verbal capacity) could be passed by computation alone, C=C [what is that?] would be right. How does Searle show C=C is wrong?
Mirror-neuron activity is no doubt part of the causal mechanism of mirror capacity, but we already knew that we had mirror-capacities (what are they?) before mirror-neurons were discovered; we still don't know how the brain does it. But the discovery of mirror-neurons did draw attention to the existence of mirror capacities.
What do "dynamic" properties mean? Computation can simulate (or "model") dynamical properties (Strong Church-Turing Thesis), but a computer-simulation of a vacuum-cleaner is not a vacuum-cleaner and cannot suck in dust. A computer that is simulating a vacuum-cleaner is a dynamical system, but it's the wrong dynamical system for sucking in dust. The algorithm (recipe) that the computer is implementing may be the right algorithm, but the computer's physical dynamics in executing the recipe are irrelevant; it's only the recipe that is relevant.
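The simulation/implementation distinction can be made concrete with a toy sketch (not from the readings; the function and numbers are invented for illustration). A computer can execute the right recipe for a dynamical process while having none of its physical effects:

```python
# Toy illustration (hypothetical example, not from the readings).
# The recipe below computes how long an object takes to fall, but
# executing it makes nothing fall: the computer's own physical dynamics
# (transistors switching) are the wrong dynamics for falling, just as a
# simulated vacuum-cleaner cannot suck in dust.

def simulate_fall(height_m: float, dt: float = 0.01) -> float:
    """Simulated time (seconds) for an object to fall from height_m."""
    g = 9.81                 # gravitational acceleration, m/s^2
    y, v, t = height_m, 0.0, 0.0
    while y > 0:
        v += g * dt          # pure symbol manipulation...
        y -= v * dt          # ...standing in for the real dynamics
        t += dt
    return t

# Agrees closely with the analytic answer sqrt(2h/g), yet nothing fell.
print(round(simulate_fall(10.0), 2))
```

The recipe is (approximately) the right one, so the simulation's answers are correct; but only a physical implementation with the right dynamics would actually fall, or vacuum.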
So how would Turing propose that we test whether the recipe for mirror capacity is right? Virtual-reality testing can help increase our confidence that the recipe is right. But VR-testing is not Turing-testing. So what is Turing-testing? What would have to be done to test whether the recipe for mirror-capacity (or for vacuuming dust) really works?
What a body is needed for is to pass T3. So if C=C is wrong, and T3-capacity is needed to pass T2, how might T3-capacity contribute to producing T2-capacity? (Weeks 5 and 6)
Instructor, your mention of mirror neurons helps clarify my view. While Fodor's points are interesting, Fodor is very reductive when it comes to what Cogsci is trying to accomplish. Although we already knew humans have mirror capacities, discovering mirror-neurons, I believe, brings us one step closer to understanding how the brain does it. We can distinguish inborn mirror relations, like empathy, from learned ones. Cognitive science faces a similar challenge: we know we are thinking, but we don’t know how the brain does it. Still, brain imaging (and discovering mirror neurons) allows us to approach cognition through bottom-up processes (using sensorimotor interactions and correlating them with areas of the brain) rather than relying solely on top-down reasoning. Finally, this moves us beyond C=C, as it doesn’t assume implementation-independence (i.e., hardware-independence) and recenters the physicality of the brain in understanding cognitive capacities.
At first, I thought the point Fodor was making was outrageous – of course imaging is useful for understanding cognition. But when I began to write this post, I had difficulty articulating why I thought so. Thus, my stance has shifted, and I agree that Fodor is right to question whether localization actually advances theories of the mind/cognition. His strongest point was that a mechanistic gap exists: knowing where something occurs doesn’t explain how it is computed; localization risks being descriptive, not explanatory. Correlation is not an explanation: just because we know the map lights up in a certain pattern doesn’t tell us *how* the brain does so. (He admits that imaging has clinical applications -- for example, he writes, “...if you’re a surgeon you may well wish to know which [brain regions] they are…to avoid cutting them out” – but this is not the point of his philosophical critique.)
So perhaps these maps are useful not in an explanatory manner, but instead in that they provide a measure of cognition? But these maps are not fixed; for example, blind individuals show a repurposing of brain regions. Can we really say they are a reflection of some internal architecture of cognitive states if they fluctuate from person to person? If localization is to actually further our understanding of mental constructs, then they should be able to provide direct evidence for the underlying mechanisms involved in their generation…
Elle, yes, "knowing where something occurs doesn’t explain how it is computed"; in fact it doesn't even mean that it is computed: What is computation?
This is true: just because something occurs in the brain doesn’t mean we know how it’s computed — or whether it’s even computation at all. We can see this by 'imaging' the Chinese Room. Suppose motion sensors tracked Searle’s movements as he manipulated symbols. Maybe he even had designated sections of the room for different topics: politics in one corner, love in another. Observers could then correlate the content of the Chinese dialogues with where Searle happened to be. This is equivalent to brain imaging: regions lighting up, correlated with certain cognitive states. But do these observers learn anything about the way Searle is working? Of course not. Likewise, knowing that brain area X is active when I think about politics doesn’t explain how, or even whether, that thought is computed. This is Fodor’s point against localization, which resonates with Searle’s claim that syntax is not semantics.
I agree, Elle: the issue isn’t whether brain imaging produces data (it clearly does), and those data can even be clinically lifesaving. Instead, it’s whether those data actually get us closer to explanatory accounts of cognition. Like you point out, there’s a mechanistic gap: a red spot on a scan doesn’t tell us how neurons and circuits, or the computations they carry out, give rise to a certain mental state. At best, localization just confirms that something is happening in a certain place, but not how or why.
What you said about the repurposing of brain regions really struck me. If the cortex in blind individuals can support auditory or tactile processing, localization seems more like a window into the brain's functional organization (under certain/different conditions), not a fixed ‘map’ of cognition or of how the brain works. That flexibility and plasticity suggest that localization is contingent, not fundamental, which I feel weakens the case for treating localization as explanatory. I think Fodor’s worry is that we might mistake these images for deep understanding. But unless those maps/images connect to mechanisms of information processing, they will probably stay at the level of description, not understanding.
Jesse and Lauren, both spot on.
Jesse, I really liked your “imaging the Chinese Room” thought experiment. It shows so clearly why “where” isn’t “how.” Even if observers tracked every corner Searle moved to, they’d still miss the actual rules he was following.
But I think Rachel (higher up) is right that localization can still narrow the space of plausible models. Take Broca’s area: that “where” data (where syntax was vs. semantics) forced theorists to abandon models that treated language as a single undifferentiated capacity. In that sense, “where” data doesn’t tell us the mechanism, but it can rule out the wrong ones. I think what Fodor is saying is that the real danger isn’t imaging itself, but imaging without theory (or a scientist with just a camera, as he puts it).
I wonder what Fodor would suggest…a solid theory and then fact checking it with brain imaging? Or wipe out the imaging part altogether and redirect that funding into building and testing computational models that actually explain the “how”?
“[T]here are different [brain regions] that [activate] when we see a thing, or form its mental image, but not when we hear a thing […].” This passage highlights that a single thing is usually stored in the brain in multiple different locations and modalities. For example, imagining a bell would engage different brain areas than imagining its chime. However, despite being stored in different locations and despite the independence between those regions (i.e., imagining the chime will not necessarily prompt the imagination of the bell), the brain can still recognize the elements as being related. But how? How can the brain relate two seemingly independent stimuli into a whole and recognize it as such? My guess would be that, regardless of where things are geographically stored in the brain, cognitive processes (e.g., recognizing and remembering) are emergent properties of the interconnectedness of brain regions. As such, knowing where each individual element is stored becomes irrelevant in this regard, as Fodor defends.
Cendrine, unfortunately "emergence" is not an explanation: it's what needs to be explained.
Fodor argues that studying the brain won’t explain the mind, since neural correlates don’t tell us how cognition works. I agree that knowing where something happens in the brain isn’t the same as knowing how it works, like finding a chip on a circuit board without knowing the program it runs. Still, I think neuroscience can play a bigger role than Fodor gives it credit for, since imaging and modeling together might help constrain theories of cognition. In that sense, localization may not explain the “how,” but it can still shape the boundaries within which explanations of thought must fit.
Hi Rachel, I definitely agree with you there: knowing the placement of specific parts of the brain doesn’t necessarily define their function or role within our neural systems, and it is important for study to begin somewhere. In brain science, that means discovering and exploring what makes up the motor of our bodies.
While the current enthusiasm for expensive neural imaging focuses heavily on where mental functions reside, expanding research to question the computational insight gained from a localization-centric approach would support the investments made in imaging technology.
We must ask whether the intense focus on generating "polychrome maps" is truly advancing our grasp of "deep issues about how the mind works". Given that brain imaging is expensive compared to "other ways of trying to find out about the mind", is the priority misplaced when capital-intensive technology risks "taking money and training resources out of other kinds of psychological research"? Additionally, if distinct mental states inherently have different neural counterparts (as materialists assume), why does the specific location of these counterparts matter, over the fundamental question of whether different tasks require different neural mechanisms?
Rachel, I agree. Localization helps set boundaries for theory, but Fodor's worry is that "when/where" data don't give the "how/why." It's not sufficient to know that nouns and verbs engage different patches, or that "lettuce" activates one region but "roast beef" doesn't, without knowing the mechanism, any more than knowing where the carburetor is tells us how the engine works (from the reading). The companion reading puts it bluntly: imaging shows correlates, not the algorithm/dynamics that generate cognition. If the right target is the mechanism (computational and/or dynamic), then imaging is useful only as far as it's theory-driven, testing process hypotheses, not simply mapping hotspots. Otherwise, we encounter Fodor's "camera without a hypothesis" problem, at the cost of starving the very behavioral/computational research we need to crack the "how."
Rachel, I agree with you that localization can help set boundaries for theory, but I also see the worry that Jacqueline raised about priorities — if colorful “polychrome maps” soak up funding, are we losing sight of deeper psychological research? And Gabe’s point makes sense too: knowing that “lettuce” lights up here and “roast beef” there doesn’t explain anything about the algorithm generating cognition. However, what stands out to me is that all three of your points revolve around the same idea: ‘imaging is valuable only when it’s theory-driven’. Maybe the controversy isn’t whether neuroscience matters at all, but whether the field can resist the temptation of data for data’s sake and actually connect brain maps to models of how thought works.
Rachel, I agree that neuroscience can set boundaries for theory, but I think its role could be even deeper. Sometimes the patterns in imaging don’t just constrain explanations, they hint at principles we hadn’t considered, like distributed processing instead of single “centers.” Fodor warns against “camera without a hypothesis,” but what if the images themselves spark new hypotheses? Maybe imaging is weakest when used to confirm theories, but strongest when it unsettles them.
Rachel, yes, localization might help in modelling, but can you think of some concrete examples? And what do you mean by modelling?
Jacqueline, what are some "other ways of trying to find out about the mind"?
Gabe, good grasp.
Rena, what do you mean by "deeper psychological research"?
Randala, how is "distributed" explanatory?
By modeling, I was thinking of computational or cognitive models that try to explain the process of a task rather than just its neural location. For example, knowing that Broca’s area is involved in syntactic processing doesn’t explain syntax, but it does keep models of language processing accountable to the biological substrate. So localization doesn’t provide the “how” directly, but it narrows the space of plausible cognitive models.
Rachel, the way I see it is: I get Fodor’s frustration that brain imaging isn’t really an explanation as it just shows “where,” not “how.” But instead of tossing it out completely, I wonder if we’re looking at it the wrong way. Maybe fMRI isn’t a theory of cognition at all, but more like a receipt of the brain’s energy costs. If BOLD signals are just proxies for metabolism, then maps aren’t telling us “this is where syntax lives,” they’re showing the price tag of how the brain is currently solving a problem.
That changes the question: not “is concept X in area Y?” but “does a model predict the right energy shifts as learning happens?” For example, when someone gets better at a skill, imaging often shows activity moving from frontal regions to more efficient posterior circuits. A good theory should explain that energy reallocation. So maybe localization isn’t the explanation itself. It’s more like a filter; if a cognitive model gets the cost structure wrong, then it’s probably incomplete.
A biological human brain would definitely meet the criteria for T4. Maybe trying to reverse-engineer cognition down from the physical brain is a viable option, but does this mean that cognitive scientists supporting ‘brain lighting up’ neuroscience believe that T4 is necessary? But the brain lighting up in a specific area when specific things are mentioned doesn’t mean that any thinking is happening - in that area or somewhere else. It’s hard to say what these colourful brain maps say about cognitive processes, or whether they say anything at all.
Emma, T4 is likely to draw more on dynamics and even chemistry than on geography...
Fodor questions why society is so interested in science that investigates where exactly things are happening in the brain. My initial reaction is that I find Fodor’s tone a bit dismissive, and I think he underplays the clinical significance of knowing where functions are localized. For example, with patients who have brain lesions or are undergoing brain surgery, knowing which brain regions are tied to which functions matters significantly. Cases like that of Phineas Gage show how damage to a specific brain region can profoundly alter personality. Someone who suffers a brain lesion is not the same person afterwards, which is a very different situation from injuries to other organs. Fodor even admits, “if you’re a surgeon you may well wish to know which ones they are, since you will wish to avoid cutting them out.” To me, that is central to why people care.
Maybe it is because I’ve taken more classes that approach neuroscience from a clinical perspective than from a philosophical one that I first felt this way. But regardless, I do see Fodor’s philosophical critique. His point is that we mistake “where” for helping answer “how.” So while I think his tone is dismissive, I get that his point is that localization alone does not get us closer to reverse-engineering cognition.
Annabelle, Fodor's weakness is that he says localizing brain correlates is not explanatory, but he doesn't say what is explanatory.
Annabelle, I had the same reaction! While Fodor raises valid concerns about the limits of neuroimaging, he overlooks how crucial it has been for studying cognitive anomalies. For example, anterior temporal lobe (ATL) resection remains a standard surgical procedure for epilepsy, made possible by localizing brain regions and anticipating the associated deficits, such as verbal memory impairment. However, as you both point out, the current methods provide only correlational insights into functional localization, which do not causally explain the “how” or “why” behind cognitive processes. Similarly to you, I tend to approach this from a clinical perspective, but I also wonder: if correlation truly falls short, why haven’t cognitive scientists turned more seriously to alternative frameworks, like T3 models of cognition, in the first place?
Fodor (1999) emphasizes that “there’s never enough money to do all the research that might be worth doing". He criticizes imaging techniques for drawing funds from other kinds of psychological research, given what they offer for explaining cognition: identifying activation “blobs” in the brain during tasks, which provide correlates of behavior and cognition, but not causal evidence about mechanisms. At best, he suggests, imaging may help home in on hypotheses about where specific mental processes are localized. Given limited resources and time, I am increasingly convinced that if our goal is to identify the causal mechanisms that reveal how thinking works, researchers who continue to prioritize imaging over building T3 systems are not just inefficient; they are being negligent by persisting with outdated methods that risk stagnating cognitive science despite the availability of more promising approaches.
Gabriel, what Fodor overlooks is that there are studies that use brain-activity correlates to test models of how the brain produces some cognitive capacities (though mostly toy ones so far).
Fodor argues that neuroimaging does not provide a causal explanation for cognitive processes. He asserts that identifying where and when brain activity occurs cannot explain how we think.
However, I think he's overly dismissive of the potential relevance of the when/where. Fodor compares the brain to a car: "But why (unless you're thinking of having it taken out) does it matter where in the engine the carburettor is?" Here, Fodor suggests that the spatial location of neural functions is only useful for clinical interventions, like surgery.
While I agree that correlational neuroimaging cannot directly answer the how, I don't think localization is completely irrelevant to cognition/thought. Grill-Spector & Weiner's work implies that the VTC's hierarchical organization serves as a physical substrate for information processing. I think this suggests that the question of where is important when building new theories and methods to study the how.
Adelka, it sounds like you have a point: could you elaborate a little, for kid-sib?
If I may, I would like to expand on Adelka's point, because I agree with her that, while neuroimaging does not directly explain the causal mechanisms of cognitive functions, it allows us to come up with testable hypotheses that do explain. I think Grill-Spector & Weiner (2014) is a great example. The authors themselves acknowledged a lack of computational understanding of how the brain generates visual categorization. However, they first needed to know that visual categorization occurs in the VTC in order to then investigate how its anatomical structure supports this cognitive process. One hypothesized causal mechanism was that the spatial hierarchy of the VTC in which visual stimuli are stored allows more efficient and flexible visual categorization.
Interestingly, computational models were used to test this hypothesis by comparing models with and without this implementational feature of the VTC, but they were merely simulations rather than physical robots for the TT. This supports Dr. Harnad's point about distinguishing a simulation obtained through computation from a physical implementation of the algorithm/computation. But it makes me wonder: would a T3-passing machine be possible without the internal structures and functions needed for T4?
Hey Anne-Sophie! I also agree with you that the potential for deepening our understanding of cognition using brain imaging should not be overlooked. However, I would like to answer your concluding question and connect it with the question at hand. A T3-passing machine is not only capable of passing the Turing test without the materials involved in making a T4 machine; it is a prerequisite for producing a T4 machine, because the latter should have the same cognitive capacities as the former and must be assembled in the same "order of function"/structure at every level of activity. This is where brain imaging and its uses for creating computer models of the brain become relevant. I believe, as do many others, that Fodor failed to acknowledge that, in order to reverse-engineer and learn to model cognition, experiments that rely on our knowledge of neural networks via brain imaging need to be conducted. He made a valid point in emphasizing that the "where" does not directly explain the "how", possibly because cognition isn't all computation and simulations are not replicas of authentic experiences.
While brain imaging has an important role in cognitive science and neuroscience, the reading makes an important point: there is limited funding for important research, and not all of it should be squandered on imaging in cognitive science. The main goal in cognitive science is to reverse-engineer what thinkers are able to do. Searle's CRA shows that thinking is not completely computational, but much of it could still be represented computationally. Because computation is implementation-independent, reverse-engineering cognition should not require replicating the exact locations of different tasks in the brain. So the research money spent on looking at what part of the brain lights up for every task could probably be better spent on designing a system like a T3 system, which achieves indistinguishable performance without needing to replicate the brain's internal mechanics.
Sierra, good points; do you know examples of such approaches (for kid-sib)?
Grill-Spector & Weiner (2014) say the ventral temporal cortex works like a nested hierarchy for categorization. But isn't that kind of going beyond what the imaging actually shows? A scan doesn't literally display a hierarchy; it just shows patterns that researchers then interpret. It's like seeing traffic lights on a map and then claiming the city must be organized like a perfect grid: that's a theory, not the raw data.
Randala, categorization is definitely hierarchical, but do Grill-Spector & Weiner explain how VTC enables us to categorize?
From what I understand, the paper addresses this issue in a neuroscience context, where the VTC enables us to categorize through a structured, hierarchical, and distributed representational map of visual categories. Visual categorization and recognition happen across "over a dozen cortical regions", such as V1, V2, V3, human V4, and the prefrontal cortex (which is also involved in decision-making). However, in a cognitive context, Grill-Spector & Weiner don't quite explain how the VTC performs categorization. In other words, while they describe where and what happens in the brain during categorization, the precise computational mechanisms by which the VTC organizes and uses information for categorization remain unclear.
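To make the "nested hierarchy" idea concrete for kid-sib, here is a minimal sketch (my own illustration, not from the paper) of how a nested category structure lets a query be answered at several levels of abstraction, from coarse (animate vs. inanimate, as in the Review's superordinate distinction) down to fine. The specific leaf labels are hypothetical examples:

```python
# Toy nested category hierarchy: coarse distinctions contain finer ones.
# The animate/inanimate split follows the Review; leaf items are made up.
hierarchy = {
    "animate": {
        "face": ["frontal face", "profile face"],
        "body": ["limb", "torso"],
    },
    "inanimate": {
        "object": ["car", "chair"],
        "place": ["building", "highway"],
    },
}

def levels(leaf, tree, path=()):
    """Return the chain of categories (coarse -> fine) containing `leaf`,
    or None if it is not in the hierarchy."""
    for key, sub in tree.items():
        if isinstance(sub, dict):
            found = levels(leaf, sub, path + (key,))
            if found:
                return found
        elif leaf in sub:
            return path + (key, leaf)
    return None

print(levels("frontal face", hierarchy))
# -> ('animate', 'face', 'frontal face')
```

The point of the sketch is only that a nested layout gives "flexible access to category information at several levels of abstraction" for free: the same lookup yields the superordinate, basic, and subordinate labels in one pass. It says nothing, of course, about how neural tissue implements this — which is exactly the gap the comment above identifies.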
Although Fodor has made a point of not conflating our understanding of where things take place in the brain with how they take place, I do want to put forth an example of how localization can, in some instances, causally explain a phenomenon. This is more in the realm of computer science and artificial neural networks, but the Hopfield network (inspired by Hebbian learning) stores representations of information in low-energy areas (think basins scattered across a flat surface), and retrieval is working your way down into these areas. The "placement" or location of the low-energy areas in which information is stored ends up playing a large role in the overlap of the information retrieved; i.e., the closer these representations are stored together, the less clear they become. Compared to the human brains being imaged that Fodor focuses on, this is still a toy example, but it offers at least one instance where localization can be an explanatory tool in cognitive modelling.
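For kid-sib, a minimal sketch of the Hopfield idea described above: patterns are stored with the Hebbian rule (the network's weights are a sum of outer products), and retrieval repeatedly thresholds each unit's input, descending toward a stored low-energy attractor. This is the standard textbook construction, not code from any source discussed here; the pattern sizes and noise level are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian storage: W is the sum of outer products of +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def recall(W, probe, steps=20):
    """Iteratively threshold the local field, sliding down the energy
    landscape until the state settles into a stored basin."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Store two random (hence nearly orthogonal, i.e. well-separated) patterns.
n = 64
a = rng.choice([-1, 1], size=n)
b = rng.choice([-1, 1], size=n)
W = train(np.stack([a, b]))

# Corrupt 5 bits of `a`; retrieval settles back into the stored pattern.
noisy_a = a.copy()
flip = rng.choice(n, size=5, replace=False)
noisy_a[flip] *= -1
print(np.array_equal(recall(W, noisy_a), a))
```

The comment's point about placement shows up when stored patterns are highly correlated rather than well separated: their basins overlap, and a probe near one can settle into a blend of both, which is the "less clear" retrieval described above.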
ReplyDelete"We fill journal upon journal with articles on the brain correlates of behavior and cognition: where and when in the brain the activity occurs when we are cognizing. But “where” and “when” stubbornly keeps refusing to reveal how those brain correlates generate our cognition and the behavior. The reports of the correlations are as fascinating and as addictive as horoscopes—but they are about equally explanatory."
I think this is a slightly provocative yet sharp passage. Neuroimaging gives us beautiful pictures of the brain, but knowing where and when activity happens doesn't yet explain how thinking works. Professor Harnad calling the findings "as explanatory as horoscopes" is provocative because it risks sounding dismissive of serious science, but it's also indicative of the frustration many feel about having gathered endless data without answering the real question of "how". I like the reminder that correlation isn't explanation (seeing a light turn on in a brain scan doesn't tell us the causal process behind it). Still, I wonder if the horoscope comparison goes too far, because unlike horoscopes, neuroimaging is grounded in real biology and has brought clinical progress. Nonetheless, the main point stands: the true explanatory breakthrough hasn't happened yet.
“But given that it matters to both sides whether, by and large, mental functions have characteristic places in the brain, why should it matter to either side where the places are?”
This is interesting because Fodor is challenging a very common belief in neuroscience. He's saying that even if we accept the idea that mental functions happen in particular parts of the brain, we still have to ask why the exact location matters. Knowing where something happens is not the same as knowing how it works or why it exists. By asking this, Fodor pushes readers to think beyond colourful brain maps and to focus on the processes and mechanisms that actually produce thought, feeling, and behaviour.
With mirror neurons, for instance, just knowing they fire when we act and when we observe does not tell us how they transform sensory input into motor output, how they link to emotions, or how they develop over time. Likewise, an AI system might have a subnetwork that predicts human emotions, but that doesn’t mean it actually experiences them or understands them the way a person does.
You're totally right: what Fodor's getting at is the gap between where and how. Pinpointing a mental function on a brain map might look impressive, but it doesn't actually explain what's happening or why. Your mirror neuron example nails it: knowing they light up doesn't tell us how they generate action, emotion, or meaning. Same with AI: just because a subnetwork predicts emotions doesn't mean it feels them. Brain maps are useful as coordinates, but they're not explanations. What really matters is digging into the mechanisms that make thought and experience possible, not just colouring in regions.
I thought Fodor's article was not only insightful but also pretty funny. My favorite line was when he asks, "What part of how your engine works have you failed to understand if you don't know where in the engine the carburettor is?" It captures his skepticism that knowing where something happens in the brain really helps us understand how it happens. It also made me think about the balance between description and explanation; colorful brain maps might tell us "here's the spot lighting up," but do they actually push our theories of mind forward, or just confirm what we already suspected in a more sophisticated way? Linking this back to our previous lectures, it feels like brain imaging is often focused on identifying the "hardware address" of mental processes without saying much about the actual mechanisms of thought, the "software". Maybe neuroscience risks mistaking location for explanation. Linking back to mirror neurons, it's one thing to say "these neurons fire both when acting and observing," but another to explain why that firing gives rise to action understanding or prediction. So one could ask: does localizing mirror neurons to a region of the brain really advance our understanding of cognition, or is it just a descriptive finding that still needs a computational story behind it?
This will be a brief skywriting, because I think that's in line with what Jerry Fodor is saying. Let us stop and ask why we are asking "why" so much. We're on a constant push forward in science to produce answers, yet it often seems that what we produce are more questions ('tis the nature of the scientific process, though a tiresome one). Can we value the person who is simply at peace, accepting the fact that they are a conscious being moving through the world? I think it is quite all right not to lose oneself in the rabbit hole that is the Hard Problem. That's what I gathered from this article, and I found it very refreshing.
Fodor accurately points out that the incessant prodding and scanning of brain activity is not inherently valuable. For example, we figure out that when we think of an apple, area x of the brain shows activity (ignoring the fact that correlates are just correlates). So what? It doesn't really mean anything by itself, and it certainly isn't helping us reverse-engineer how we think (except maybe giving us a tool for T4). Put in simpler terms, when I watch my computer run through its glass case, I have no actual clue what it's doing just by looking at its parts. However, there are still uses for neuroimaging. For example, it gave us a potential clue as to how the mental process of categorization might work (we have a hierarchy in how we visually process things, starting from inanimate vs. animate, all the way down to eyes vs. face views) by looking at how an area of the brain has within it a sub-area that lights up only for certain things. It doesn't tell us the whole story, but it gives us a start at figuring out the software process.
While I agree with Fodor that neural imaging is not entirely useful in cognitive science right now, I think he fails to appreciate how these studies are useful in the field of neuroscience. Neuroscience aims to empirically study the brain's biology. When attempting to understand the function of cells, biologists had to decipher the function of their organelles, which began with observing correlations between the activity of these organelles and the activity of the cell. Mapping out correlations provides a foundation for identifying relationships that guide researchers toward what to study next to determine causation. While this practice doesn't necessarily follow the reverse-engineering approach of cognitive science, I believe it remains scientifically relevant.
What I find fascinating in this critique is the idea that maybe brain-mapping research is chasing locations rather than explanations. Knowing that nouns and verbs "light up" in different cortical patches doesn't necessarily tell us why language works the way it does, or what meaning really is. It's like knowing where the carburettor sits without understanding combustion. Still, perhaps the attraction is symbolic: pinning down the geography of thought makes the invisible visible. It satisfies our need to see the mind in the flesh, even if it doesn't yet explain it.
Fodor's frustration with brain imaging reads less like cynicism and more like a warning about misplaced confidence. It's not a rejection of fMRI, but Fodor does argue that fMRI is meaningless without a theory; it has nothing to test. Localization treats cognition as if it were written into the folds of the cortex, ignoring what needs explaining: how a representational system does the work it does. That's the same (but opposite) issue computationalism faces: it explains the work but ignores the substrate. Both approaches stall when it comes to mechanisms. I liked how this argument quietly rewrites the Turing–Harnad problem we saw earlier in neural terms. If mirror neurons suggest that cognition is prediction through embodiment, Fodor asks how far embodiment goes before it stops being implementation and starts being explanation. When does a neural correlate become causal? If cognition depends on this loop between physiology and representation, is any theory really "implementation-independent"?
After reading Fodor's article, I find myself agreeing with his general skepticism about the importance of definitively understanding the brain through scientific means. What struck me most was his reference to Bernard Shaw's fable "The Little Black Girl in Search of God," in which the heroine encounters Pavlov drilling holes in dogs' mouths. When she asks why, he explains that it's to prove scientifically that they salivate in anticipation of food. But we already knew this. So why is it so important to prove it scientifically? Connecting this to our class and to cognitive science more broadly, I wonder: why are we so concerned with whether machines can think? If an artificial intelligence system like ChatGPT is nothing more than a thoughtless input-output mechanism–or even if it were a sentient thinking machine–what real difference would that make to us in its current capacity? It's not that I think the question is unimportant, but rather that I struggle to see how answering it would significantly change the way these systems function or how we interact with them.
Thinking back to what we discussed in prior weeks about mirror neurons and empathy, Turing's framework helps us test whether mirroring really reflects understanding or just surface-level mimicry. A system might show mirror-like behaviour, as in VR, but Turing would say that doesn't prove real comprehension; only interaction with the physical world (T3) could test that. The difference between VR and T3 testing is like the gap Antonetti and Corradini describe between embodied resonance and reenactive empathy, where one is automatic and the other requires context and reasoning. Fodor would agree that simulating neural mirroring might look like understanding but lack real meaning. So to test the recipe for mirror capacity, a T3 test would be needed to see whether the system can not only mirror but also use those capacities toward real-world understanding goals, not just copy actions.
This article was quite interesting, and while I understand where the author is coming from, I do get an anti-science impression from him. Although it is currently difficult to say whether these brain imaging studies are getting us closer to understanding cognition, we are learning something. Its importance may not be apparent to us now, but it may be useful in the future, whether for medicine, neural networks, or maybe the Hard Problem itself! We could ask the same question about many types of scientific inquiry, and we can even doubt that science is useful for revealing the nature of reality. But for now, it's the best we've got, and even if we go in the wrong direction, that's still valuable information.
Fodor highlights the redundancy of mapping the brain. Observing correlations between brain activity and sensing specific stimuli or engaging in certain activities does not reveal the causal mechanisms behind them. If we were to map this onto Dr. Harnad's Turing Test levels, mapping the brain would only serve to confirm attempts at making a T4-passing machine, the standard at which the internal causal structure of a machine is indistinguishable from that of a human. As with the T2 Turing Test, many different systems can produce outputs that pass it; both an LLM and a human can pass T2 despite having vastly different internal structures. Mapping the brain is analogous, because many systems could produce the same correlates that humans do in brain scans. The directionality of the investigation is flawed: we must build the system that creates the output to fully understand cognition, not infer the system by looking at the output.