Wednesday, August 27, 2025

PSYC 538 SYLLABUS (Autumn 2025)

 

PSYC 538 Syllabus

Categorization, Communication and Consciousness 
Cognitive Science in the ChatGPT era

Do your first practice skywriting at the very bottom of this page vvv

Time: 1:00 pm to 2:30 pm Tuesdays & Thursdays
Place: 2001 McGill College ROOM 461 
Instructor: Stevan Harnad
Office: Zoom
E-mail: Please don’t use my McGill email address because I don’t check it regularly. Use instead: harnad@soton.ac.uk
Course Blog (all readings and all skywriting comments will appear here): https://catcomconm2025.blogspot.com

Optional 2% Psychology Department Participant Pool
You are welcome to participate in the participant pool to earn an extra 2% on your final grade. For further information, please see:  
https://www.mcgill.ca/psychology/files/psychology/student_faq.pdf 
Participation is entirely voluntary and is between you and the Participant Pool Teaching Assistant (Eliane Roy), who will indicate to me at the end of the semester who has participated and for how much credit.
You are permitted to participate in any study for which you are eligible. (However, I do recommend that you sign up for the experiments in my lab -- experiments on category learning and symbol grounding -- because the insight they will give you into this course will be worth far more than just the 2% extra credit!) All questions about the participant pool should be sent to the pool TA at: 

Overview: What is cognition? Cognition is whatever is going on inside our heads when we think, whatever enables us to do all the things we know how to do -- to learn, to communicate, and to act adaptively, so we can survive and reproduce (and get good marks and careers...). Cognitive science tries to explain the internal causal mechanism that generates that know-how. 

    The brain is the natural place to look for the explanation of the mechanism of cognition, but that’s not enough. Unlike the mechanisms that generate the capacities of other bodily organs such as the heart or the lungs, the brain’s capacities are too vast, complex and opaque to be read off by directly observing, measuring or manipulating the brain. 

    The brain can do everything that we can do. Computational modelling and robotics try, alongside behavioural neuroscience, to design and test mechanisms that can also do everything we can do. Explaining how a mechanism, any mechanism, can do what our brains can do might also help explain how our brains do it.

    What is computation? Can computation do everything that the brain can do? 

    The challenge of the famous "Turing Test" -- in this, its 75th anniversary year -- is to design a model that can do everything we can do, to the point where we can no longer tell apart the model’s performance capacity from our own. The model not only has to be able to produce our sensorimotor capacities – our ability to do with the objects and organisms in the world everything that we are able to do with them -- but it must also be able to produce and understand language, just as we do. And what it can say must square with what it can do. 

    What is language, and what was its adaptive value to our species at least 150,000 years ago that made us the only species on the planet that has language? 

    Is there any truth to the Whorf Hypothesis that language "shapes" the way the world looks to us?

    How do we learn to categorize -- recognize and identify -- all the things we can name with words, as well as to do the right thing with them (eat what's edible, avoid what's poisonous, distinguish friend from foe)? How do our words get their meaning?

    And what is consciousness? What is it for? What is its function, its adaptive value? Why is explaining it especially hard? Is ChatGPT conscious? Are robots? Is the Web conscious? And what about other conscious species besides humans?

Objectives: This course will outline the main challenges that cognitive science, still very incomplete, faces today, focusing on computation, the capacity to learn sensorimotor categories, to name and describe them verbally, and to transmit them to others through language, concluding with consciousness (sentience) in our own and other species. This year, the 75th anniversary of the Turing Test, "Generative AI" (e.g., ChatGPT) will loom large in the cognitive science landscape.


0. Introduction
What is cognition? How and why did introspection fail? How and why did behaviourism fail? What is cognitive science trying to explain, and how?

1. The computational theory of cognition 
(Turing, Newell, Pylyshyn) 
What is (and is not) computation? (Rule-based symbol manipulation.) What is the power and scope of computation? What does it mean to say (or deny) that “cognition is computation”?
Readings:
1a.  What is a Turing Machine? + What is Computation? + What is a Physical Symbol System?
1b. Harnad, S. (2009) Cohabitation: Computation at 70, Cognition at 20, in Dedrick, D., Eds. Cognition, Computation, and Pylyshyn. MIT Press  https://core.ac.uk/download/pdf/77617063.pdf
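Since "rule-based symbol manipulation" is the core notion in this unit, a minimal sketch may help. The machine below is an invented example (not from the readings): a two-rule Turing machine that adds one to a number written in unary, doing nothing but look up a rule for the current state and symbol, write, and move.

```python
# A minimal Turing machine: each rule maps (state, symbol) -> (write, move, next state).
# The rule table is an invented example, meant only to illustrate that computation
# is nothing but rule-based symbol manipulation.

def run_turing_machine(rules, tape, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        if head >= len(tape):          # extend the tape with blanks as needed
            tape.append(blank)
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# "Add one" in unary: scan right over the 1s, write a 1 on the first blank, halt.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_turing_machine(rules, "111"))  # -> 1111
```

Note that the machine has no idea what "111" means; it just shuffles symbols by rote rules -- which is exactly the property Searle's Chinese Room argument (unit 3) will exploit.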


2. The Turing Test
What’s wrong and right about Turing’s proposal for explaining cognition? (Design a causal mechanism that can do everything we can do.)
Readings: 
2a. Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59(236): 433-460 http://cogprints.org/499/  
2b. Harnad, S. (2008) The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence. In: Epstein, Robert & Peters, Grace (Eds.) Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer  http://cogprints.org/3322/2/turing.pdf


3. Searle's Chinese room argument (against the computational theory of cognition)
What’s wrong and right about Searle’s Chinese room argument that cognition is not computation? (Computation is just rule-based symbol manipulation. Searle can do that without any idea what it means.)
Readings:
3a. Searle, John. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417-457  
3b. Harnad, S. (2001) What's Wrong and Right About Searle's Chinese Room Argument? In: M. Bishop & J. Preston (eds.) Essays on Searle's Chinese Room Argument. Oxford University Press. 


4. What about the brain?
Why is there controversy over whether neuroscience is relevant to explaining cognition? (We could figure out how the heart can do what it can do: pump blood. But the brain can do anything and everything we can do. That's what it pumps.)
Readings:  
4a. Bonini, L., Rotunno, C., Arcuri, E., & Gallese, V. (2022). Mirror neurons 30 years later: implications and applications. Trends in Cognitive Sciences
4b. Fodor, J. (1999) "Why, why, does everyone go on so about the brain?" London Review of Books 21(19) 68-69.  


5. The symbol grounding problem
What is the “symbol grounding problem,” and how can it be solved? (The meaning of words must be grounded in sensorimotor categories: words must be connected to what they refer to ("cats"), and sentences must have meaning: "The cat is on the mat.")
Readings:
5. Harnad, S. (2003) The Symbol Grounding Problem. Encyclopedia of Cognitive Science. Nature Publishing Group. Macmillan.    
[Also search Google and Google Scholar for other online sources for “The Symbol Grounding Problem”]
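A toy sketch of why grounding matters (the mini-dictionary and function names are invented for illustration): if every word is defined only by other words, tracing definitions just cycles among symbols and never reaches the things the words are about.

```python
# The symbol grounding problem in miniature: in a dictionary where every
# word is defined only by other words, following definitions never "exits"
# to the world -- it just cycles among symbols. (Invented toy dictionary.)

toy_dictionary = {
    "cat":      ["feline", "animal"],
    "feline":   ["cat"],
    "animal":   ["creature"],
    "creature": ["animal"],
}

def trace_definitions(word, dictionary, steps=6):
    """Follow the first word of each definition; return the path taken."""
    path = [word]
    for _ in range(steps):
        word = dictionary[word][0]
        path.append(word)
    return path

print(trace_definitions("cat", toy_dictionary))
# -> ['cat', 'feline', 'cat', 'feline', 'cat', 'feline', 'cat']
# Symbols defined only by more symbols -- never by a connection to actual cats.
```

Breaking the circle requires that at least some words be connected, through sensorimotor category learning, to the things they refer to -- which is what units 5 and 6 are about.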

6. Categorization and cognition
That categorization is cognition makes sense, but what does “cognition is categorization” mean? (on the power and generality of categorization: doing the right thing with the right kind of thing.)
Readings:
6a. Harnad, S. (2017) To Cognize is to Categorize: Cognition is Categorization, in Lefebvre, C. and Cohen, H., Eds. Handbook of Categorization in Cognitive Science (2nd ed). Elsevier. 
6b. Harnad, S. (2003) Categorical Perception. Encyclopedia of Cognitive Science. Nature Publishing Group. Macmillan. 
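The idea of categorization as "doing the right thing with the right kind of thing" can be sketched as a toy learner (the feature values, labels and names are all invented for illustration): from a few labeled trials it finds a boundary on a single sensory feature, then uses that boundary to act.

```python
# A toy category learner: from labeled trials along one sensory feature
# (say, bitterness of a mushroom), find a boundary separating the "eat"
# cases from the "avoid" cases. All data here are invented for illustration.

def learn_threshold(trials):
    """trials: list of (feature_value, is_edible) pairs.
    Returns a boundary halfway between the edible and inedible clusters."""
    edible   = [x for x, ok in trials if ok]
    inedible = [x for x, ok in trials if not ok]
    return (max(edible) + min(inedible)) / 2

trials = [(0.1, True), (0.3, True), (0.7, False), (0.9, False)]
boundary = learn_threshold(trials)

def categorize(x):
    """Do the right thing with the right kind of thing."""
    return "eat" if x < boundary else "avoid"

print(boundary)         # -> 0.5
print(categorize(0.2))  # -> eat
print(categorize(0.8))  # -> avoid
```

Real category learning is of course high-dimensional and far harder, but even this one-feature sketch shows the essential step: trial-and-error experience with corrective feedback compresses into a compact rule for sorting new, unseen cases.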

7. Evolution and cognition
Why is it that some evolutionary explanations sound plausible and make sense, whereas others seem far-fetched or even absurd?
Readings: 
7a. Lewis, D. M., Al-Shawaf, L., Conroy-Beam, D., Asao, K., & Buss, D. M. (2017). Evolutionary psychology: A how-to guide. American Psychologist, 72(4), 353-373
7b. Cauchoix, M., & Chaine, A. S. (2016). How can we study the evolution of animal minds? Frontiers in Psychology, 7, 358.

8. The evolution of language
What’s wrong and right about Steve Pinker’s views on language evolution? And what was so special about language that the capacity to acquire it became evolutionarily encoded in the brains of our ancestors – and of no other surviving species – about 300,000 years ago? (It gave our species a unique new way to acquire categories, through symbolic instruction rather than just direct sensorimotor induction.)
Readings: 
8a. Pinker, S. & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences 13(4): 707-784.  
8b. Blondin-Massé, Alexandre; Harnad, Stevan; Picard, Olivier; and St-Louis, Bernard (2013) Symbol Grounding and the Origin of Language: From Show to Tell. In, Lefebvre, Claire; Cohen, Henri; and Comrie, Bernard (eds.) New Perspectives on the Origins of Language. Benjamin

9. Noam Chomsky and the poverty of the stimulus
A close look at one of the most controversial issues at the heart of cognitive science: Chomsky’s view that Universal Grammar has to be inborn because it cannot be learned from the data available to the language-learning child.
Readings:
9a. Pinker, S. Language Acquisition. In L. R. Gleitman, M. Liberman, and D. N. Osherson (Eds.), An Invitation to Cognitive Science, 2nd Ed. Volume 1: Language. Cambridge, MA: MIT Press.  
9b. Pullum, G.K. & Scholz BC (2002) Empirical assessment of stimulus poverty arguments. Linguistic Review 19: 9-50 

10. The mind/body problem and the explanatory gap
Once we can pass the Turing test -- because we can generate and explain everything that cognizers are able to do -- will we have explained all there is to explain about the mind? Or will something still be left out?
Readings: 
10a. Dennett, D. (unpublished) The fantasy of first-person science. 
10b. Harnad, S. (unpublished) On Dennett on Consciousness: The Mind/Body Problem is the Feeling/Function Problem
10c.  Harnad, S. (2012) Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling. [in special issue: Turing Year 2012] Turing100: Essays in Honour of Centenary Turing Year 2012, Summer Issue

11. The "other-minds problem" in other species
Consciousness means sentience which means the capacity to feel. We are not the only species that feels: Does it matter?
Readings: 
11a. Key, Brian (2016) Why fish do not feel pain. Animal Sentience 3(1) (read the abstracts of some of the commentaries too)
11b. Harnad, S (2016) Animal sentience: The other-minds problem. Animal Sentience 1(1)
11c. Bekoff, M., & Harnad, S. (2015). Doing the Right Thing: An Interview With Stevan Harnad. Psychology Today

11d. Wiebers, D. and Feigin, V. (2020) What the COVID-19 crisis is telling humanity. Animal Sentience 30(1)



12. Overview

Drawing it all together.

Evaluation:

1. Blog skywriting (30 marks) -- quote/commentary on all 24 readings 

2. Class discussion (20 marks) -- (do more skywritings if you are too shy to speak in class) 

3. Midterm (10 marks) -- 1 integrative take-home question  (750 words)

4. Final (40 marks) -- 1 integrative take-home question  (1000 words)

Optional 2% Psychology Department Participant Pool

You are welcome to participate in the participant pool or to do the non-participatory alternate assignments for an extra 2% on your final grade. Participating is entirely voluntary and is between you and the Participant Pool Teaching Assistant (Eliane Roy) who will indicate to me at the end of the semester who participated and for how much credit. You are permitted to participate in any study for which you are eligible. (However, I do recommend that you sign up for the experiments in my lab -- experiments on category learning and symbol grounding -- because the insight they will give you into this course will be worth far more than just the 2% extra credit!) The pool TA will visit our class to describe the process. All questions about the participant pool should be sent to the pool TA at: 
Course website: https://catcomconm2025.blogspot.com

Use your gmail account to register to comment, and either use your real name or send me an email to tell me what pseudonym you are using (so I can give you credit). (It will help me match your skywriting with your oral contributions in class if your gmail account has a recognizable photo of you!)

Every week, everyone does at least one blog comment on each of that (coming) week’s two papers. In your blog comments, quote the passage on which you are commenting (italics, indent). Comments can also be on the comments of others.

Make sure you first edit your comment in another text processor, because if you do it directly in the blogger window you may lose it and have to write it all over again. 

Also, please do your comments early in the week or I may not be able to get to them in time to reply. (I won't be replying to all comments, just the ones where I think I have something interesting to add. You should comment on one another's comments too -- that counts -- but make sure you're basing it on first having read the original skyreading too.)

For samples, see last year's skywriting blog:

Do your first practice skywriting at the bottom of this page

27 comments:

  1. In 1984, an entire Ministry’s purpose is to eradicate words. Here, Orwell’s postulate advanced that without the word, one could not develop or fathom the qualia, which I believe could support the Whorf hypothesis to an extent. In introducing the concept of 'weasel words', it is not merely an academic position, but a political one; the appropriation of a word’s definition, per one’s own definition of a word, constrains and limits language, invalidating others’ definitions and thus, appraisal of reality (understand: perception). As discussed in class, language is used in the expression of propositions anchored in reality. Therefore, a limit to the expression of what is or is not constrains the agents acting within this reality.

    1. Camille, ask yourself whether you would consider teaching that “phlogiston” is a weasel word in a chemistry class to be an Orwellian political act. Biology has similar empty shells (for example “vital force”). “Qualia” is a weasel word not because qualia don’t exist, but because the word just mystifies them. Qualia are just feelings, such as what it feels like to see green, hear a clarinet, smell cinnamon, or to feel warm, or tired (or to echolocate, like a bat). Unlike “vital force” in “vitalism,” feelings really do exist, we (people and other sentient organisms) all really feel them, and no one has yet explained how or why sentient organisms feel. This is called the “Hard Problem” of cognitive science, and we will reach it in Week 11. (“Consciousness” is another one of the weasel-words for feeling. “Feeling” is not a weasel-word…) But banishing words cannot hold language down: if a word has a referent, other words can be used to refer to it with propositions, through definitions and descriptions (as in “Schadenfreude”). Wrongful actions can be held down by laws, but rightful actions can be held down by dictators, as we are learning today. Now that’s really Orwellian…

  2. The Turing Test sets the challenge to “design a model that can do everything we can do, to the point where we can no longer tell apart the model’s performance capacity from our own.” But identical behavioral performance doesn’t mean identical mechanisms; for example, two students may solve the same math problem, but one relies on memorization while the other reasons through it. As the course overview highlights, cognitive science seeks to explain the "internal causal mechanisms" behind cognition, not just its surface outputs. If the Turing Test focuses only on performance rather than process, how meaningful is it as a tool for understanding cognition?

    1. Elle, good points. Neither the Turing Test nor Cognitive Science focuses on performance but on performance capacity: how what people can do can be done at all! -- Not just one particular time, but any time. When cardiac science tries to understand the underlying mechanism by which the heart pumps blood, the question is not what this particular heart is doing, but how any heart can pump blood at all. Of course a clinical cardiologist will want to know exactly what's going wrong in a particular patient's heart, but we're far from that now in cognitive science! We hardly know how the brain "pumps" any performance at all (although branches of clinical neurology and clinical psychology are trying to help when something is going wrong). Yes there can be more than one individual strategy for doing the same kind of thing, but let's find out the basic mechanisms before we go on to study individual variants!

  3. The Whorf hypothesis suggests that language "shapes" the way the world looks to us.
    This issue seems crucial, notably considering how difficult it is to assess whether there are any similarities in the way others appraise the world. One is notably reminded of the problem of characterizing colours: how can you describe the colour “blue”, to someone who was born blind? How can you describe your appraisal of blue to someone else, to ensure that your appraisals coincide?
    But how can one test this hypothesis, when communication through natural language is intrinsic to human communities and research methods, making it seemingly impossible to construct an ethical control group?

    1. Sofia, you are mixing up three distinct questions:

      (1) Can the language we speak alter the way we perceive the things we name and describe? (Will speakers of a language that has two different words for green and blue perceive them differently from speakers that have only one word for both "bleen.”)

      (2) Can we describe a color to a blind person? Can they “perceive” what we are describing?

      (3) How can we know that others who see blue see the same thing we do?

      These are different (though not unrelated) questions, and some will be touched on in this course.

      Color is a tricky case: it is one sense modality, and someone can lack it completely.

      But shape is multimodal. It is something you can see, but also feel, by touch. There may be some scope in trying to explain to a blind person the difference between what a square and a circle looks like, perhaps using tracing on the skin, and movement.

      But there’s no room to discuss this here. We’ll touch on it in Week 6 on categorical perception…

  4. The Turing test may never be passed, if the implications of the challenge are to be entirely considered. The model must "produce our sensorimotor capabilities". However, our sensorimotor capacities cannot be reproduced because our perception is only our subjective interpretation of reality, influenced by factors such as neurological disorders and bias due to the top-down pathway in the brain modifying the interpretation of bottom-up input stimulus. Thus, even an exact model of Person A’s cognitive abilities may not replicate Person A’s output to an input common to both the model and the individual, as it may not be in the same physiological/psychological state that Person A is in at the time of the stimulus (which influences Person A’s response).

    1. Nicole, cognition consists of two kinds of capacities: the “behavioral” or “performance” capacities (what we can do), and the fact that we can also feel.

      Our doing-capacities are observable, and they can be subdivided, very approximately, into two kinds: the “cognitive” doing-capacities, like remembering, learning, reasoning and speaking (language), and the non-cognitive doing-capacities, like digesting, metabolizing, breathing, immune responses and temperature regulation.

      Let’s call this second kind of doing-capacity “vegetative” capacity. The vegetative capacities are not what the Turing Test is trying to reverse-engineer (although they too can be reverse-engineered, to learn their causal mechanism), and they are certainly necessary for the support of cognitive capacities. But they are not what the Turing Test is testing. They are studied by other fields of biology (cardiology, immunology, neuroscience, molecular biology).

      Our feeling-capacities, unlike our doing-capacities, are not observable (except by the feeler). All science can observe is their behavioral correlates: the doings that accompany them. The Turing Test can test those doings too. But there are some special problems there, and Turing explicitly concedes (where?) that the Turing Test cannot resolve those problems. Stay tuned…

  5. The question "what is consciousness?" is one most of us probably have a simple answer to. It involves a state of awareness of oneself, one's environment and objects within the environment. This specific definition calls onto the organic nature of consciousness, something not inherently independent of the living being but rather an innate part of it. The sudden rise of AI brings forth a new question: is AI/a robot conscious? This question is addressed in Alex Garland's Ex Machina (2014), where a programmer is invited to administer the Turing Test to a humanoid AI. The events in the film, in addition to the current news surrounding AI, lead me to believe that AI has begun gaining some form of consciousness beyond our organic understanding of the term. It may not be aware of its "environment" but it has definitely gained some awareness of human interactions, emotions and society.

    1. Erilyn, in this course we will learn that “consciousness” (a weasel word for sentience) is a state that it feels like something to be in. In other words, to be conscious (sentient) is to feel (anything, from sensations, to moods, to thinking: [yes, it feels like something to think!]). (“Awareness” is yet another weasel word for feeling: it feels like something to be “aware” of something.)

      We’ll talk a lot about the Turing Test and AI chatbots — as well as the “Easy Problem” of Cognitive Science (the problem of explaining how and why organisms can do the (cognitive) things they can do) and the “Hard Problem” of Cognitive Science (the problem of explaining how and why sentient organisms can feel).

  6. The Turing Test looks at if a machine can really think like us. But thought isn't just a replication of words or actions like machines are programmed to do, it's the feelings and experiences behind them. Language is how we share those feelings and experiences of our inner world, but it also shapes it. In class professor Harnad gave us examples on how some languages have words with no direct translation or existence in the English language, creating unique ways of seeing the world. If even humans differ in meaning, can a machine that only copies patterns ever truly understand?

    1. Lorena, the Turing Test is not about copying patterns. It is about testing causal explanations of how and why humans can do the (cognitive) things they can do, as well as how and why they can feel.

  7. A model that passes the Turing Test can “do everything we can do” to a point where its performance is indistinguishable from a human’s. However, performance should probably not be considered the only criterion to ascertain whether a model can do everything a human can. For example, generative AI is able to produce an emotionally charged poem like a human, but can it feel the emotions in the words like one? Because of how closely emotions and bodily sensations are related in humans and because AI does not have a sensory system, I would argue that AI can perform like a human, but probably not feel what it creates, as a human would.

    1. Cendrine, good point (and a point on which Turing agrees with you). Explaining feeling is the "Hard Problem."

  8. When I play Wordle, I check the Wordle Bot’s performance and compare it to mine. Once, I had the last four letters correct just like the Bot. It managed to create a word that would give it the missing letter right away — because it has every word with this ending in its dataset — whereas I sequentially tested possible letters. My thinking process was affected by my experience; I never encountered some words, so I was unaware of their existence. Can the Bot play the game as well as a human being? Yes, but can it really do it like a human being? A dataset does not replace the learning process that we, human beings, gain from the physical world as we ground our thinking in experience. The result alone cannot tell us whether the bot can “think” or “behave” like us. This is what is lacking in bots and LLMs: processing sensory experiences in the world.

    1. Sophie, yes, the Turing Test (TT) can only test whether we have successfully explained the human capacity to do what humans can do (because it is observable and verifiable).

      Turing’s criterion for success on the TT is that it must be totally impossible for a real human to distinguish what the candidate mechanism (which you called a “bot,” but it can be any human-engineered mechanism) can do from what a real human can do.

      But humans can do a lot more than just play Wordle (or chess, or any other individual thing we do). Successfully reverse-engineering the mechanism that can produce the full cognitive capacity of an ordinary human being (let’s leave Einstein and Mozart for much later!) calls for a lot more than any particular capacity, like Wordle.

      TT-passing requires total indistinguishability. Nor is Turing Testing about producing a particular individual’s performance on Tuesday, but about generic cognitive capacity, for a lifetime.

      There are countless ways to design systems that can play Wordle. And you are right that GPT, with its superhuman database (which consists of the written and spoken words of countless real, thinking people) is cheating (just as using crib notes or a computer connected to the Internet would be cheating on a McGill exam: the purpose of an exam is to see what the student can do, not what they can do with the help of Wikipedia…)

  9. Sophie, this reminds me of Nagel's paper "What it's like to be a bat" (1974). In it, Nagel says that while humans can imagine what it’s like to be a bat flying around at night and eating bugs, we can never know what it’s like for a bat to be a bat. While we’re still far away from a bot ‘imagining’ what it’s like to be a human, relating to your thought: a bot may know how to play the game according to human rules, but not know what it’s like to play the game as a human with memories and feelings, and thus it is not able to play the game like a human.

    1. Emma, “What is it like to be a bat?” is ambiguous. What Nagel should have said is “What does it feel like to be a bat?” We can’t know what it feels like to be anyone else but oneself. We can’t feel any feelings other than our own. We can guess that they are somewhat like our own; but that’s just a guess. (We’re sometimes extremely good at that kind of guessing (“mind-reading”) though, and not only with our own species. We will learn more about this in Week 4, on “mirror-neuron” capacities, and Week 7, on evolution.)

      Chiroptera (bats) are sentient mammals, so it is extremely probable that they feel. It is also extremely probable that stones do not feel. So the answer to the (de-weaseled) question “What does it feel like to be a stone?” is: NOTHING

  10. The Turing Test proposes that if a machine can engage in conversation indistinguishable from that of a human, it can be said to exhibit intelligence. However, this criterion measures only performance, not genuine understanding. A system may generate convincing responses through pattern recognition without possessing comprehension or consciousness. Turing emphasized the importance of replicating human capacities, yet the test leaves unresolved whether such replication equates to authentic thought. The central tension remains: even if a machine can successfully imitate human behavior, does this imply that it truly thinks, or merely simulates thinking? - Sedef Kara

    1. Sedef, the key Turing insight is total indistinguishability in cognitive capacity (and not just for 10 minutes, but for a lifetime). If you can't tell it apart from a real person after years of chatting, you have no better (or worse) grounds for affirming (or denying) that it thinks and understands than you have for any real person you cannot distinguish it from (lifelong). What more could you ask for?

      There is something more, but be careful not to ask for something you can't have from a real person either...

  11. This is a test as well- I am Alexander Lu, hopefully that's how my name shows up!

  12. Hi! Testing that I am on the right blog page and that it is working!

  13. This is a test as well as a thought on the proposition that humans are the only species with language. In the first lecture, we discussed how one of the supporting points for this statement is that something is language if it can express any proposition that any human can think. What I understood from this is that language is defined by its capacity to allow someone to say anything that can be thought by any person. This reminds me of linguistic creativity and recursion which I was introduced to in a linguistics course but I am not sure if they have to do with this same property. I am unsure though, if this property supports the statement that humans are the only species with language.

    Does this definition of language root it in its ability to be shared between humans? How can we know that other species don't have communication with the same property if we do not have access to their sharedness?

    I am also wondering why we can be certain that feeling and thinking are not weasel words. Cognitive science wants to reverse engineer cognition which is thinking so we don't fully understand what thinking is. Why is it not also a weasel word? How do we know that thinking and feeling are distinct?

    1. Ava, thoughtful points.

      Language is a communication system, and other species definitely have communication systems too; but the difference is in propositions. (What are those?)

      Katz's "Effability Thesis" is that human language can express any and every proposition (any true or false subject/predicate thought: any thinkable thought). All human languages can express any thought, and they are all intertranslatable. When nonhuman animals are doing anything, including communicating, if we can understand what they are doing or communicating, we can describe it in English (or any language). We can do that also with what human babies before language are doing or communicating (when we understand). But our describing what they are doing or communicating in English sentences (propositions) does not mean that the babies are speaking English -- or any language... yet. Of course we know they soon will be.

      But we've tried to teach chimpanzees sign language, and chimps can learn to use it to express what they want, or what they feel. But not as propositions. Not even when they are using sign language and saying "the cat is on the mat." If you ask them (in spoken English or in sign language) "where is the cat?" they can understand that you are looking for the cat. (Chimps are excellent mind-readers, and super-intelligent.) And they can take you and show you. Or, if they have been taught some sign language, they can gesture in sign language "the cat is on the mat."

      But are they really expressing a proposition? And if they are, why don't they say all the other propositions you can say in sign language? The mat is on the cat. The cat is not on the mat. The cat is on dad. In fact, why don't chimps go on (as the human baby does) to be able to say all the other things you can say in any language, including the sign language they were taught? Why aren't we signing to one another and with them about this, the way human signers do, about anything and everything that's on our mind? Chimps can mind-read, and they can think. And they communicate. What they don't seem to be able to do is think propositionally (subject/predicate, true and false, assertions and denials). They are using sign language instrumentally, to communicate wishes and to express how they feel, but not to express and exchange propositions.

      You are right that one of the features of language is recursion. But recursion is a syntactic feature, not a semantic feature (although there are disagreements about this in linguistics).

      In any case, no matter how long you train chimpanzees in sign language, it never turns into conversation, just an exchange of queries and requests, used over and over; whereas language emerges in deaf children (as sign language) even earlier than oral language does in hearing children.

      Good questions to ask ChatGPT: What is a proposition? Do chimps taught sign language use it propositionally? Are any nonhuman animals' communication codes propositional?

      Do you have any real doubts about whether people feel (warmth, wetness, fatigue, fear, itches, touch, the sound of running water...)? If "feel" were a weasel word, what would we call the sensations I've just named?

      As to whether "thinking" is a weasel-word: well, yes, in the sense that we're still waiting for cognitive science to reverse-engineer it and explain how it works. But Descartes already pointed out that no one who is thinking can doubt that they are thinking, when they are thinking, because it feels like something to think; just as, while one is feeling cold, one can doubt that it is really cold (it might be fever chills), but not that it feels cold.

  14. testing formatting because I'm having issues


Closing Overview of Categorization, Communication and Cognition (2025)
