1a. What is Computation?
What is (and is not) computation? (Rule-based symbol manipulation.) What is the power and scope of computation? What does it mean to say (or deny) that "cognition is computation"?
Readings:
What is a Turing Machine?
Computation is Symbol Manipulation
What is Computation? A Foundational Definition
The Core Definition
Computation is the manipulation of formal symbol tokens according to rules (called algorithms) that operate solely on the symbols' shapes or forms, not their meanings. This definition, crystallized in the work of mathematicians like Alonzo Church (1936), Alan Turing (1936), and Emil Post (1936), captures what mathematicians do when they "compute" - whether calculating the roots of a quadratic equation or proving a theorem.
Symbols and Algorithms
A symbol in this context is any distinguishable object whose specific physical form is arbitrary. The symbols "2", "II", "zwei", or even a red block can all represent the same mathematical concept, but for computational purposes, only their distinguishable shapes matter. What makes computation remarkable is that these symbols can have any form whatsoever - they could be marks on paper, electrical states in a circuit, or beads on an abacus.
An algorithm is a finite set of explicit rules that specify exactly how to manipulate these symbols. Consider the familiar algorithm for addition: given symbols arranged as "2 + 3 = ?", the rules tell us to replace "?" with "5". Crucially, this manipulation depends only on recognizing the shapes "2", "+", "3", "=" and "?" - not on understanding what "two" or "addition" means.
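To make this concrete, here is a minimal sketch in Python (the rule table and function name are illustrative inventions, not part of any standard library): it fills in the "?" purely by looking up the shapes it sees, and nothing in it represents what "two" or "addition" means.

```python
# Illustrative sketch: addition as shape-based symbol rewriting.
# The rule table pairs input shapes with an output shape; nothing here
# represents what the numerals mean.
RULES = {
    ("2", "+", "3"): "5",
    ("2", "+", "7"): "9",
}

def rewrite(expression: str) -> str:
    """Replace the '?' in 'a + b = ?' using only the symbols' shapes."""
    left, _ = expression.split("=")
    a, plus, b = left.split()
    return f"{a} {plus} {b} = {RULES[(a, plus, b)]}"

print(rewrite("2 + 3 = ?"))   # prints "2 + 3 = 5"
```

A larger rule table, or rules generated by the familiar carrying procedure, would change nothing essential: the process is still shape recognition and rule application.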
The Three Levels of Description
Understanding computation requires distinguishing three distinct levels:
1. The Formal Level
At this level, computation consists purely of symbol manipulations according to rules. When a mathematician applies the quadratic formula, they manipulate symbols according to the algorithm x = [-b ± √(b² - 4ac)] / (2a) without necessarily thinking about what these symbols mean. The process operates entirely on the basis of symbol shapes and rule-following.
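As a small sketch (assuming real coefficients and a non-negative discriminant so the square root is defined), the same mechanical rule-following can be spelled out in Python:

```python
import math

# Mechanical application of the quadratic formula x = [-b ± √(b² - 4ac)] / (2a),
# assuming a non-negative discriminant. The steps mirror what a person does
# with pencil and paper: substitute, evaluate, write the result down.
def quadratic_roots(a, b, c):
    discriminant = math.sqrt(b * b - 4 * a * c)
    return (-b + discriminant) / (2 * a), (-b - discriminant) / (2 * a)

print(quadratic_roots(1, -3, 2))   # prints (2.0, 1.0)
```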
2. The Physical Level
Computation must be physically implemented - symbols must be actual physical objects, and something must physically execute the symbol manipulations. This could be a human with pencil and paper, mechanical gears and wheels, or electronic circuits. The specific physical details are irrelevant to the computation itself, which is why we say computation is "implementation-independent."
3. The Interpretive Level
Humans can interpret the results of computations as meaningful. When we see the output "5" from "2 + 3", we interpret this as meaning that combining two objects with three objects yields five objects. But this interpretation exists in our minds, not in the computation itself.
Implementation Independence
One of computation's most important properties is that the same algorithm can be executed by radically different physical systems while remaining the same computation. Whether you solve "2 + 3" with pencil and paper, an electronic calculator, or mechanical gears, you're performing the same computation because you're following the same symbol-manipulation rules. The same point is reflected in the proof that Church's lambda calculus, Turing's machines, and Post's production systems are mathematically equivalent: all three capture the same formal process despite their different notation systems.
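A small illustration, with the caveat that the tally-addition rule and both "substrates" below are toy inventions for this sketch: the same formal rule (delete the "+" token, keep every tally) is carried out once over a character string and once over a list of tokens, and it is the same computation either way.

```python
# Illustrative sketch of implementation independence: one formal rule
# (remove the '+' token, keep every tally mark) carried out over two
# different representations.

def add_tallies_string(expression: str) -> str:
    """String substrate: tallies are characters in a str."""
    return expression.replace("+", "").replace(" ", "")

def add_tallies_list(tokens: list) -> list:
    """List substrate: tallies are items in a list."""
    return [t for t in tokens if t != "+"]

print(add_tallies_string("|| + |||"))                     # prints "|||||"
print(add_tallies_list(["|", "|", "+", "|", "|", "|"]))   # prints ['|', '|', '|', '|', '|']
```

Which Python objects (or pencil marks, or gear positions) play the role of the tokens makes no difference to which computation is being performed.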
The Software/Hardware Distinction
This leads directly to the software/hardware distinction that is fundamental to understanding computation:
- Software is the algorithm - the formal rules for symbol manipulation
- Hardware is whatever physical system executes those rules
The algorithm for computing square roots remains the same whether implemented in biological tissue (a mathematician's brain) or silicon circuits. The computation is the rule-following process; the implementer is just the physical machinery that carries out those rules.
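For instance (choosing Heron's method purely as one illustrative square-root algorithm; nothing above commits to a particular one), the explicit rules are the "software," and whatever follows them is the "hardware":

```python
# One possible square-root algorithm (Heron's method), written as explicit rules.
# The rules are the "software"; whatever follows them (a person with pencil and
# paper, or a silicon circuit) is the "hardware".
def square_root(n, tolerance=1e-10):
    guess = n / 2.0 if n > 1 else 1.0          # any positive starting guess will do
    while abs(guess * guess - n) > tolerance:  # rule: stop when the square is close enough
        guess = (guess + n / guess) / 2.0      # rule: replace the guess with the average of guess and n/guess
    return guess

print(square_root(2.0))   # prints approximately 1.4142135623...
```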
What Computation Is Not
It's essential to understand what computation excludes:
Not semantic processing: Computation operates on symbol shapes, not meanings. A computer executing "cat + dog = ?" doesn't know anything about animals - it only manipulates shapes according to rules.
Not analog processes: A sundial "tells time" through continuous physical processes, but this isn't computation in the formal sense. There are no discrete symbol tokens being manipulated according to explicit rules.
Not the interpreted results: When we interpret computational output as meaningful, that interpretation is our cognitive contribution, not part of the computation itself.
Simple Examples
Consider basic arithmetic:
- Symbols: "2", "+", "7", "=", "?"
- Algorithm: "When you see two numbers with '+' between them followed by '=?', replace '?' with the sum"
- Implementation: Could be done by a human, electronic calculator, or abacus
- Interpretation: Humans understand this as representing numerical addition
The computation consists entirely of the rule-based manipulation of the symbol shapes. Everything else - what the symbols "mean" to us, how the physical device works, why we find the result useful - exists outside the computation proper.
Historical Foundations
The mathematical foundations were established independently by Church (1936) with lambda calculus, Turing (1936) with his abstract machines, and Post (1936) with production systems. Despite their different approaches, these systems were proven equivalent in their computational power, establishing that there is indeed a well-defined, universal notion of what formal computation is.
Turing's particular contribution was showing that his simple abstract machine - capable only of reading symbols, writing symbols, moving left or right along a tape, and changing internal states according to a lookup table - could perform any computation that any mathematician could perform. This is the foundation of what we now call the Church-Turing thesis regarding the nature of mathematical computation.
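A minimal sketch of such a machine in Python may help; the transition table below is a toy example (it adds 1 to a binary numeral), not anything from Turing's paper, but the machinery is the kind he described: read a symbol, consult a lookup table, write a symbol, move left or right, change state.

```python
# Minimal sketch of a Turing machine. The transition table is an illustrative toy
# that adds 1 to a binary numeral written on the tape.
BLANK = "_"

# (current state, symbol read) -> (symbol to write, head move, next state)
TABLE = {
    ("right", "0"):   ("0", +1, "right"),    # scan right to the end of the numeral
    ("right", "1"):   ("1", +1, "right"),
    ("right", BLANK): (BLANK, -1, "carry"),  # then turn around and start carrying
    ("carry", "1"):   ("0", -1, "carry"),    # 1 plus a carry: write 0, keep carrying
    ("carry", "0"):   ("1", 0, "halt"),      # 0 plus a carry: write 1, done
    ("carry", BLANK): ("1", 0, "halt"),      # ran off the left edge: write the final 1
}

def run(tape, state="right", head=0):
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else BLANK
        write, move, state = TABLE[(state, symbol)]
        if head < 0:                     # extend the tape to the left if needed
            tape.insert(0, write)
            head = 0
        else:
            if head >= len(tape):        # extend the tape to the right if needed
                tape.append(BLANK)
            tape[head] = write
        head += move
    return "".join(tape).strip(BLANK)

print(run(list("1011")))   # prints "1100"
```

At every step the machine consults only its current state and the shape under its read/write head; nowhere does it represent the fact that the tape holds a number.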
Why This Definition Matters
This precise definition matters because it establishes clear boundaries. Computation is a specific type of formal process - rule-based manipulation of arbitrarily-shaped symbols. Recognizing what computation is helps us understand both its remarkable power (any mathematical procedure can be formalized this way) and its inherent limitations (it operates only on syntax, never directly on meaning or semantics).
When we see computation applied in various fields - from weather modeling to game-playing programs - we're seeing this same fundamental process of rule-based symbol manipulation, just applied to different domains and implemented in different physical systems.
References
Church, A. (1936). An unsolvable problem of elementary number theory. American Journal of Mathematics, 58(2), 345-363.
Post, E. L. (1936). Finite combinatory processes—formulation 1. Journal of Symbolic Logic, 1(3), 103-105.
Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42, 230-265.
Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
Use the prompt below to query other LLMs (ChatGPT, Claude, Gemini...)
__________________________________________________
“What is computation?” Prompt for LLMs
Please create an introductory explanation of computation for undergraduate cognitive science students who need a clear, foundational understanding. Provide a comprehensive but accessible account that addresses the following requirements:
Primary Question
What is computation, in its most fundamental sense, as originally formalized by mathematicians and logicians in the 1930s?
Essential Guidelines
- Focus on the core definition: Emphasize computation as it was originally defined by Church, Turing, and other mathematicians—as a formal process for manipulating symbols according to rules.
- Avoid technical diversions: Do not discuss the halting problem, Turing completeness, computational complexity, uncomputable numbers, or other computer science technicalities that are not essential to understanding what computation fundamentally is.
- Distinguish levels clearly: Separate (a) what computation is as a formal process, (b) how it gets physically implemented, and (c) how humans interpret computational results as meaningful.
- Emphasize arbitrariness: Explain why the physical shapes of symbols are arbitrary and irrelevant to the computational process itself.
- Implementation independence: Clarify why the same computation can be performed by different physical systems (humans, machines, etc.) without changing what the computation is.
- Avoid cognitive speculation: Do not speculate about whether human cognition is computational, whether brains compute, or related questions about minds and consciousness.
- Use concrete examples: Provide simple, clear examples (like basic arithmetic) that illustrate symbol manipulation according to rules.
Required Structure
- Begin with the most basic, foundational definition
- Explain what symbols and algorithms are in this context
- Clarify the role of interpretation versus the computation itself
- Address the software/hardware distinction
- Provide appropriate scholarly citations to foundational sources
- Conclude with why this definition matters for understanding computation as distinct from other processes
Citation Requirements
Include citations to primary sources (Church, Turing, Post) and authoritative secondary sources, but avoid citing popular science articles, technical computer science papers about complexity theory, or speculative pieces about artificial intelligence and consciousness.
Please write as if explaining to students who need conceptual clarity rather than technical expertise, using language that is precise but not unnecessarily technical.
__________________________________________________
Instructions for commenting: Quote the passage on which you are commenting (use italics, indent). Comments can also respond to the comments of others. Make sure you first draft your comment in a separate text editor, because if you write it directly in the Blogger window you may lose it and have to write it all over again.