The Computational Theory Of Mind: Alan Turing & The Cartesian Challenge

By Professor Steven Horst (Wesleyan University)

December 31, 2014. Picture: David Bailey/Flickr.


This article is part of the Critique’s exclusive series on the Alan Turing biopic The Imitation Game.


In 1637, René Descartes set out an important challenge to future materialists. Although Descartes is best remembered as an advocate of mind-body dualism, he was also one of the first to explore the idea that the bodies of living organisms (including human beings) can be understood completely in mechanical terms, and even argued that a very great deal of what we would today call “psychology” can be explained mechanically. But he went on to argue that there are two features of human beings that cannot be explained mechanically: language and reasoning. [Discourse, Chapter V, AT VI 57-59] His argument for this conclusion was based upon the premise that every mechanism performs a single function. As a result, you can build a machine that does particular things that seem like instances of thinking and language, the way that the mechanical statues found in the pleasure-gardens of the nobility in his day might make lifelike sounds or move in very specific lifelike ways, startling or delighting the visitors. He thought that similar mechanisms were at work when animals perform clever tricks or a magpie repeats the sounds of particular phrases it hears a person speak. But language is more than the production of particular sounds and reason is more than a collection of rote tricks. They are infinitely flexible, and can be applied appropriately to novel contexts. To explain language and reasoning mechanically, there would need to be a kind of general-purpose mechanism; and this, argued Descartes, is impossible because mechanisms are, by their very nature, special-purpose.

While Alan Turing did not refer to the problem Descartes had posed in his two memorable papers, “On Computable Numbers” [1936] and “Computing Machinery and Intelligence” [1950], those papers can usefully be seen as a response to this Cartesian challenge. It was in the 1936 paper that Turing presented an account of a general-purpose machine, subsequently called the Turing Machine, as a way of defining the class of computable mathematical functions (that is, those that can be evaluated by a rote algorithmic procedure): the functions that are computable are equivalent to the class of functions that can be evaluated by such a machine. Turing’s paper definitively showed that mechanical systems need not be the kind of single-trick ponies that Descartes had assumed them to be. And the subsequent application of the capacities of such computing machines to a wide variety of non-mathematical problems has shown that what such machines can do is very flexible and even open-ended.
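
To make the idea concrete, here is a minimal sketch of a machine-table simulator. It is written in Python (obviously an anachronism), and the function name, the tape encoding, and the little bit-flipping table are my own inventions rather than anything from Turing’s paper; the point is only that one and the same simulator will execute whatever table it is handed, which is the sense in which such a machine is not confined to a single trick.

```python
# A minimal sketch of a one-tape Turing machine simulator. The function name,
# the transition-table format, and the bit-flipping example are invented for
# this illustration; they are not Turing's own notation.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Run the machine described by `transitions` until it enters the 'halt' state.

    `transitions` maps (state, symbol) -> (new_state, symbol_to_write, move),
    where move is -1 (left), 0 (stay), or +1 (right).
    """
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# One table among indefinitely many that the same simulator can run:
# a machine that flips every bit of a binary string and then halts.
flip_bits = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt",  "_",  0),
}

print(run_turing_machine(flip_bits, "10110"))   # prints 01001
```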

Of course, the very task of the 1936 paper – to characterize the class of computable functions – involves a recognition that not all mathematical functions are computable. A Turing Machine is not a magic genie, and even a Turing Machine cannot compute uncomputable functions. Turing had shown Descartes’ assumption that every mechanism must have a single function to be mistaken, and with it the basis for Descartes’ argument that no machine can replicate human competence with language or reasoning. Indeed, he had shown more than this: if language and reasoning can be reduced to computable functions, a Turing Machine can, in principle, be programmed to compute those functions and replicate human competence with them. The question of whether they can be reduced to computable functions – or indeed to any kind of function – was left open.
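
The best-known example of an uncomputable function, the halting problem, descends from the diagonal argument of the 1936 paper itself. The sketch below (again in Python, with invented names) is the reductio written out as a program rather than a working decision procedure: it shows why no program could decide, for every program and input, whether that program halts.

```python
# A sketch, in code, of the now-standard diagonal argument that the halting
# function is uncomputable. The names `halts` and `trouble` are invented for
# this illustration; this is an argument written out as a program, in the
# spirit of Turing's 1936 paper, not a working decision procedure.

def halts(program, argument):
    """Assumed, for the sake of the reductio, to decide correctly whether
    program(argument) eventually halts."""
    raise NotImplementedError("no such general procedure can exist")

def trouble(program):
    """Do the opposite of whatever `halts` predicts about running `program` on itself."""
    if halts(program, program):
        while True:       # halts() said it halts, so loop forever instead
            pass
    else:
        return            # halts() said it loops, so halt immediately

# Asking whether trouble(trouble) halts yields a contradiction either way,
# so no correct, fully general `halts` procedure can be written.
```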

“On Computable Numbers” is remembered principally by mathematicians, computer scientists, and historians of science. Most of us know Turing more for his later paper, “Computing Machinery and Intelligence,” in which he introduces us to an “Imitation Game”: a computer attempts to fool humans communicating with it through a teletype into thinking it is another human being. This has subsequently been called the “Turing Test”. Turing goes on to make two further claims. The first is a hypothesis: that a computer could be programmed in such a fashion that its linguistic performance, and the reasoning apparently demonstrated in its responses, would be indistinguishable from that of human speakers. The second claim – though Turing does not put it exactly in these terms – is that, if a machine can pass such a test, it should be counted as thinking.

Both of these claims are controversial. There is an annual competition, the Loebner Prize, in which programmers submit programs that attempt to pass the Turing Test. There are versions of some of the winning programs posted on the competition’s website, and readers can try them out and form their own conclusions about how far we have progressed towards confirming Turing’s hypothesis. When I teach a class on computers and the mind, I have my students interact with a few of these in class, and usually we can trip up the programs within the first minute or two. But failures of such programs to pass the test are inconclusive: Turing’s claim was not about the performance of any one program, or even of all of the programs that have been developed at a particular date, but about what can be done in principle. Moreover, most of the “chatbot” programs actually do rely on a few good tricks to steer conversation in a way that engages the tester and avoids the things that are likely to produce gaffes. Joseph Weizenbaum’s ELIZA program [Weizenbaum 1966], which took the strategy of a Rogerian therapist whose dialog consists mainly of asking questions, accomplishes this very well, and was written with only about two hundred lines of code. Programs like ELIZA seek to replicate human linguistic performance (in specific contexts) without attempting to encode the vast knowledge base that humans possess. This is sometimes viewed as a feature that distinguishes artificial intelligence (the attempt to produce intelligent performance regardless of how it is achieved) from cognitive science (which attempts to produce computer models that mirror how humans think). There are ongoing projects of this latter sort as well, which have been fruitful in mapping the mind. But I think it is fair to say that the jury is still out on whether a computer can be programmed in such a fashion that it could pass the Turing Test.
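
For readers curious about how little machinery a program like ELIZA needs, here is a toy sketch of the keyword-and-template strategy. The patterns and canned responses are invented for this illustration; they are not Weizenbaum’s actual script, which was considerably more elaborate, though still, by design, entirely a matter of matching surface patterns.

```python
import random
import re

# A toy illustration of the keyword-and-template strategy behind ELIZA-style
# chatbots. These patterns and responses are invented for this sketch; they
# are not Weizenbaum's original script.
RULES = [
    (r"\bi need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bi am (.*)",   ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
    (r".*",            ["Please go on.", "How does that make you feel?"]),
]

def respond(utterance):
    """Return a canned response by matching surface patterns; no understanding involved."""
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())

print(respond("I need a holiday"))      # e.g. "Why do you need a holiday?"
print(respond("The weather is awful"))  # falls back to a generic prompt
```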

There have been at least two important lines of philosophical argument that have attempted to prove, on grounds of principle different from Descartes’ argument, a negative answer to Turing’s hypothesis. One, pioneered early on by J.R. Lucas [1961] and reintroduced by Roger Penrose [1989, 1994], turns upon the distinction between computable and uncomputable functions. Lucas appeals to Kurt Gödel’s [1931] first incompleteness theorem, which states that any axiomatization powerful enough to express elementary arithmetic cannot be both consistent and complete. Lucas interprets this as implying that an automated formal system like a computer could not derive all of the truths of elementary arithmetic, at least not without inconsistency. A complete and consistent derivation of arithmetic is not computable, and hence is not something that falls within the scope of what Turing has proven a Turing machine can do. But humans do understand arithmetic, and the incompleteness theorem implies that we cannot do this on the basis of computational methods; hence, Lucas concludes, there is something more to human understanding than computation. Lucas’s arguments (and likewise Penrose’s later variations) are quite controversial. (A summary of them can be found here: http://www.iep.utm.edu/lp-argue/ )
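
For reference, the theorem Lucas appeals to is usually given today in roughly the following form (a textbook paraphrase of the Gödel–Rosser result, not Gödel’s own 1931 formulation):

```latex
% A standard modern statement of the first incompleteness theorem (in its
% Gödel–Rosser form); a textbook paraphrase, not Gödel's 1931 formulation.
If $T$ is a consistent, effectively axiomatizable formal theory that
interprets elementary arithmetic, then there is a sentence $G_T$ in the
language of $T$ such that
\[
  T \nvdash G_T \qquad\text{and}\qquad T \nvdash \neg G_T ,
\]
that is, $T$ is incomplete.
```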

Hubert Dreyfus [1972, 1986, 1992] has long argued for a second type of principled barrier to machine thinking, based upon a roadblock encountered in second-generation AI. First-generation AI showed (unsurprisingly) that computational techniques are well suited to simulating formal inferences like syllogistic arguments, which depend only upon the syntax of the propositions. But most of our reasoning is not purely syntactic, and second-generation AI turned to the problem of modeling semantic understanding through a variety of forms of “knowledge representation” such as semantic networks [Quillian 1963], frames [Minsky 1974], and scripts [Schank and Abelson 1977]. I have a higher opinion than Dreyfus of how well frame-based models simulate understanding of particular problem contexts, like playing chess or ordering dinner at a restaurant. But the problem that plagued frame-based approaches was that, no matter how well a program simulated understanding and replicated performance within such a circumscribed context, it was unable to replicate our ability to evaluate which context we are faced with (and hence which frame to apply), or when to change frames. Dreyfus argued that this could not be handled by (standard) computational techniques, by which he meant techniques relying upon formal rules and sentence-like representations. This may or may not be correct, but to the best of my knowledge, this “framing problem” is still a serious problem for AI today. But the methods of first- and second-generation AI do not exhaust what computers can do. Indeed, Dreyfus has himself often shown some sympathy with the idea that parallel distributed computing can produce properties more like those of the human nervous system, and might be able to yield ways of deciding upon which frames are appropriate. But such parallel distributed systems can be run virtually on a Turing Machine, and in fact are almost always run on some type of serial machine that is formally equivalent to a Turing Machine. So Dreyfus’s argument really seems to be concerned with the limits, not of Turing computation in general, but of some particular set of AI techniques that can be run on a Turing computer.
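
To give a flavor of what a frame is, here is a toy sketch of a Minsky-style frame for the restaurant scenario. The slot names and defaults are invented for this illustration (real systems, such as Schank and Abelson’s scripts, were far richer). Within the frame, inference is straightforward; what the sketch conspicuously lacks is any procedure for deciding that the restaurant frame, rather than some other, is the one the current situation calls for.

```python
from dataclasses import dataclass, field

# A toy illustration of a Minsky-style frame: a bundle of slots with default
# fillers that a program updates as a situation unfolds. The slot names and
# the RESTAURANT frame below are invented for this sketch.

@dataclass
class Frame:
    name: str
    slots: dict = field(default_factory=dict)   # slot name -> filler (or default)

    def fill(self, slot, value):
        self.slots[slot] = value

restaurant = Frame("RESTAURANT", {
    "participants": ["customer", "waiter"],
    "props": ["menu", "table", "bill"],
    "event_sequence": ["enter", "order", "eat", "pay", "leave"],
    "payment": "expected",                      # a default assumption
})

# Within the frame, inference is easy: once "eat" has occurred, the frame
# licenses the expectation that "pay" comes next.
restaurant.fill("current_event", "eat")
sequence = restaurant.slots["event_sequence"]
print(sequence[sequence.index(restaurant.slots["current_event"]) + 1])   # -> pay

# What no such structure supplies is a procedure for deciding *which* frame
# the present situation calls for in the first place (a restaurant? a hold-up?
# a birthday party?). That selection problem is the one Dreyfus presses.
```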

Turing’s unargued claim that a machine that can pass the Turing Test should thereby be deemed to be thinking has also received philosophical scrutiny. John Searle’s “Chinese Room” thought experiment – originally presented in [Searle 1980] and subsequently republished and re-presented in many other publications – presents an argument that even if a machine can pass the Turing Test, that is not enough to count as thinking. Searle bids us imagine a person who is essentially playing the role of a computer CPU in a Turing Test situation. The person speaks only English, and the test is being run in Chinese. Sentences in Chinese ideograms are passed in to him, and he consults a rulebook (corresponding to the program of a machine that can pass the test) to determine what symbols to write down and pass back out to the testers. Because the rulebook is, by supposition, a program that allows a computer to pass the Turing Test (only written in a form that a human can use), the testers are convinced that they are dealing with someone with competence in Chinese. But neither the person in the room, nor the rulebook, nor the entire system of person-rulebook-room, understands a word of what is being exchanged. Thinking involves more than just being able to produce appropriate behaviors; it involves understanding, and that is lacking in the Chinese Room experiment. And since the person in the Chinese Room is doing exactly what a computing machine is doing in executing the program, the machine would not be proved to be thinking either. Of course, the man is also a thinking being who understands things (though not the meanings of the Chinese symbols he is dealing with) – something we do not know about the computer. But his understanding is not based in what he is doing when he manipulates symbols according to rules. So Searle’s argument does not preclude the possibility that something might be a computer and also a thinking being with understanding – indeed, he seems to think that humans are, in some sense, computers among other things. The points, however, are (1) that computation alone is not sufficient for understanding and (2) that passing the Turing Test would not be proof that understanding (hence thinking) is involved.

While I have suggested some refinements to Searle’s position (Horst 1996, 1999), I think that his basic insight is correct: computation, in the sense of symbol-manipulation based in syntactic rules, does not provide a sufficient condition for thinking, regardless of whether it might be enough to produce a simulation of human-level reasoning and language-use. Turing’s invention of a general-purpose machine successfully refutes a specific argument offered by Descartes against the possibility of mechanized thinking, because that argument was based upon the assumption that mechanisms must always be special-purpose. I regard Turing’s hypothesis that what he called “computation” – symbol-manipulation governed by syntactic rules – is capable of a simulation of linguistic competence as an open empirical question. But Searle’s argument “spots” this hypothesis – that is, grants it for the sake of argument – and makes a case (successfully, to my mind) that passing the Turing Test through rule-governed symbol manipulation cannot be used as a litmus test for thinking, because thinking requires semantic understanding, and this is precisely what is absent from Turing’s notion of computation as purely syntactic manipulation of symbols.

So does this imply that our minds are not computers? That depends upon exactly what the question means. Searle’s point is that if we mean “Is the mind just a computer?” or “Are mental processes nothing but computations in Turing’s sense?”, the answer would seem to be no. On the other hand, if it means “Are our minds computers, among other things?” or “Are (some) mental processes performed by way of computation, even if there are other aspects of them as mental states that are non-computational?”, this question has not been decisively answered.

The possibility I outlined in Symbols, Computation, and Intentionality (1996) separated the question (1) of whether mental processes (such as various forms of reasoning) are performed computationally from the question (2) of whether viewing the mind as a computer can tell us anything about why our thoughts have meanings in the first place. It seems to me that the computational theory of mind has the most to offer as an account of mental processes: given that we have meaningful mental states, by what methods do we go from one state to another in the course of reasoning? The computationalist answer is that this is a causal process rooted in non-semantic properties (though I do not really like to call them “syntactic” – perhaps they are better conceived as neural properties) that are utilized in operations that can be described by computational rules, even if the rules are not represented in the mind or consulted in the process of computation. This strikes me as an interesting empirical thesis, even if it is couched at a level that abstracts away from the biological particulars of human brains that are probably relevant to how such processes take place. The other claim, that syntactically-defined “mental representations” and reasoning rules that operate upon their syntactic properties explain why we have meaningful mental states in the first place, seems to me to be wholly implausible: you cannot get semantics purely out of syntax. This struck me as a point worth emphasizing in the early 1990s, as the view I was criticizing was still in the air. But in fact, even by that time, major proponents of the computational view of the mind (e.g., Fodor [1987]) were already exploring the need for some other type of account of our semantic understanding. I happen to think that Fodor’s specific ways of trying to explain semantics (in terms of causal regularities) are inadequate, and that we cannot separate understanding from first-person consciousness. But we seem to be on the same page in holding that computation is at most a way of explaining mental processes such as reasoning, not a way of explaining the meaningfulness of mental states.


Footnotes & References

[1] Descartes, René. 1988. The Philosophical Writings of Descartes. Translated by J. Cottingham, R. Stoothoff, D. Murdoch and A. Kenny. 3 vols. Cambridge: Cambridge University Press.

[2] Dreyfus, Hubert. 1972. What Computers Can’t Do. New York: Harper & Row.

[3] Dreyfus, Hubert. 1986. Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford: Blackwell.

[4] Dreyfus, Hubert. 1992. What Computers Still Can’t Do. Cambridge, MA: MIT Press.

[5] Fodor, Jerry. 1987. Psychosemantics. Cambridge, MA: Bradford Books.

[6] Gödel, Kurt. 1931. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. I. Monatshefte für Mathematik und Physik 38:173-198.

[7] Horst, Steven. 1996. Symbols, Computation, and Intentionality: A Critique of the Computational Theory of Mind. Berkeley and Los Angeles: University of California Press.

[8] Horst, Steven. 1999. Symbols and Computation. Minds and Machines 9 (3):347-381.

[9] Lucas, J.R. 1961. Minds, Machines, and Gödel. Philosophy 36 (137):112-127.

[10] Minsky, Marvin. 1974. A Framework for Representing Knowledge. http://web.media.mit.edu/%7Eminsky/papers/Frames/frames.html.

[11] Penrose, Roger. 1989. The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford, New York, Melbourne: Oxford University Press.

[12] Penrose, Roger. 1994. Shadows of the Mind. New York: Oxford University Press.

[13] Quillian, Ross. 1963. A Notation for Representing Conceptual Information: An Application to Semantics and Mechanical English Paraphrasing. Santa Monica, CA: System Development Corporation.

[14] Schank, Roger, and Robert Abelson. 1977. Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Systems. Hillsdale, NJ: Erlbaum.

[15] Searle, John. 1980. Minds, Brains, and Programs. Behavioral and Brain Sciences 3 (3):417-457.

[16] Turing, Alan. 1936. On Computable Numbers, With an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society (Series 2) 42:230-265.

[17] Turing, Alan. 1950. Computing Machinery and Intelligence. Mind 59 (236):433-460.

[18] Weizenbaum, Joseph. 1966. ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine. Communications of the ACM 9 (1):36-45.

Steven Horst
I am currently Chair of Philosophy at Wesleyan University in Middletown, CT, where I have taught since 1990. I have also been a visiting scholar at the Center for Adaptive Systems at Boston University (1993), Princeton University (1997), and Stanford's Center for the Study of Language and Information (1998). My research began in philosophy of mind and cognitive science, and has branched out into metaphysics and the interface between philosophy of mind and philosophy of science, particularly the implications for philosophy of mind of the assumptions we make about the nature of laws and explanation. I am currently finishing off projects critiquing reductive naturalism and examining the nature of laws in psychology and the natural sciences. The new projects involve articulating a view called "Cognitive Pluralism" and working on an account of concepts that does justice to work on concepts in several scientific disciplines: notably, developmental psychology, cognitive ethology and neural modeling.