The Philosophical Legacy of Alan Turing

How The Mathematician’s Ideas Shaped Modern Scholarly Thinking About Minds, Machines & More

By Darren Abramson (Dalhousie University)

December 31, 2014

This article is part of The Critique’s exclusive series on the Alan Turing biopic The Imitation Game.

The following is quoted from a fictional depiction of Alan Turing’s life:

“The churning of a human mind is unpredictable, as is the anatomy of the human heart. Alan’s work on universal machines and computational morphogenesis had convinced him that the world is both deterministic and overflowing with endless surprise. His proof of the unsolvability of the Halting Problem had established, at least to Alan’s satisfaction, that there could never be any shortcuts for predicting the figures of Nature’s stately dance.”

This selection from the short story “The Imitation Game” by Rudy Rucker[1] presents the philosophical legacy that I will be defending in this article: the idea that the world could be both surprising and deterministic. The recent film by the same name does an excellent job of portraying the significance of Turing’s contributions to wartime efforts to crack German encryption, but his philosophical contributions are also highly significant. Turing’s philosophical legacy, as described in this quote, is closely related to the mathematical result for which he gained his early fame as a mathematician.[2] What does ‘surprising determinism’ look like?

In his 1936 paper, Turing attempted to answer the following question: can the truth of statements of arithmetic be decided by a finite method? That is, is there some sort of definite procedure which, after a finite amount of time, is guaranteed to answer the question ‘is this string of symbols on a piece of paper a true statement of arithmetic?’ Such statements can get very fancy: for example, ‘is there a natural number A such that when you apply this rule about what to do with that symbol, combined with 87 other rules about what to do with other symbols, and you apply the rules no more than A times, you end up with an expression that has the following property?’

To ask about the existence of such a definite procedure, the notion of a ‘procedure’ itself has to be made more precise. In making it precise, Turing invented what we now call the ‘Turing machine’ (he called it an ‘automatic machine’, or ‘a-machine’). Having done so, he proved that there is no such definite procedure. The reason has to do with something called ‘enumerability’: the property of being able to put a group (‘set’ in mathematical talk) of things in a one-to-one correspondence with the natural numbers. In other words, even if the set is infinite, there’s a way to count through it so that you know you’ll get to each member eventually.
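Enumerability can be made concrete with a small Python sketch (my illustration, not Turing’s): since any Turing machine’s table of rules can be written down as a finite string of symbols, listing all finite strings shortest-first eventually reaches every machine’s description.

```python
from itertools import count, islice, product

def enumerate_strings(alphabet="01"):
    """Yield every finite string over the alphabet, shortest first.

    Because a Turing machine's rule table can be encoded as a finite
    string, an ordering like this one eventually reaches any given
    machine's description: the set of machines is enumerable.
    """
    yield ""
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

# The first few strings in the enumeration:
print(list(islice(enumerate_strings(), 7)))
# ['', '0', '1', '00', '01', '10', '11']
```

Nothing here depends on the alphabet being binary; any finite alphabet gives the same shortest-first ordering, and hence the same conclusion.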

The fact that the collection of all Turing machines is enumerable, combined with the fact that there’s a universal Turing machine that can compute what any other Turing machine would do, inexorably leads to the conclusion that there is no such definite procedure (if we understand definite procedures in terms of what a Turing machine can do). A lot of philosophy has been written on whether that identification of definite procedures with what a Turing machine can do is warranted (I’m guilty of some of it myself), but I won’t focus on that here. If you are more interested in it, I recommend starting with Jack Copeland’s 2000 paper.[3]
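The engine of that impossibility result is a diagonal argument. Here is a toy, runnable illustration of my own (the real proof diagonalizes over an infinite enumeration; this uses an explicit finite list to show the trick): given functions f_0, f_1, …, define a new function whose value at n is f_n(n) + 1, so it differs from every function on the list.

```python
def diagonalize(funcs):
    """Given a list standing in for an enumeration f_0, f_1, ... of
    functions from the naturals to the naturals, return a function
    guaranteed to differ from every f_n at input n."""
    return lambda n: funcs[n](n) + 1

# Three candidate functions in our toy "enumeration":
fs = [lambda n: 0, lambda n: n, lambda n: n * n]
d = diagonalize(fs)

print([d(i) != fs[i](i) for i in range(3)])
# [True, True, True]
```

No matter how the list is extended, the diagonal function escapes it at some input, which is why no enumeration of machines can cover every question about them.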

Mathematicians are, quite famously, concerned with something called ‘priority’. This term, when used by mathematicians and scientists, refers to being the first to make a discovery or construct a solution to a problem. Take a look at the dispute between Leibniz and Newton over who invented calculus for a really vivid example.[4] Turing was scooped on his discovery that there is no definite procedure for answering questions about arithmetic by the American Alonzo Church. But, in a twist, the journal he sent his result to accepted his paper even though its editors were aware of Church’s result. In his edited collection of Turing’s works, Copeland explains the support that the mathematician Max Newman gave to the paper, recounted in a letter from Turing to his mother: Turing’s method for his version of the proof was ‘sufficiently different,’ in its use of hypothetical, general-purpose computing machines, to warrant publication.[5]

Turing’s philosophical legacy mirrors his mathematical legacy: he contributed not by having priority over some idea or argument, but by taking a pre-existing intellectual achievement and presenting it from a new perspective. The paper that cemented Turing’s philosophical legacy is “Computing Machinery and Intelligence.”[6] It contains the claim that to tell a thinking thing apart from a non-thinking thing, all you need to do is take a paradigm example of a thinking thing and check whether the candidate is indistinguishable from that paradigm in how it uses language. The editors of Mind could have sent the paper back to Turing with a cursory rejection that he had been scooped by Descartes, or even earlier by Publilius Syrus (“Speech is a mirror of the soul: as a man speaks, so is he”), but they didn’t, presumably because there was much in the paper that was interesting and new. There is even compelling evidence that Turing was aware of Descartes’s idea of a language test; I describe that evidence in my 2011 paper.[7]

One interesting and new thing in this paper is the idea that an appropriately programmed Turing machine (henceforth I’ll call machines that compute ‘computers’) could pass this language test: what Turing called ‘the imitation game’, and what we now call ‘the Turing test’. The reason Turing called it the imitation game is that he supposes in a few places that the best way for a machine to win such a game is to think of a particular person and then imitate them. Viewers of the movie The Imitation Game will recall the tense conversation between Turing and the detective investigating the mysterious break-in at Turing’s home, in which Turing pushes the point that machines might think in a completely different way than people do, given that even people think differently from one another. In “Computing Machinery and Intelligence” Turing does push this point. It is for this reason that he says, effectively, that passing the imitation game is a sufficient, but not necessary, condition for thinking.[8] Part of Turing’s philosophical legacy, unfortunately, is that commentators and scholars often ignore this key distinction in his paper. What the distinction means is this: if a machine is indistinguishable from a person in its capacity to carry on a conversation in printed text, then it thinks. It does not follow that if a machine (or person, for that matter) fails at the imitation game, then it doesn’t think.

Turing is a bit disingenuous in his famous philosophy paper. At the beginning he claims to want to avoid philosophical disputes and get to a clear question that can be answered. In doing so, he successfully reminded generations of researchers of the difficulty and importance of flexible, sensible use of natural language as a criterion for thinking. But for most of the paper, Turing responds to objections to the claim that a computer could think, objections one could level entirely independently of whether one accepted this criterion, or of whether one thought a computer could ever satisfy it.

One of these objections would have been very familiar to Descartes — he uses exactly this argument to explain why an automaton couldn’t exhibit the linguistic behavior that people do. Machines only do what their creators anticipate: imagine a mechanical duck that has one response when you tweak it here, and does something else when you put food in its mouth, and so on. Machines only do what they are set up to do by someone who knows how they work. Turing calls this objection ‘Lady Lovelace’s objection’.

The conclusion of Turing’s (and Church’s) paper is that the concepts of determinism and predictability come apart. Computers, which can be understood as the finite unfoldings of particular Turing machines, are completely deterministic. But there is no definite procedure for figuring out, in every case, what they’ll do: if there were, you would have a definite procedure for deciding whether any statement of arithmetic is true or not. There is no such procedure for the one, so there is no such procedure for the other. Computers are, in the general case, unpredictable, even by someone who knows exactly how they work.
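A familiar illustration of deterministic unpredictability (my example, not one Turing uses) is the Collatz rule: each next number is completely fixed by the current one, yet nobody knows a shortcut for predicting how long an arbitrary starting number takes to reach 1, short of running the computation itself.

```python
def collatz_steps(n):
    """Apply a fully deterministic rule -- halve n if even, otherwise
    replace it with 3n + 1 -- and count the steps until reaching 1.
    Every step is fixed in advance, yet the step count resists any
    known prediction short of actually running the process."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))
# 111: a surprisingly long run for so small a starting number
```

Whether every starting number eventually reaches 1 is itself an open problem, which makes the example a vivid, if informal, companion to Turing’s formal result.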

Therefore, the obstacle that Descartes and Lady Lovelace observed for machines, that they could never do anything their creators did not anticipate and could not take credit for, is removed. This is a deep and important result.

This philosophical legacy is in dispute. Another objection in his paper is what Turing calls ‘the mathematical objection.’ It goes like this. Suppose you are presented with some computer that its programmer says can think. Whether or not it passes the imitation game, someone might object that it doesn’t think, since in some sense it can be surpassed by a person who knows a little mathematics. In a construction very similar to the one used to prove that there is no finite decision procedure for checking whether a statement of arithmetic is true, we can design a question for the supposedly thinking computer with the following property: when asked this question, the computer must either give the wrong answer, or fail to give an answer at all.[9]
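The flavor of that construction can be sketched as a program that turns any claimed predictor against itself. This is a loose Python sketch, not Turing’s actual arithmetic construction; the names `defeat` and `spite`, and the predictor interface, are all my own invention.

```python
def defeat(predictor):
    """Given a claimed predictor -- a function that takes a program
    and returns True if it judges that the program would halt --
    build a program the predictor must get wrong: it loops exactly
    when predicted to halt, and halts exactly when predicted to loop."""
    def spite():
        if predictor(spite):
            while True:      # predicted to halt, so loop forever
                pass
        return "halted"      # predicted to loop, so halt at once

    return spite

# A predictor that answers "loops" for everything is refuted at once:
pessimist = lambda prog: False
print(defeat(pessimist)())
# halted
```

Whatever the predictor answers about `spite`, `spite` does the opposite, which is the self-referential twist behind the ‘special question’ the objection relies on.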

Turing’s response to this objection is, as I’ve argued, quite straightforward. In fact, I’ve claimed, it is exactly the same response given by generations of leading philosophers and mathematicians to this objection.[10] The response is also quite subtle. First, it asks: of the two possibilities for the special question we construct, can we assume that the machine will fail to give an answer? How do we know it won’t just give the wrong answer? In earlier writings, Turing does grant the ‘consistency premise,’ according to which any candidate machine for thinking must not give the wrong answer to the special arithmetic question we can formulate for it. The consistency premise has been defended by advocates of the mathematical objection, such as John Lucas and Roger Penrose, on grounds such as the logical fact that any statement at all follows from a falsehood: if a machine were to give the wrong answer to this question, there would be no logical restriction against its asserting anything at all.

As I understand Turing, the consistency premise must be rejected for at least two reasons. First, there is no decision procedure for figuring out whether an arbitrary computer will give the wrong answer or instead fail to give an answer at all. This is a direct consequence of the result of his 1936 paper! Second, people give wrong answers all the time; why should we assume that computers don’t, in the context of a debate over whether machines can think?

Once we lose the assumption that the computer will fail to give an answer to the special question we construct, Turing observes, then we also lose the assumption that we will know the answer ourselves to the special question we have constructed. There is a very subtle, important point that must be made here. Turing’s 1936 result rules out the existence of a procedure that allows us to test an arbitrary string of symbols to see if it’s a truth of arithmetic. However, there is no consequence for our ability to check whether any finite number of strings are truths of arithmetic. For that reason, we might feel smarter than some group of computers that are candidates for thinking, since we are able to deduce the answer to the special questions that we formulate for them; but “[there] would be no question of triumphing simultaneously over all machines”,[11] because there is no general procedure for doing so.

An alternative interpretation of Turing’s response to the mathematical objection says that computers, through their interaction with the world, can produce ‘non-computable’ behaviors that allow them to transcend, in some way, the mathematical fact of the question they are unable to answer. This interpretation places heavy weight on Turing’s incredibly prescient discussion of what we now call ‘evolutionary’ or ‘biomorphic’ computation, under Turing’s heading ‘Learning Machines’.[12] There are at least a couple of problems with this interpretation of Turing’s work.

The first problem is that it means an appropriately programmed computer can’t think; only an appropriately programmed computer that interacts with the world in the right way, transcending its computational limits somehow, could think. To assert this is to deny the central claim of Turing’s 1950 paper, so this interpretation ought to be rejected as an account of Turing’s own view. A great deal of philosophy continues to be written on whether or not some variant of the mathematical objection proves that computers can’t think.

The second problem with this interpretation is that it ignores the context of Turing’s discussion of learning computers. They come up, explicitly, in discussions of the objection that he spent the most time responding to: Lady Lovelace’s objection. In as-yet unpublished work, I’ve traced Turing’s correspondence with other ‘cyberneticists’ of the day about this issue. He was pushed on this issue, and clearly felt strongly that a standard, digital computer could think, despite its behavior being a deterministic consequence of its construction and its inputs. It is in the discussion of Lady Lovelace’s objection that Turing also makes an observation that has been, in my own experience, the passage most cherished by academics who read his 1950 paper.

I’ll quote it here:

“The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles.”[13]

My own interpretation of Turing’s philosophical work, supported by this passage, is that he held a scientific view: he believed that computers would eventually be able to accomplish all of the mental tasks that people can, as expressed in natural language. He also held that the computers that could do this would be surprising, even to someone who knew how they worked. Neither of these claims was offered as a self-evident truth, but rather as a claim to be tested through experiment and investigation.[14] The passage quoted above contains within it both a plea for taking an experimental approach to computers and an argument against those who think that determinism and creative surprise are incompatible: their view rests on an intuition that does not hold up against the mathematical arguments available.

Artificial intelligence is at an exciting, unprecedented stage. Tasks that researchers long failed to get computers to accomplish are now routinely solved by them. Computers can transcribe spoken language fairly well, translate between languages at a useful level, and make pretty good predictions about human preferences and behaviors. Successes at these tasks have largely come from applying ‘machine learning’ algorithms to very large data sets, and are much closer to Turing’s vision of what a learning machine might look like than the unsuccessful attempts to build artificial intelligence in the decades immediately following his death. The behavior of computers programmed this way is often unpredictable and surprising (but completely deterministic). Turing’s philosophical legacy is still unfolding; as the movie The Imitation Game makes abundantly clear, it is a tragedy that Turing didn’t live to see more of it.

Footnotes & References

[1] Available at

[2] Turing, A. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42:230–265.

[3] Copeland, B. J. (2000). Narrow versus wide mechanism: Including a re-examination of Turing’s views on the mind-machine issue. The Journal of Philosophy, 97(1):5–32.

[4] Described in detail at

[5] Copeland, B. J., Editor (2004). The Essential Turing. Oxford University Press, pg. 207.

[6] Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236):433–460.

[7] Abramson, D. (2011). Descartes’ influence on Turing. Studies In History and Philosophy of Science Part A, 42(4):544 – 551.

[8] See, for example, Turing (1950) pg. 435: “May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.”

[9] For a detailed account of how to construct this question for a given computer, see
Abramson, D. (2008). Turing’s responses to two objections. Minds and Machines, 18(2):147–167.

[10] Ibid.

[11] Turing (1950), pg. 445.

[12] Turing (1950), pg. 454.

[13] Turing (1950), pg. 451.

[14] These are the conclusions of how to interpret the Turing test in Abramson (2008).

Darren Abramson
Dr. Darren Abramson is Associate Professor of Philosophy at Dalhousie University. He has published several articles on the philosophy of mind and the history of computer science, including "Turing's Responses to Two Objections" (Minds and Machines, June 2008) and "Descartes' Influence on Turing" (Studies in the History and Philosophy of Science, 42 (2011) 544-551).