Is The Human Brain a Computer? Alan Turing On Mind & Computers

By Professor John-Michael Kuczynski (Bard College)

December 31, 2015. Picture: Doug Bowman/Flickr.


This article is part of the Critique’s exclusive series on the Alan Turing biopic The Imitation Game


Introduction

The mind is an information-processing system. An information-processing system is anything that converts information into information. Through our eyes, ears, and other sensory organs, we upload information about the external world. Our central nervous system processes this information. It then outputs beliefs, which in turn lead to action.

Your PC is an information-processing system. Information is uploaded, converted, and then downloaded. The same is true of all computers. To that extent, computers are not so different from the human mind.

But the similarities end there. All computers are information-processing systems, but not all information-processing systems are computers, and the human mind is an information-processing system that is not a computer.

No one did more to demonstrate this than Alan Turing. And yet no one had a greater role than Turing in the development of the modern computer. These two achievements of Turing’s are two sides of the same coin, as we will see.

This paper is divided into two parts. In the first part, we will describe Turing’s analysis of computability. This analysis is the foundation of contemporary computer science. In the second part, we will discuss the bearing of Turing’s analysis on the question: Are we computers?

 

Part I: Turing’s Analysis of Computability

Turing did not himself create a physical computer; nor did he create a computer program. What he did was much more important. He created a meta-program—a blueprint for creating programs—known as the Universal Turing Machine. The Universal Turing Machine is really just a very precise description of what it is to compute the solution to a problem.

Not all cases of finding a solution are cases of computing a solution. A solution that is known on the basis of intuition or informal reasoning is not computed. To compute a solution is to use an algorithm to identify it.

An algorithm is a mechanical procedure. In elementary school, you learned mechanical procedures for adding and multiplying multidigit numbers. These are examples of algorithms.

The algorithms for multiplication and addition are concerned not with numbers or mathematical operations, but with physical inscriptions—not with the number 5 or the operation of addition, but with the corresponding expressions. According to these algorithms, whether a given inscription or inscription-sequence is permissible is a function of its shape, not of its meaning. Thus, the ability to carry out such an algorithm has to do with sensitivity to symbol­-shape, not with sensitivity to symbol-meaning. For this reason, machines can implement them and, in so doing, solve arithmetical problems.
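The point can be made concrete. Here is a sketch of the grade-school addition algorithm, written so that it manipulates only digit characters by table lookup and never treats them as numbers (the table and names are illustrative, not part of the article):

```python
# Column addition as pure symbol manipulation: the table below pairs
# digit *characters*, not numbers.

DIGITS = "0123456789"

# SUM_TABLE[(a, b)] = (digit character to write down, carry character)
SUM_TABLE = {
    (a, b): (DIGITS[(i + j) % 10], DIGITS[(i + j) // 10])
    for i, a in enumerate(DIGITS)
    for j, b in enumerate(DIGITS)
}

def column_add(x: str, y: str) -> str:
    """Add two numerals column by column, right to left, with carries."""
    n = max(len(x), len(y))
    x, y = x.zfill(n), y.zfill(n)          # pad to equal length
    result, carry = "", "0"
    for a, b in zip(reversed(x), reversed(y)):
        d, c1 = SUM_TABLE[(a, b)]          # add the column's digits
        d, c2 = SUM_TABLE[(d, carry)]      # add the incoming carry
        carry = c1 if c2 == "0" else c2    # at most one carry is nonzero
        result = d + result
    return result if carry == "0" else carry + result

# column_add("478", "356") -> "834"
```

Nothing in this procedure depends on what "4" or "+" means; it is sensitive only to symbol-shape, which is exactly why a machine can carry it out.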

All algorithms are concerned with the geometrical or otherwise strictly physical properties of objects. None are concerned with their semantic properties—with their meanings, in other words. For this reason, algorithms can be implemented by machines.

In the early 20th century, algorithms were invented for deducing statements from other statements. Thus was created the discipline of mathematical logic, the purpose of which is to mechanize the drawing of inferences. Computer science grew out of mathematical logic. With the advent of computer science, there arose questions, both practical and philosophical, as to the similarities, or lack thereof, between minds and computers.

By producing a precise and rigorous analysis of the concept of computation, Alan Turing made it clear how to resolve many of these controversies. The essence of Turing’s analysis can be stated in one sentence: If a problem can be solved in a mechanical fashion, the solution is the output of a recursive function.

We will start by saying what this means. Then we will say why it is true. Finally, in Part 2, we will discuss the bearing that it has on the question: Are we computers?

 

Explaining Turing’s Analysis

Let us start by defining the term “recursive function.” First of all, a recursive function is a mathematical function. A function in the mathematical sense is a rule that assigns outputs to inputs. Consider the function F(x)=x+1. This function assigns 2 to 1, 3 to 2, and so on. In other words, 2 is the output if 1 is the input, and 3 is the output if 2 is the input.

A function need not assign numbers to numbers. In fact, the great innovations in mathematical logic underlying computer science were made possible by the discovery that some of the most important functions have nothing to do with numbers. The functions dealt with by Boolean Algebra fall into this category, as do the functions dealt with by the propositional calculus. Both disciplines are indispensable to computer science.
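For instance, the truth-functions studied by Boolean algebra and the propositional calculus take truth-values, not numbers, as inputs and outputs. A minimal sketch:

```python
# Truth-functions: functions whose inputs and outputs are truth-values
# rather than numbers.

def AND(p: bool, q: bool) -> bool:
    return p and q

def OR(p: bool, q: bool) -> bool:
    return p or q

def IMPLIES(p: bool, q: bool) -> bool:
    # "if p then q" is false only when p is true and q is false
    return (not p) or q
```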

A function cannot assign different outputs to a given input. Beyond that constraint, there are no limitations as to what a function may assign to what.

A recursive function is one that is defined for each of its own outputs. Consider the series: 2, 4, 16, 256, 65,536… This series is generated by a recursive function, namely: F(1)=2 and F(n+1)=F(n)². This function assigns 2 to the first position in the series and, if it assigns m to a given position, it assigns m² to the subsequent position.

To take a simpler example, the series 1, 2, 3,… is generated by the recursive function: F(1)=1 and F(n+1)=1+F(n).
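Both recursions can be written out directly (a sketch; the function names are mine):

```python
def f(n: int) -> int:
    """F(1) = 2 and F(n+1) = F(n)**2: the recursion behind 2, 4, 16, 256, 65536, ..."""
    return 2 if n == 1 else f(n - 1) ** 2

def g(n: int) -> int:
    """F(1) = 1 and F(n+1) = 1 + F(n): the recursion behind 1, 2, 3, ..."""
    return 1 if n == 1 else 1 + g(n - 1)

# [f(n) for n in range(1, 6)] -> [2, 4, 16, 256, 65536]
```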

A recursive function is defined by (1) a rule that explicitly assigns a certain output to an initial input, along with (2) a rule for generating the output to any given input, apart from the initial input, on the basis of the output to the previous input.

Arithmetic is based on the following recursions:

(1)  n+0=n and n+(m+1)=(n+m)+1

(2)  n×0=0 and n×(m+1)=(n×m)+n

(3)  n⁰=1 and nᵐ⁺¹=nᵐ×n.

Given enough time, any given problem of arithmetic can be solved by employing these recursions. In the case where 0 is the relevant operand, the answer is explicitly given. Otherwise, the answer is obtained by repeating the rule contained in the second part of the relevant recursion.

Thanks to (1)-(3), arithmetic requires no thought at all. It is therefore possible to create machines that solve problems of arithmetic as unthinkingly, but also as unerringly, as those that create hubcaps.
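Recursions (1)-(3) can be turned into code almost verbatim: each function consults its base clause when the second operand is 0 and otherwise reduces the problem to the previous case (a sketch; the function names are mine):

```python
def add(n: int, m: int) -> int:
    return n if m == 0 else add(n, m - 1) + 1        # (1)

def mul(n: int, m: int) -> int:
    return 0 if m == 0 else add(mul(n, m - 1), n)    # (2)

def power(n: int, m: int) -> int:
    return 1 if m == 0 else mul(power(n, m - 1), n)  # (3)

# add(2, 6) -> 8; power(7, 2) -> 49
```

No step in any of these functions requires judgment; every call is settled mechanically by which clause of the recursion applies.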

Wherever there is a recursive function, there is a mechanical decision procedure.

According to Turing, the converse also holds: wherever there is a decision-procedure, there is a recursive function. In other words, if the problem can be solved in a mechanical fashion, the solution is the output of a recursive function.

 

Evaluating Turing’s Analysis

The justification for Turing’s claim lies not in a formal proof, but in considerations of a philosophical nature, which we will now state.

First of all, a list of true arithmetical statements, e.g. “5+3=8” and “7²=49”, is not a decision-procedure. Such a list presupposes an existing way of acquiring the relevant sort of knowledge. Therefore, to the extent that such a list is available, a decision-procedure cannot add to what we know. But a decision-procedure that cannot add to what we know is no decision-procedure at all.

More importantly, a decision-procedure is by its very nature general. A list of truths is a record of pre-existing knowledge. Since a decision-procedure is a way of acquiring new knowledge, no list of truths is a decision-procedure. It follows that a decision-procedure must apply to infinitely many cases that it does not explicitly mention. This means that any given decision-procedure must contain conditional information: information to the effect that if such and such holds, then thus and such also holds.

There are two ways of acquiring new information. One is by using our senses, that is, by observing the world. The other is by reasoning, that is, by drawing conclusions from what we already know. In using one’s senses to acquire knowledge, one obviously isn’t using a decision-procedure. Therefore, the first way isn’t relevant in this context, it being the second way that we must consider.

The second way necessarily involves knowledge of conditional truths: truths of the form if such and such is the case, then thus and such is also the case. (e.g. if Bill is in Paris, then he is not in Madrid).

So far as decision-procedures enable us to add to what we know, it is by virtue of embodying conditional truths. But decision-procedures cannot be strictly conditional, since such procedures must tell us that thus and such is the case, and not merely that thus and such is the case if such and such is the case. For example, (1) explicitly tells us that 2+0=2, and it implicitly tells us that 2+6=8.

To the extent that a given decision-procedure applies to infinitely many cases, it must be conditional. (Consider the second part of (1), which tells us that if 2+5=7, then 2+(5+1)=8.)

To the extent that a decision-procedure applies to any cases at all, it must be non-conditional. (Consider the first part of (1), which tells us that 2+0=2.)

But a decision-procedure must also tell you how to generate new knowledge on the basis of the assertoric information that it gives you; and this means that such a procedure must contain conditional information. (Consider the fact that, taken jointly, the two parts of (3) make it possible to evaluate any exponent, even though the first part makes it possible to evaluate only a single one and the second part, taken alone, does not make it possible to evaluate any.)

It is therefore inherent in what a decision-procedure is that any such procedure contain two components:

(a)   A base-clause to the effect that such and such is the case; and

(b)  A conditional clause (usually referred to as an inductive clause, “inductive” being a synonym of “recursive”) to the effect that, if such and such is the case, then thus and such is also the case.

Any statement having this two-part structure is ipso facto a recursion. Thus, where there is a decision-procedure, there is a recursion; and, as previously observed, where there is a recursion, there is a decision-procedure.

So Turing is right. A problem can be solved in a mechanical fashion (that is, its solution can be computed) if, and only if, the solution is the output of a recursive function.

 

The Concept of a Turing Machine

Let us now discuss Turing’s ingenious way of illustrating these rather abstruse points. Imagine an infinitely long ribbon of paper that is divided into squares. This ribbon passes through a machine, one square at a time. Some of the squares are blank; others have markings on them. When a given square is inside the machine, the machine scans it. If the square has a marking on it, the machine can either:

 

(1)  Erase that marking,

(2)  Add another marking,

(3)  Do nothing at all, or

(4)  Move one or more squares to the right or the left.

 

If the square is blank, the machine can either:

 

(A) Write a symbol on it;

(B) Do nothing; or

(C) Move one or more squares to the right or the left.

 

The machine’s behavior is predetermined by its program. This program consists of instructions to the effect that, if a given square’s condition is such and such, then the machine should do thus and such.

The two significant facts about this hypothetical situation are, first, that the machine can do nothing unless it is given data; and, second, that the machine’s behavior is a function only of the condition of the square that it is scanning at that time, coupled with its program.

The tape’s condition prior to being scanned corresponds to the base-clause of a recursion, and the machine’s program corresponds to the inductive clause.

Turing argued that, given any problem that we would intuitively regard as capable of being solved by a decision-procedure, the just-described machine can solve it.

A machine of the just-described kind that is limited to one program is a Turing Machine. Such a machine that can run any program is a Universal Turing Machine. If a given problem can be solved mechanically, it can be solved by a suitably programmed Turing machine and is thus “Turing-solvable.”
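For concreteness, a minimal simulator for a machine of the just-described kind can be sketched as follows (the table layout, state names, and Python conventions are mine, not Turing's notation):

```python
# A minimal Turing-machine simulator. The program is a table:
# (state, scanned symbol) -> (symbol to write, move direction, next state).

def run(program, tape, state="start", steps=1000):
    cells = dict(enumerate(tape))      # sparse tape; blanks are " "
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = cells.get(head, " ")
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(cells[i] for i in sorted(cells)).strip()

# Example program: append one mark to a block of '1's (unary successor).
succ = {
    ("start", "1"): ("1", "R", "start"),  # scan right over the marks
    ("start", " "): ("1", "R", "halt"),   # write one more mark, then stop
}
# run(succ, "111") -> "1111"
```

Note how the tape's initial condition plays the role of the base clause, and the program table, being a set of if-then instructions, plays the role of the inductive clause.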

 

A Turing-unsolvable Problem

Turing proved that not every problem is Turing-solvable. Consider the question:

(T) Is every problem Turing-solvable?

For argument’s sake, suppose the answer to be “yes.” In that case, there is a program P, capable of being run on a Turing Machine, having the following property: Given any program F and given any input I, P can determine whether a Turing Machine running F will “halt” if given input I.

Given the existence of P, it follows that there exists a program Q such that, for any program F, Q can determine whether or not F halts if F is run on itself.

Given the existence of Q, it follows that there exists a program R such that:

 

(1)           For any program F, R runs forever if, according to Q, F halts if F runs itself; and also such that

(2)           R halts if, according to Q, F runs forever if F runs itself.

 

Suppose that R runs on itself. There are exactly two possible cases to consider.

 

Case Number 1: Suppose that, according to Q, R halts if R runs itself. In that case, by (1), R runs forever.

 

Case Number 2: Suppose that, according to Q, R runs forever if R runs itself. In that case, by (2) R halts.

 

In the first case, R runs forever if R halts. In the second, R halts if R runs forever. Thus, if R runs on itself, it halts if, and only if, it does not halt. Since this is not possible, there cannot be such a program as R. Since R exists if Q exists, Q does not exist; and since Q exists if P exists, P does not exist. There is thus no way to compute the solution to the “Halting Problem.”

This does not mean that there is no way to solve that problem, only that there is no mechanical way to do so.
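The diagonal construction above can be sketched in code. Given any candidate halting-decider, we build R and read R's behavior off its definition, without running it; the decider's verdict about R run on itself is then guaranteed to be wrong. (The function names are mine, and halts stands for any claimed decider; the argument shows no such decider exists.)

```python
def refute(halts):
    """Given any claimed decider halts(program, input) -> bool, build the
    diagonal program R and show the decider is wrong about R run on itself."""
    def R(F):
        if halts(F, F):       # Q's verdict on F run on itself
            while True:       # (1) run forever if Q says F(F) halts
                pass
        return                # (2) halt if Q says F(F) runs forever

    prediction = halts(R, R)            # what the decider says R(R) does
    actually_halts = not halts(R, R)    # what R(R) in fact does, read off
                                        # R's definition (it loops iff the
                                        # decider says it halts)
    return prediction, actually_halts

# Any candidate is refuted. Try a decider that always answers "halts":
p, a = refute(lambda program, inp: True)
# p is True (the claim) but a is False (R(R) in fact loops): the claim fails.
```

The code never actually runs R on itself; it only exhibits the contradiction that would arise if the decider were correct, which is exactly the shape of Turing's argument.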

 

Part II: Philosophical Implications

A computer is, by definition, a mechanical problem-solver. To say that a problem can be solved mechanically is to say that no thought is needed to solve it. Thus, if a computer can do it, thought is not required to do it. Mechanical problem-solvers are not thinkers; they are thinker-proxies.

“But if computers don’t think,” it will be objected, “then they don’t solve problems.”

Computers don’t think, and they don’t solve problems. A mechanical ‘problem-solver’ is a non-problem-solver whose behavior makes it easy for an intelligent observer to solve otherwise difficult problems.

Suppose that “3+5=” is entered into a properly functioning calculating device. That device is so structured that, on the basis of the physical characteristics of that input, it outputs “8”. If it were on the basis of the meaning of that input that the device decided what to output, it wouldn’t be a calculating device at all. By the same token, since it is a device, it is blind to the meaning of both input and output. Therefore, it cannot possibly be aware of the fact that the output was meaning-appropriate to the input, and it has therefore not in any literal sense solved anything. What it has done is make it easier for someone who does understand input and output to solve the problem in question.
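A toy model of such a device, in the article's spirit: it maps input strings to output strings by their shape alone, via stored associations, with no grasp of what either means (a hypothetical sketch; the names and table are mine, and real calculators are built this way only in spirit):

```python
# A "calculator" that knows nothing of numbers: it pairs keystroke
# shapes with output shapes by lookup alone.

LOOKUP = {("3", "5"): "8", ("2", "2"): "4", ("1", "6"): "7"}

def device(keystrokes: str) -> str:
    """React to the physical form 'a+b=' with a stored symbol."""
    a, _plus, b, _equals = keystrokes   # unpack e.g. "3+5=" into characters
    return LOOKUP[(a, b)]

# device("3+5=") -> "8"
```

Only an observer who understands the symbols can see that the output is meaning-appropriate to the input; the device itself merely reacts to shapes.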

Turing himself was very aware of the distinction between beings, such as ourselves, that actually problem-solve and beings, such as Turing Machines, that merely simulate problem-solving behavior. In his landmark paper On Computable Numbers, of which Part 1 of this paper is a summary, Turing points out that, by a “computer”, he means a person who computes—someone who uses a decision-procedure to solve a problem.

Intelligence is needed to create algorithms, not to implement them. The very purpose of an algorithm is to render thought unnecessary. We use algorithms to do arithmetic precisely because we wish to think as little as possible about arithmetic.

A corollary of the fact that we can create algorithms is that we can evaluate them. Given an algorithm, we are able to determine whether or not it yields the right results. There is no program whose legitimacy we cannot question. A computer cannot question its own program; for a computer cannot reject its own program. A computer could do so only if its program allowed it to do so. But if a computer rejects its program on the basis of its program, it is following that program and therefore isn’t rejecting it.

This argument, it will be noticed, bears a certain resemblance to Turing’s proof that not all problems are Turing-solvable. It is suggestive that Turing included that proof in the same paper where he describes Turing Machines. To be sure, Turing had reasons of a strictly discursive nature for doing so. But his decision to include it may have reflected a desire on his part to make it clear that Turing Machines merely simulate thought. Had he expressed himself more explicitly on this matter, a half century of psychological research might not have been wasted on the tragically misconceived notion that brains are digital computers.


John-Michael Kuczynski
Dr. John-Michael Kuczynski is a Bard Center Fellow at Bard College, New York. His publications include Empiricism and the Foundations of Psychology: Advances in Consciousness Research (John Benjamins Publishing Company, 2012); Conceptual Atomism and the Computational Theory of Mind: A Defense of Content-Internalism and Semantic Externalism (John Benjamins Publishing Company, 2007); and other works covering related topics in philosophy, metaphilosophy, semantics, ethics, and political and legal theory.