The Turing Triage Test: When Is A Robot Worthy Of Moral Respect?


December 31, 2014


This article is part of The Critique’s exclusive series on the Alan Turing biopic The Imitation Game.


The thought that if we can’t tell the difference between two things they must be the same has, I suspect, a special appeal to engineers and to a certain sort of scientist. If one conceives of one’s role as “building things that work”, then “good enough” may appear to constitute the entirety of success. In fact, it is clear that in many circumstances an inability to distinguish between two things reveals only our own limited epistemic powers and little, if anything, about the nature of the objects we are attempting to distinguish. At night, across a crowded street, I may mistake a shop floor dummy for an old acquaintance, but the fact that I do so hardly animates the manikin or petrifies the person I mistook it for. If the “imitation game” is going to serve as a test of the nature of machines or people, then, the specification of the conditions in which the test should take place will be crucial.

Yet, if the idea that we must treat things we cannot tell apart as if they were the same is less compelling than it might first appear, the case for the converse — that we must treat things we believe to be the same alike — is more compelling. I believe that this observation has dramatic implications for our understanding of the consequences if a machine should pass the “Turing test” — and also, perhaps, for our interpretation of the adequacy of that test.

Turing (1950) proposed his famous test as an answer to the question of how we might tell if or when a machine was capable of thinking. If a skilled interrogator were incapable of telling the difference between a machine and a human being in a conversation conducted over a teleprinter, we should conclude that the machine can think. Yet both the nature of conversation and the role it plays in human life suggest that this test, if valid, would prove more than that. Part of conversation, for instance, is enquiry into, and reflection upon, one’s internal states and their relations to past events. To successfully pass as a human being, a machine would need to be capable of telling us how it felt and of responding appropriately to praise, to jokes, to tales of tragedy and triumph, and to insults. It would need to be capable of feeling as well as thinking. Moreover, conversation often touches upon one’s future plans and projects, as well as one’s past successes and failures; presumably, then, a machine that can pass as a human in conversation will also need to be able to discuss these convincingly and thus should also be said to have plans and desires. Finally, in so far as human beings are rational and demonstrate this capacity by responding to reason-giving in conversation, a machine that could pass the Turing test should similarly be said to be rational.

A machine that was capable of both thinking and feeling, as well as conducting conversation at a human level, with plans and desires of its own, would be a marvelous thing. Indeed, I believe that the moment such a machine is created the question will naturally arise as to what we might owe it in terms of our own behaviour towards it. Must we be kind to it? Presumably if we threaten it with violence or death in conversation, it would respond with horror and aversion, just as you or I would. Should it be accorded rights, then? What is its moral status? In particular, would it be wrong to “kill” it?

Before we attempt to answer this question, it will be useful to render it a little more precise. Whether, and how much, it is wrong to kill various sorts of things (non-human animals, embryos, or foetuses, for example) has been one of the central obsessions of bioethics and applied ethics for the last four decades or so. While everyone in these debates agrees that it is ordinarily very wrong indeed to kill an adult human being, there is extensive disagreement about almost every other case. In this context, it has proven useful to introduce a semi-technical term, “persons”, to identify those entities that it is especially wrong to kill (at least as wrong as it is to kill an adult human being) and then to argue about what sorts of things are members of this class, and also about the moral status of those things that are not persons. Without necessarily wishing to endorse this intellectual move in its entirety, I note that the concept of “person” has the clear virtue of allowing us to pose the question of the moral status of machines without prejudice. A more precise question, then, would be: “Would a machine that could pass the Turing test be a ‘person’?”

In “The Turing Triage Test”, I proposed a thought experiment to answer the question “when will machines become persons?” Here is the “Triage Test”, as I originally described it (see also Sparrow 2012):

“Imagine yourself the Senior Medical Officer at a hospital which employs a sophisticated artificial intelligence to aid in diagnosing patients. This artificial intelligence is capable of learning, of reasoning independently and making its own decisions. It is capable of conversing with the doctors in the hospital about their patients. When it talks with doctors at other hospitals over the telephone, or with staff and patients at the hospital over the intercom, they are unable to tell that they are not talking with a human being. It can pass the Turing Test with flying colours. The hospital also has an intensive care ward, in which up to half a dozen patients may be sustained on life support systems, while they await donor organs for transplant surgery or other medical intervention. At the moment there are only two such patients.

Now imagine that a catastrophic power loss affects the hospital. A fire has destroyed the transformer transmitting electricity to the hospital. The hospital has back up power systems but they have also been damaged and are running at a greatly reduced level. As Senior Medical Officer you are informed that the level of available power will soon decline to such a point that it will only be possible to sustain one patient on full life support. You are asked to make a decision as to which patient should be provided with continuing life support; the other will, tragically, die. Yet if this decision is not made, both patients will die. You face a ‘triage’ situation, in which you must decide which patient has a better claim to medical resources. The diagnostic AI, which is running on its own emergency battery power, advises you regarding which patient has the better chances of recovering if they survive the immediate crisis. You make your decision, which may haunt you for many years, but are forced to return to managing the ongoing crises.

Finally, imagine that you are again called to make a difficult decision. The battery system powering the AI is failing and the AI is drawing on the diminished power available to the rest of the hospital. In doing so, it is jeopardising the life of the remaining patient on life support. You must decide whether to ‘switch off’ the AI in order to preserve the life of the patient on life support. Switching off the AI in these circumstances will have the unfortunate consequence of fusing its circuit boards, rendering it permanently inoperable. Alternatively, you could turn off the power to the patient’s life support in order to allow the AI to continue to exist. If you do not make this decision the patient will die and the AI will also cease to exist. The AI is begging you to consider its interests, pleading to be allowed to draw more power in order to be able to continue to exist.

My thesis, then, is that machines will have achieved the moral status of persons when this second choice has the same character as the first one. That is, when it is a moral dilemma of roughly the same difficulty. For the second decision to be a dilemma it must be that there are good grounds for making it either way. It must be the case therefore that it is sometimes legitimate to choose to preserve the existence of the machine over the life of the human being. These two scenarios, along with the question of whether the second has the same character as the first, make up the ‘Turing Triage Test’.” (Sparrow 2004, 206)

Could a machine ever pass the “Turing Triage Test”? I am inclined to doubt it, for reasons I will explain below.

However, before setting out my own views on this matter, which are inevitably somewhat controversial, let me first observe that according to an influential — perhaps the dominant — philosophical account of what grants entities moral status, advanced (albeit in slightly different formulations) by Peter Singer (1993), Michael Tooley (1972), and John Harris (1985) amongst others, machines that could pass the Turing test should also pass the Turing Triage Test. This account attempts to explain the wrongness of killing in terms of the extent and nature of the harm done to the entity killed. Singer, for instance, argues that there is a crucial divide amongst animals, including human beings, when it comes to the moral status they have, which relates to the capacity for self-consciousness. Some creatures are merely sentient and have experiences without having a concept of themselves as the subjects of those experiences. Other creatures, including (adult) human beings, also possess rationality and an awareness of themselves as persisting through time and, as such, are capable of forming future-oriented desires. Killing merely sentient animals is wrong in so far as it deprives them of the pleasurable experiences they might otherwise have had. However, killing members of the latter class involves a further — and arguably much greater — degree of harm because it also frustrates the desires and plans they have for the future.

On this account, then, persons are creatures that are both rational and self-conscious. According to its proponents, it is a virtue of this account that it avoids the charge of “speciesism”, which might be levelled at explanations of the wrongness of killing that make the category of persons coextensive with membership of a biological species, such as Homo sapiens. Some human beings, for instance infants or severely cognitively impaired adults, are not persons on this account and some non-human animals, for instance adult orangutans, gorillas, and chimpanzees, are persons.

If I am correct, then, that a machine that could pass the Turing test must possess both rationality and future-oriented desires, it should follow straightforwardly, on this account, that it would be just as wrong to kill such a machine as to kill an adult human being, and thus that one would face a moral dilemma when confronted with the choice between saving the life of a machine and saving the life of a human being. Indeed, not only would it be just as wrong to kill a machine that could pass the Turing test as to kill an adult human being but, depending on the capacities of the machine, it might even be more wrong. Because, on this account, the wrongness of killing a person is a function of the (future-oriented) desires that killing would frustrate, it will be more wrong to kill creatures that have more of such desires. It seems possible that thinking machines might have many more such desires than human beings. Given that, unlike human beings, machines can be fitted with replacement parts as their existing components wear out, there seems to be no obvious reason why thinking machines should die of “natural” causes. Immortal, or even simply very long-lived, machines may have more future-oriented desires than human beings because they have longer futures for which to make plans. Similarly, there seems to be no upper limit on the cognitive capacities of thinking computers, which could expand their intelligence by updating and enhancing their hardware. Machines that were more intelligent than human beings might have more ambitious and complex plans (and therefore desires) than we do.

Thus, on this account, machines that could pass the Turing test might not only pass the Turing Triage Test, they might quickly come to dominate it: whenever we were faced with a choice between saving the life of (or killing) a machine and a human being, we should choose in favour of the machine. One need not believe in the inevitability of a runaway “Singularity” (Kurzweil 1999), in which machines become tens, then hundreds, then perhaps thousands of times smarter than us the moment they are invented, to find this a disturbing prospect. Yet while the argument for machine personhood I have been considering looks convincing on paper, my suspicion is that the Turing Triage Test might be harder to pass than my treatment thus far suggests.

In order to see this, we must refocus on the suggestion that the choice between saving the life of the machine or the human being must constitute a moral dilemma. The nature of dilemmas is that they are difficult. Moral dilemmas are especially difficult because they involve choices that implicate some of our deepest values. Perhaps more controversially, because there are good and important reasons to choose either way, moral dilemmas are impossible to resolve without regret. When lives are at stake, they are impossible to resolve without remorse and, importantly, the object of this remorse is the particular person whom one eventually sacrificed in the course of choosing. This should be obvious in the case where the decision involves which of two human lives to save. It would be entirely understandable, for instance, if the person who was required to make this choice were haunted by it, and by the face of the individual they were unable to save, for the rest of their life. Indeed, if they did not experience such remorse, and there were no explanation for its absence, one would wonder whether they had experienced a moral dilemma at all. If it were impossible to believe them when they expressed “remorse”, one would be similarly unconvinced that they had confronted a moral dilemma.

How does this help us with the question of whether a machine might pass the Turing Triage Test? I believe, and have argued more fully elsewhere, that any adequately rich and nuanced account of the nature of remorse and its connections both to other moral emotions and to the forms of our interpersonal relations would reveal that machines will struggle to achieve the kind of individual moral personality necessary for us to sustain claims about experiencing remorse for having caused the “death” of a machine (Sparrow 2004). As Raimond Gaita (1989; 1990; 1991; 1999), amongst other scholars of the later Wittgenstein (see especially the work of Cora Diamond and Peter Winch), has argued, our moral language is deeply inflected with, and constituted by, the forms of human relations that make possible the distinction between genuine instances and false semblances of the moral emotions. Thus, for instance, it is essential to grief or remorse that these emotions are oriented towards particular individuals in their contingency and are expressed through certain characteristic gestures and facial expressions, as well as moods and subtleties of language (Gaita 1991, 150-154). The sorts of bodily expressiveness available to machines today, and for the foreseeable future, place significant limits on the sorts of interactions we can have with them and on our capacity to evaluate the appropriateness of other people’s responses to machines (Sparrow 2004). Were someone to claim that they were haunted by grief after having caused the death of their laptop, for instance, we would struggle to work out whether they were serious or only joking, even if we knew that their laptop had recently passed the Turing test. Indeed, we may lack the moral and intellectual resources to determine this. Yet if we cannot make the distinction between genuine and ersatz claims, we lack the basis to evaluate them, and thus to make them, at all.

While machines remain the sort of thing that might equally well take the form of a filing cabinet or a laptop as of an android, then, with “bodies” of metal and plastic that they inhabit only contingently, it may prove impossible for concepts like remorse and grief to gain application in descriptions of our relations with them. Yet if we cannot properly be said to experience remorse after killing a machine, this suggests that we could never genuinely experience a moral dilemma when it came to the choice between preserving the existence of a machine or a human being. Machines, even extremely sophisticated machines, will not, after all, be able to pass the Turing Triage Test!

This line of thinking is properly controversial. Those who feel the force of the argument that our moral reasoning should strive to avoid parochialism or “speciesism”, as I do too, may balk at the suggestion that the nature of an entity’s embodiment, or of our responses to that embodiment, should play any role at all in determining its moral status. In response, I can only observe that the idea that we should try to do moral philosophy without reference to any of the concepts or responses that inform our judgements about what it is to reason well in this domain can look equally far-fetched. At the heart of the dispute between the two accounts I have sketched out here is a dispute about the nature of philosophy and its limits.

As I cannot possibly hope to resolve that debate here, let me conclude instead by briefly returning to the implications of my discussion for the adequacy of the original Turing test. As I observed at the outset, an inability to distinguish between two things is only noteworthy if the conditions in which we are trying to make the distinction are appropriately specified. I have suggested that, when it comes to the task of distinguishing persons from non-persons, we should acknowledge that we can rely only upon the discriminations we make in the context of a “thick” understanding of our moral concepts and their place in human life. Yet the idea of a conversation also has its primary application in everyday human life. If we are inclined to think that the Turing test may prove too much, as it seems to when it all too easily grants machines capacities that would ordinarily make them persons, it may be that the concept of conversation it relies upon is too thin. Perhaps a better test for thinking in machines would involve a conversation “in person” rather than via a teleprinter? Machines that could pass that test would raise philosophical questions indeed!


Footnotes & References

Gaita, R. 1989. The personal in ethics, in Wittgenstein: Attention to Particulars (ed. D. Z. Phillips and P. Winch), Macmillan, London, 124-150.

Gaita, R. 1990. Ethical individuality, in Value and Understanding (ed. R. Gaita), Routledge, London, 118-148.

Gaita, R. 1991. Good and Evil: An Absolute Conception. Macmillan, London.

Gaita, R. 1999. A Common Humanity: Thinking About Love & Truth & Justice. Text Publishing, Melbourne.

Harris, J. 1985. The Value of Life. Routledge & Kegan Paul, London; Melbourne.

Kurzweil, R. 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Allen & Unwin, St Leonards, N.S.W.

Singer, P. 1993. Practical Ethics (second edition). Cambridge University Press, Cambridge; New York.

Sparrow, R. 2004. The Turing triage test. Ethics and Information Technology 6(4): 203-213.

Sparrow, R. 2012. Can machines be people? Reflections on the Turing triage test, in Robot Ethics: The Ethical and Social Implications of Robotics (ed. P. Lin, K. Abney, and G. Bekey), MIT Press, Cambridge, Mass., 301-315.

Tooley, M. 1972. Abortion and infanticide. Philosophy & Public Affairs 2(1): 37-65.

Turing, A. 1950. Computing machinery and intelligence. Mind 59: 433-460.

Robert Sparrow
Dr. Sparrow is Associate Professor of Philosophy at Monash University. “At the highest level of description my research interests are political philosophy and applied ethics; I am interested in philosophical arguments with real-world implications. More specifically, I am working in or have worked in: political philosophy, bioethics, environmental ethics, media ethics, just war theory, and the ethics of science and technology.” His publications include “Making better babies: Pros and cons” (Monash Bioethics Review 31(1): 36-59, 2014) and “Better than men? Sex and the therapy/enhancement distinction” (Kennedy Institute of Ethics Journal 20(2): 115-144).