An alternative Turing Test

During a recent discussion with Tristan, the subject of the Turing Test arose. For those who are unfamiliar, the test is intended as a way to determine whether a machine has intelligence. You set it up so that the machine can converse with a human being – for instance, through a text-based, instant-message-style conversation – and if the person thinks they are talking with another human, that can be taken as evidence that the machine is intelligent.

Setting aside the question of how good an intelligence test this really is (a computer could pretty easily trawl a database of human conversations to produce convincing conversation), it seems like there is another sort of test that would demonstrate a different kind of intelligence: namely, the moment when a machine or a computer program first becomes aware of itself as being a machine or computer program.
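The database-trawling trick mentioned above can be sketched as a toy retrieval chatbot: store a set of (prompt, reply) pairs from past conversations, and answer a new message by reusing the reply whose stored prompt overlaps it most. The tiny corpus and word-overlap scoring here are illustrative assumptions, not a description of any real system.

```python
# Toy retrieval-based responder: it "converses" by reusing replies from a
# small corpus of past (prompt, reply) pairs. The corpus is invented for
# illustration; a real system would trawl a large conversation database.

def tokens(text):
    """Split a message into a set of lowercase words."""
    return set(text.lower().split())

CORPUS = [
    ("hello how are you", "I'm doing well, thanks. How are you?"),
    ("what is your favourite book", "I enjoyed reading it very much."),
    ("do you think machines can be intelligent",
     "That depends on what we mean by intelligence."),
]

def reply(message):
    # Score each stored prompt by word overlap with the incoming message
    # and return the reply attached to the best-matching prompt.
    def overlap(pair):
        return len(tokens(message) & tokens(pair[0]))
    return max(CORPUS, key=overlap)[1]

print(reply("Hello! How are you today?"))
```

A responder like this can seem fluent without understanding anything, which is exactly why conversational imitation is a weak signal of intelligence.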

It is possible that no machine made by humans will ever develop that level of self-awareness. Perhaps it is impossible to replicate whatever trick our brains use to turn flesh into consciousness. If it did happen, however, it seems like it could help to illuminate what self-understanding means, and what sort of mechanisms it requires.

Author: Milan

In the spring of 2005, I graduated from the University of British Columbia with a degree in International Relations and a general focus in the area of environmental politics. In the fall of 2005, I began reading for an M.Phil in IR at Wadham College, Oxford. Outside school, I am very interested in photography, writing, and the outdoors. I am writing this blog to keep in touch with friends and family around the world, provide a more personal view of graduate student life in Oxford, and pass on some lessons I've learned here.

5 thoughts on “An alternative Turing Test”

  1. How would you know that the machine is aware that it is a machine?

    And if the machine thought it was a human being, couldn’t it still meet every definition of intelligence?

  2. It is hard to know, because a machine could easily parrot the right sort of statements without understanding them. I suppose one demonstration would be the generation of novel ideas about the significance of being a machine or a program.

    A machine or program that thought it was human would have to be both intelligent and insane, I suppose, like a human being who thinks they are a robot. Quite possibly, any entity that is capable of real intelligence is also capable of suffering from delusions.

  3. In a technical sense, it is impossible to determine for sure whether any entity – biological or artificial – is actually self-aware, or just capable of putting on a convincing show of it.

    Still, I think it may be more interesting to have a machine that is aware of its nature as a machine and able to reflect on the implications of that condition than it would be to have a machine that can successfully imitate a human being. It’s a bit of an introvert/extrovert distinction, perhaps.
