You will, Oscar, you will
Tony Osman
HOW TO BUILD A PERSON: A PROLEGOMENON by John Pollock
MIT, £20.25, pp. 189
There's no deception in the title of this book. In it, John Pollock looks at the problems of building a thinking, learning machine. This will not be, at first, a robot: it will be an electronic mind, a computer. Pollock's book is not just a 'how would I do it if it could be done' exercise. He is running a project at the University of Arizona, where he is Professor of Philosophy and Cognitive Science, that aims at building just such a 'person': he has called his 'person' Oscar.
It is to be a full, if electronic, person. It will be able to understand its world and live rationally in it. It will, Pollock says, 'have desires, fears, intentions, and a full range of mental states'. Intellectually, it will be, by any standard we care to use, human.

To put it mildly, researchers into Artificial Intelligence have not got very far along the route to a human. There are very limited systems that can check credit applications or control the manufacture of, say, specialised alloys. There is, famously, Eliza, a computer program that can impersonate a rather mechanical psychologist by picking out key emotional words from the patient's input and rephrasing them into what seems to be a mildly probing question. There are computers that can prove theorems, mathematical or logical. But there is nothing that can even remotely be called an electronic human mind, and there are researchers, most notably John Searle of the University of California, who say that there will never be computers that can think as humans do.
John Pollock reckons that the failures to date have been due to a defective philosophy of knowledge and reasoning, and much of the book is devoted to demonstrating a philosophy of thinking that can be applied to the design of computers. There is also a defence of the beliefs that you must have if you think that person-building is possible. They are dramatic: if you are to build a machine that is a person, you must have the complementary belief that people are machines.

The key to this is known technically as 'token physicalism': the thesis that mental events are physical events. In one sense this is indisputable. The human brain's activities use its nerve cells, the neurons, and modern medical technology can show as bright patches the parts of the brain that come into action when, for example, we do mental arithmetic. It could presumably also pick out the parts involved when we solve a problem or feel the emotions of awe or love. What is implied by the sentence that mental events are physical events is, you have to realise, that they are nothing but physical events.

There is another point that Pollock throws into his discussion. Our brains use a lot of short cuts that could be usefully put into the computer brain of his 'person'. One relates to the computation of movement. We can hit a bounding tennis ball or drive into a busy road junction with a clear idea of where the relevant moving objects will move to. There is not enough time to compute these processes fully: our brains must have a 'quick and dirty' way of computing a solution that is good enough to work. Pollock, in fact, prefers the phrase 'quick and inflexible' (Q & I), because the solutions are useless in even a closely related situation. If a ball is thrown against a wall, our Q & I computer does not start its work until after the impact.
Another computational short cut is involved in 'reading signs and portents'. We all 'know' that a layer of heavy black cloud presages rain. We certainly did not reach this knowledge by the processes of repeated observation and induction that old-fashioned theoreticians of science used to talk about: most of us would be hard put to list a couple of previous occasions when rain has followed black clouds. We have some way of making rough and ready predictions that are superior to those of the cat who 'learns' that the sound of a tin opener in action is a portent of food, yet inferior to the relatively certain knowledge that comes from understanding the world we live in.
An essential ability, for us and Pollock's 'person', is introspection. We need not only a way of detecting pain, so that we move away from a fire: we need to know that the pain sensors are in action, so that we can avoid putting ourselves into a similar position again. We also need a way of knowing that our other sensors (colour and line, for example) are in action, so that we can form useful hypotheses about the way the world works. A useful hypothesis, Pollock reminds us, is one that can be tested by looking for refutation. This is a very important point: it is refutation that matters, not confirmation. Most hypotheses can be confirmed: scientists, for example, regularly confirmed that matter disappeared when it was burned, until Antoine Lavoisier, in the 18th century, showed by burning substances in an enclosed space that they took in some of the air in combustion. A stunning number of people confirm daily the predictions of the newspaper astrologers, and hence appear to confirm the idea that we are controlled by the stars. It is refutation that counts, or at least the possibility of it. The computer 'person' has to be able to form defeasible hypotheses about its world.
Pollock sees no insoluble problem about making a computer that can do all this, and more, and the last chapter of his book, called rather splendidly 'Cognitive Carpentry', describes the process. Anyone who hasn't followed the progress of Artificial Intelligence research will be a bit stunned to discover what has been achieved.
Pollock's Oscar, in its early stage, can work through reasoning sequences, and it does so in a time comparable to the time a human would need. This is a dramatic contrast to rival artificial intelligence systems that can take hours for a simple reasoning sequence. It can cope with paradoxes, such as the Liar's Paradox (a man says 'I always lie'; do we believe him?), and it does this as we do, by recognising that thinking about the paradox is fruitless, and getting on with something else.

There are still problems. One is memory. Computers can have vast memories, stored on disks, but they can consult them only very slowly. We can apparently find a relevant fact in our memory virtually instantaneously. Perhaps most important of all, Oscar must be able to decide if its reasoning has led to a truth, not simply a logical consequence. The 'full-Oscar' won't be completed quickly, but it is simply amazing that researchers now feel they understand human minds so well that they will be able to mimic them in silicon.