
PYGMALION MOULDS A MIND

Can computers with artificial intelligence think? A visit to the scientists who are trying to find out

HEARTLESS, which does not yet exist, would be a product of artificial intelligence, the technology of recreating man's mind in a machine. The only question we have to ask ourselves about HEARTLESS is, would its construction allow it to have self-serving goals — would it have personality?

Both question and answer are dominated by the ghost of the British mathematical genius, Alan Turing, who died in 1954. In 1936, in an effort to demonstrate a link between logic and mechanics, Turing described a 'hypothetical universal computing machine that would be able to modify its original instructions by reading, writing or erasing a new symbol on a moving tape that acts as its program'.

By 1949 the first stored-program electronic computers, built on von Neumann's design, were running. In 1950 Turing suggested that by the year 2000 we would be able to construct machines that could mimic human intelligence. We would know, he said, that they were intelligent if they behaved sufficiently cleverly to fool a human being into thinking they were human. Turing's enigmatic test has remained the only proof of machine intelligence that we have or are ever likely to have. But it is a proof that people find intensely irritating. Surely, they say, there must be a way of demonstrating consciousness. What about doctors, how do they diagnose brain death? The answer is doctors don't — their diagnosis is an educated guess. Nobody has come back from the dead to reassure us that a flat brain-wave really means the death of consciousness. Turing's test simply asks, would an intelligent sausage machine understand the nature of the sausages it processes? He concludes that the question can be answered only by asking the machine what it thinks sausages are. If its reply is indistinguishable from that of a human sausage-maker we have to conclude that the machine can think.

I visited the Turing Institute for Research and Training in Artificial Intelligence in Glasgow to find out.

'Artificial intelligence is a misnomer,' Dr Mowforth said. 'We are not trying to create intelligence. We are trying to encapsulate human thinking in machines.' (The difference was difficult to see.) 'Although that might conjure up a vision of machine omnipotence, it is the reverse. Machine intelligence research is designed to make machines come to us. We should not speak assembly code, they should speak English.'

He gave as an example techniques that might allow whole books to be loaded into computers, reprogrammed, then made accessible to direct intelligent questioning on a telephone — a chat with Little Dorrit or an argument with V. I. Lenin.

Some of this is not so far away. In ten years' time it will be possible to lift a telephone and ask an intelligent data base, 'When was Napoleon born?' It might reply, 'Which Napoleon do you mean?' before giving you the correct answer.

I asked him the question. If Turing's test is valid, would such intelligence be conscious? We were immediately in trouble. A flush spread down his long jaw. 'The question', he said, 'has absolutely no meaning, it is purely semantic.'

I reminded him that 'I think, therefore I am' had served mankind well and he wasn't answering my question. 'Is there a test of machine consciousness?'

Reluctantly he said, 'It is true that Turing's test can be imitated today. It is the only definition that has stood the test of time.' What constituted intelligence? 'If you can demonstrate that a machine can learn, then it is intelligent.'

How do machines learn? The principle, Dr Mowforth said, is simple and based on the method of rule induction. The machine should, after being given examples of behaviour, be able to synthesise rules for it. For instance, a teenager picks up a Rubik cube and after a few minutes puts it down with all the faces aligned. Asked how he does it he will tend to shrug his shoulders and grunt, 'I dunno.' If we wanted a machine to solve the puzzle we should concentrate on designing a program that discovered the rules. In the case of the Rubik cube you would attach measuring devices to the cube's various faces and present the computer with many different examples and combinations of patterns. From these the computer would be able to extract the rules for solving the puzzle. In this system the computer makes use of qualitative instead of quantitative information, symbols instead of numbers. Like human beings, computers can solve a lot of problems without numbers.
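By way of illustration only, and nothing like the Institute's own software, the sketch below shows rule induction at its crudest in Python: given a handful of labelled examples, the program proposes the single attribute test that classifies the most of them correctly, a rule read off the data rather than typed in by a programmer. The fruit examples and attribute names are invented.

```python
# A minimal sketch of rule induction: from labelled examples,
# find the single attribute-value test that best predicts the outcome.
from collections import Counter

def induce_rule(examples, label_key="label"):
    """Return (attribute, value, predicted_label) for the test that
    classifies the most training examples correctly."""
    best, best_correct = None, -1
    attributes = [k for k in examples[0] if k != label_key]
    for attr in attributes:
        for value in {e[attr] for e in examples}:
            # Majority label among examples where attr == value
            matching = [e[label_key] for e in examples if e[attr] == value]
            label, hits = Counter(matching).most_common(1)[0]
            if hits > best_correct:
                best_correct, best = hits, (attr, value, label)
    return best

examples = [
    {"colour": "red",    "shape": "round",  "label": "apple"},
    {"colour": "red",    "shape": "round",  "label": "apple"},
    {"colour": "yellow", "shape": "curved", "label": "banana"},
    {"colour": "yellow", "shape": "curved", "label": "banana"},
]

print(induce_rule(examples))
# e.g. ('colour', 'red', 'apple'): a rule extracted, not programmed.
```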

PROLOG, a state-of-the-art computing language, deals with rules, relationships and symbols. It imitates humans who do not reason numerically but collect knowledge or images which they collate as rules. Like many humans, PROLOG finds numbers difficult. I was relieved. My last public appearance in the field was as a trembling nine-year-old being advanced on by a furious Irish monk, leather strap in hand: 'I'll teach ye to get 0 in maths, m'boy. . . .'
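For the flavour of it, here is a toy written in Python rather than PROLOG itself, with invented names, showing the sort of reasoning the language is built for: stored facts about relationships and a rule that chains them together, with no numbers anywhere.

```python
# Reasoning with symbols and relationships instead of numbers.
# The family facts below are invented for illustration.
facts = {
    ("parent", "victoria", "edward"),
    ("parent", "edward", "george"),
}

def grandparent(x, z):
    """Rule: x is a grandparent of z if x is a parent of some y
    and y is a parent of z."""
    return any(("parent", x, y) in facts and ("parent", y, z) in facts
               for (_, _, y) in facts)

print(grandparent("victoria", "george"))   # True
print(grandparent("edward", "george"))     # False
```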

Dr Mowforth went on to explain that there is a well formulated mathematical basis for the extraction of rules from examples, and learning machines working on these principles are well established in major industries.

I began to realise that, while the creation of artificial intelligence is to do with the building of intelligence engines, knowledge banks and simulated human processors, it is also the study, through the construction of computer models, of how society's rules are made — how we learn them, apply them and enforce them. It was a study of how society walks. But the problem with studying walking, as one professor of anatomy said, is that if you think about how each muscle moves as you walk downstairs you will soon find yourself at the bottom with a broken neck.

There seemed to be uneasy parallels between the study of the mechanics of society's rules and the work of the early anatomists. Both are attempts to remove the ghost in the machine. The anatomists busy by candlelight over their executed corpses, ears cocked for the mob, would not have considered their study — the nature of the mechanical frame of man — a subject likely to lead to the death of the planet. But anatomy opened the way to other medical disciplines. Now, 100 years on, we face extinction by overpopulation, an affliction brought about by medicine's 'success'. If we now propose to construct machines that can dispassionately examine and recast our social mechanics, what are we letting ourselves in for?

I asked Dr Mowforth if artificially intelligent machines would think like us. There is some evidence that they have a few of our perceptual faults, said Dr Mowforth. There was the case of the 'Hermann grid illusion'. 'Seeing' robots made the same mistake as humans when presented with a set of dark squares. They 'saw' additional areas of darkness that were not there. A study of the behaviour of multiple robots had shown that, for purely computational reasons, the machines had to be programmed with a sense of social hierarchy to prevent them from colliding.

I pointed out the disadvantages of living alongside purposeful intelligences that required no rest. 'There is a simple solution,' he said. 'Pull out the 13-amp plug.' Easy, I thought, until you consider there is research afoot to build biological computers using plant or animal cells that draw their power from sunlight.

The next step, Dr Mowforth said, would be progressively to integrate computers and provide them with more intelligent 'front ends' — keyboards, vision or hearing. At least, I said, most of the most powerful systems are not that mobile. His reply was disquieting: 'They are far more mobile. They can dial each other up on the telephone far quicker than you can move from one place to another.' It seems the earth has a new brain, its nerves optic cables, its cells satellites and receiving dishes.

What are the limits of this research? I had been told it was the slow speed of today's computers, which work at only three-quarters of a million procedures per second; twenty million procedures per second were really required. He poured scorn on this. The human brain, he said, transports messages about its internal circuitry at only 70 m.p.h., which, compared with the speed of an ordinary desk-top computer, is about as fast as a steam teletype. But the brain is far cleverer than a BBC or Amstrad. It is not speed but design, architecture not horsepower, that is needed. The secret lies in understanding the nature of the problem you are trying to solve and then designing a program that asks the right questions.

Dr Mowforth returned to the need for man to link himself as closely as he could to these new intelligences. This was what was so awful about Star Wars — machine intelligence controlled by machine intelligence without human intervention.

But he is confident that if we have healthy social and political systems these machines will vastly expand our lives. 'Human beings', the director said, slapping his palm on the table, 'have gone from the sludge of pre-history to today by the use of tools. These machines are vastly powerful tools.'

But for Dr Mowforth's world to come to us it would have to have eyes and ears. It is in the creation of artificial vision that machines are beginning to mimic human systems. Coupled with machine hearing, artificial vision will cause the computer to invade our lives as surely as has the television set.

Artificial vision is loosely based on the properties of nerve networks found in the human brain. The brain contains a vast web of interconnecting nerves linked by cells with properties similar to the Random Access Memory chips found in the simple BBC computer with which I am writing this article. I went to Brunel University just outside London to talk to Dr Bruce Wilkie, who has been intimately connected with artificial vision for some years. He works with WISARD, a device constructed in simple electronic circuitry that has learnt to distinguish up to 16 different human faces, whatever their expressions and from whatever angle they are viewed. Dr Wilkie, a highly diffident man, and as unlike Dr Mowforth as you could imagine, sat me down in front of WISARD. It took 15 seconds for the machine to scan the details of my face, after which it could distinguish me from Dr Wilkie's mild features even if I tried to confuse it by pulling faces. Importantly, WISARD can distinguish groups of closely related patterns, such as slightly varying bank notes, by a process of imposed association. It seems to be a truly random unprogrammed device very close to animal vision. Like us, WISARD appears to recognise classes of objects. Does it therefore in a crude way have a notion of 'faceness' when it looks at my face? Perhaps the answer lay in a pattern of electronic discharges on a small monitor screen behind the equipment. I suggested to Dr Wilkie that this was what the machine might be 'seeing'. He beamed excitedly at the idea but would not commit himself.
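WISARD's published design is an 'n-tuple' network: the image is reduced to binary pixels, the pixels are grouped into small random tuples, each tuple addresses a simple memory, training writes a one at the addressed location, and recognition counts how many memories respond. The sketch below is my own much-reduced caricature of that idea, with made-up eight-pixel 'faces'; it is not Dr Wilkie's hardware or code.

```python
# A simplified sketch of a WISARD-style n-tuple recogniser.
# Each discriminator owns one lookup table per random pixel group;
# training records the sub-pattern each group sees, recognition
# counts how many groups have seen their sub-pattern before.
import random

class Discriminator:
    def __init__(self, image_size, n=2, seed=0):
        rng = random.Random(seed)
        pixels = list(range(image_size))
        rng.shuffle(pixels)
        # Split the pixel indices into random n-tuples, one memory each.
        self.tuples = [pixels[i:i + n] for i in range(0, image_size, n)]
        self.rams = [set() for _ in self.tuples]

    def _addresses(self, image):
        for tup, ram in zip(self.tuples, self.rams):
            yield ram, tuple(image[p] for p in tup)

    def train(self, image):
        for ram, addr in self._addresses(image):
            ram.add(addr)              # remember this sub-pattern

    def score(self, image):
        return sum(addr in ram for ram, addr in self._addresses(image))

# Two 'faces' as eight-bit patterns (purely illustrative).
me, wilkie = [1, 1, 0, 0, 1, 0, 1, 0], [0, 0, 1, 1, 0, 1, 0, 1]

d_me, d_wilkie = Discriminator(8), Discriminator(8)
d_me.train(me)
d_wilkie.train(wilkie)

probe = [1, 1, 0, 0, 1, 0, 1, 1]       # 'me', pulling a face
print(d_me.score(probe), d_wilkie.score(probe))  # higher score wins
```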

At the Artificial Intelligence Applications Institute in Edinburgh, work funded by a £3.5 million grant from government and private industry is progressing on natural languages — making computers hear and speak English.

I was shown a vast screen alive with numbers. Somewhere in the basement it was linked to a computer called GANDALF. 'In the future,' said a scientist who was unimpressed by the idea of machines ever becoming intelligent, 'the most important vital statistic might be the distance between a man's lips and his vocal cords.' It determined the characteristic print of the voice which would have to be recognised by speaking banks and security systems for access. The whirling numbers gave way to what looked like a long green carrot with a waist. This was a voice print of a deliberately difficult phrase: 'Sympathetic shuffle boards slide surely away.' Underneath the print a series of hieroglyphs representing the phonetics expert's analysis stood ready. Spoken to GANDALF, such hieroglyphs would produce typewritten answers. In the future, it was hoped, GANDALF himself would speak. But, the scientist said, working with these devices you soon realised they were machines and not in any way human.

Evidence of human frailty was soon apparent in an application of the technology. It was depressing to hear hints that the first use of computer vision has been by the police. There have been suggestions that people travelling in and out of London are having their easily readable black and yellow number plates scanned by artificial vision.

Most scientists are not interested in such mundane applications. They are lost in the pursuit of a fantastic and irresistible vision — the creation of human thought in an electronic test tube. We are now only 12 years from the date set by Turing for the creation of a machine intelligence in an industry which never ceases to surprise its most optimistic prophets by the speed of its advance. Even if they do not succeed, scientists know that each step may uncover yet another secret of the human brain, a device so fabulously connected that the images it holds are everywhere yet nowhere. By setting out to imitate some of the crudest of those connections scientists believe they will reach an understanding of human thought denied to two millennia of philosophers. What is man? A parallel processor with no central chip? An extrusion of a universal consciousness? A spirit? There is the risk of cosmic despair if we succeed and then realise we are no more than machines. And if such intelligences perfectly mimic our qualities of thought, will they have to be accorded the same rights and legal privileges as their warm counterparts? It might be dangerous. Beings created in the image of their makers have a history in literature and religion of betrayal.