Over supper a few nights ago I had a conversation with Damien (not his name) which roamed around the relationship between science and philosophy. Damien is a young man highly educated in the sciences and with, I suspect, an IQ which would give me an inferiority complex. He told me that he avoided philosophy because it yielded no certain answers and, by its nature, could never do so. Science on the other hand did give clear answers based on empirical evidence, even if we needed sometimes to modify them in the light of new discovery.
Of course I understood, but I wondered if the two could complement each other. And I explored the track of artificial intelligence. I had recently listened to a scientific conversation on the nature of the brain. It was assumed that the brain was ultimately mechanical. With its roughly 100 billion neurons and 100 trillion connections we might never reproduce it in practice. But what if we could? Were we to create a robot with an exactly reproduced human brain, and a body to match, would we have created a real human being? Of course we never reached an answer. So pause, and think what other elements the robot might need to have.
Our first test might be to kick the robot’s shins. Would we expect it to react? Yes, almost certainly – because it would be programmed, like us, to protect itself. So various internal actions would be triggered. Some would start the process of healing; others would jerk the robot into crisis action to decide, almost instantaneously, whether to escape or whether to bonk you on the nose. But would it be conscious in the sense that we use that word? Science has so far failed to answer that question. I am clear, however, that kicking a robot’s shins is not in itself a moral matter; kicking your shins would be.
The next test would be whether our robot had free will. Some scientists would reject this test on the grounds that free will does not exist even in humans. Unfortunately this proposition is self-defeating. If our conclusions are no more than the outcome of a personal history which we can neither fully know nor control, then the denial of free will can itself claim no truth value, for we would already be biological robots.
Could our robot be a moral entity? There are two elements here.
The first concerns our ability to recognise the good. Different ways of discovering this have been developed, but perhaps the most popular is Utilitarianism. Its basic principle is the greatest happiness for the greatest number. It is an attractive approach but it is difficult to apply in practice except in simple cases. Our robot, however, might have the advantage here, through its ability to process vast quantities of human data and through the algorithms needed to assess all the options.
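The kind of calculation a utilitarian robot would attempt can be sketched in a few lines. This is a deliberately crude toy, not a serious moral calculus: the options, the people affected, and every number below are invented for illustration, and the principle is reduced to nothing more than a sum.

```python
def best_option(options):
    """Return the option whose total predicted happiness is greatest.

    `options` maps an option name to a list of per-person happiness
    changes (positive = benefit, negative = harm). This is "the
    greatest happiness for the greatest number" reduced to a sum.
    """
    return max(options, key=lambda name: sum(options[name]))

# Invented example: three people affected by each choice.
choices = {
    "tell a comforting lie": [3, -1, -1],  # one person cheered, two misled
    "tell the plain truth":  [1, 1, 1],    # a modest benefit to all three
}

print(best_option(choices))  # prints "tell the plain truth"
```

The hard part, of course, is not the sum but the numbers: assigning a happiness value to each consequence for each person is precisely where the approach breaks down outside simple cases.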
Much the same might be said for the Natural Law approach. Its rationale is that we flourish if we follow our true nature, which is given to us by God. By analogy, our dishwasher will flourish if we respect its nature as recorded in the maker’s handbook. I have suggested in the past that if the Church’s application of natural law – its reading, so to speak, of the maker’s handbook – could be programmed, we could instantaneously measure the moral status of any proposed behaviour.
The second element of course is moral obligation. If for instance I decide that I should be a truthful person, I can justify this in different ways. Perhaps I recognise the practical value of truth in society. Or I realise that if I am known to be untruthful I will not be trusted by others, and will be disadvantaged thereby. But such reasons are not moral; they are utilitarian. The obligation which is expressed as “I ought to be truthful” is of a different order. The philosopher A J Ayer claimed that such moral statements have no objective meaning: they merely record our individual feelings. But even Ayer might have jibbed if I had claimed that his fundamental views denied the possibility of his being a moral person. Yet he was a moral person, just as Professor Dawkins is a moral person — but both deny the intellectual substratum necessary to be this.
So introducing consciousness, free will and moral obligation into our robot is more than a technical problem waiting to be solved. These appear to be the unique characteristics of being a person, yet we can conceive of no programming skills which could address them. It is at this point that science and philosophy part ways, because they ask different questions. Science is concerned with the material and its measurement – continually seeking further solutions to causality. Philosophy asks: Who are we? Why are we here? What ought we to be? The answers lie within ourselves, and always just beyond our reach.