Is the Brain a ‘Wet Computer’? Are Humans ‘Lumbering Robots’? The Closing of the Scientific Mind

Edi Bilimoria, 2018


 

Unless we learn how to prepare for, and avoid, the potential risks, AI [artificial intelligence] could be the worst event in the history of our civilisation.

Stephen Hawking1

 

Having reached roughly the midpoint of this book, it should be apparent that although the brain does display some of the mechanical functions and characteristics of a digital computer, to declare, as the majority of mainstream neuroscientists do, that the brain is nothing but a ‘wet’ computer2 is, arguably, a gross oversimplification (somewhat like saying that because a concert pianist displays some characteristics of an office typist – the use of fingers on a keyboard – a pianist and a typist are one and the same thing, or that a piano and a typewriter are the same instrument because they both have a keyboard). For a start, it is minds that created and produced computers, not the other way round. The product stands hierarchically on a lower plane than its producer. Brains must therefore stand at a higher level of sophistication and subtlety than the computers created by them. The fallacy of equating the brain with a mere computer has been pointed out in no uncertain terms by some of the world’s greatest philosophers and psychologists, as well as by scientists like the American David Gelernter (1955- ), professor of computer science at Yale University. In his article, appropriately titled ‘The Closing of the Scientific Mind’ (used for the subtitle of this chapter), he demolishes what he aptly calls the ‘master analogy’ unquestioningly accepted by the vast majority of mainstream scientists: that minds are to brains as software is to computers; to put it another way, that the mind is the software of the brain.3

 

This is the foremost view about the mind amongst mainstream scientists – never mind (excuse the pun) that science (by its own admission) has to date barely understood the subtleties of human consciousness. However, this idea is now so engrained that it would be instructive first to review in some depth the arguments behind it, also known as computationalism or cognitivism, before exposing the fatal weaknesses in the analogy. Accordingly, this chapter is written in two major sections: first, an elucidation and substantial development of the core theme of Gelernter’s article about the dangers of unquestioning and exclusive acceptance by science and society of computationalism and artificial intelligence as the sole basis of reality; and then a suggested way out of the bleak prospects for humanity that such acceptance would imply.

 

The Master Analogy – The Brain is Just a Computer

 

The Russian chess grandmaster and former World Chess Champion Garry Kasparov (1963- ) was beaten by the IBM supercomputer Deep Blue in 1997.4 So the obvious conclusion would be that artificial intelligence (computers) is smarter than even the finest of human brains. Or is it so obvious?

 

What is a computer? It may seem rather surprising to ask what appears to be an obvious question. But the word ‘computer’ is now so commonly used that the meaning of the term has become lost in the mass of popular connotations attached to it. Here is a representative list of definitions of a computer from authoritative literary and scientific sources:

  • ‘A usually electronic device for storing and processing data (usually in binary form5) according to instructions given to it in a variable program’ – Concise Oxford English Dictionary, ninth edition.
  • ‘An electronic computer in which the input is discrete rather than continuous, consisting of combinations of numbers, letters, and other characters written in an appropriate programming language and represented internally in binary notation’ – British Dictionary, definition of a digital computer.
  • ‘A machine that stores programs and information in electronic form and can be used for a variety of processes …’ – Macmillan Dictionary, new (second) edition.
  • ‘A device, usually electronic, that processes data according to a set of instructions’ – Collins English Dictionary: Complete & unabridged, sixth edition, 2003.
  • ‘A programmable usually electronic device that can store, retrieve, and process data’ – Merriam-Webster (https://www.merriam-webster.com/dictionary/computer), updated 25 Jun 2018.
  • ‘A machine capable of following instructions to alter data in a desirable way and to perform at least some of these operations without human intervention’ – Que’s Computer User’s Dictionary, second edition.

All of these definitions have one sense in common: words and phrases like ‘programmable’, ‘according to instructions given to it’, ‘written in an appropriate programming language’, ‘that stores programs and information’, ‘storing and processing data’, ‘that processes data according to a set of instructions’ and ‘capable of following instructions’ make it patently obvious that a human programmer is involved. The computer cannot program itself; neither can it store programs and information by itself; nor can it alter data ‘in a desirable way’ unless such ‘desires’ are input by a human programmer. Computers (and machines in general) do only and precisely what humans program them to do. This was clearly foreseen as early as 1936 by the English mathematician Alan Turing, who ‘showed that a single machine (the future computer) could process any problem given rules for the solution’.6 Did the supercomputer that beat Garry Kasparov program itself, or did human programmers input the necessary rules of the game – ‘rules for the solution’ in Turing’s terms – which the computer then followed, mechanically? More recently, IBM’s Project Debater, a 6ft-tall black-panel robot with an animated blue ‘mouth’, participated with human prize-winning debaters in two debates, one on the merits of subsidised space travel and the other on telemedicine7 – and performed so well that the audience voted the contest a draw. The robot’s delivery was deemed less effective than the humans’, but its arguments were judged to have more substance. Why did the robot’s arguments carry more substance? Simply because, as stated by Arvind Krishna of IBM, it had access to a vast data bank containing ‘hundreds of millions’ of research papers and articles. These were drawn upon to build an argument and a narrative to support it, while speech recognition was used to analyse its opponents’ arguments and respond to specific points raised.8
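The point bears illustrating. Below is a deliberately trivial sketch, in Python, of the retrieve-and-assemble behaviour just described; the topic keywords and canned rebuttals are invented for this example and bear no relation to IBM’s actual system. What the sketch makes vivid is that every sentence the ‘debater’ utters was authored in advance by a human.

```python
# A deliberately trivial 'debater': it retrieves and assembles only what
# a human wrote into it beforehand. (Purely illustrative - the keywords
# and rebuttals are invented and unrelated to IBM's Project Debater.)

ARGUMENT_BANK = {
    "cost": "The long-term returns of subsidised space travel outweigh its costs.",
    "risk": "Risk can be managed, as decades of crewed spaceflight demonstrate.",
    "priority": "Investment in space has historically driven advances in medicine and materials.",
}

def rebut(opponent_statement: str) -> str:
    """Return a pre-authored rebuttal whose keyword appears in the input."""
    for keyword, canned_reply in ARGUMENT_BANK.items():
        if keyword in opponent_statement.lower():
            return canned_reply
    return "I have no rule covering that point."  # the limit of its 'intelligence'

if __name__ == "__main__":
    print(rebut("The cost of such subsidies is unjustifiable."))
    print(rebut("What of the philosophy of exploration?"))
```

Scaling the bank from three entries to hundreds of millions of documents changes the scope of the retrieval, not its nature: the machine is still following rules, and drawing on material, supplied by humans.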

 

But despite what may seem obvious, one of the arch-champions of the idea that the brain is in principle no different from a computer – in fact, is a computer – is the American philosopher Daniel Dennett (1942- ). In his highly influential book Consciousness Explained (a better title would be Consciousness Explained Away) he asks us to ‘think of the brain as a computer’.9 For Dennett, ‘human consciousness can best be understood as the operation of a “von Neumannesque” virtual machine’, which means that human consciousness is a software program or application designed to run on any ordinary ‘computer’ such as the brain – hence the reference to John von Neumann (1903-1957), credited with the architecture of the modern digital computer. It is an insulting reference, at that, to the great Hungarian mathematician, who, given his religious beliefs (see footnote below10), never maintained that minds are to brains as software is to computers. In a limited sense the analogy is, of course, fitting, the reason being that software comprises coded instructions given to hardware. We can dismantle and dissect the hardware with a scalpel and view it under a microscope if we feel like it, but we cannot dissect software to find the mathematical code, the program, or the software programmer. The structures of software and hardware are wholly different, albeit the former is embodied by the latter (without hardware, software would have no significance, and vice versa – the one depends upon the other).
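To see in what limited sense the analogy is fitting, consider a minimal sketch of the software–hardware relation it trades on. Here PROGRAM is pure structure – a sequence of coded instructions with no physical parts to dissect – while the run() function plays the part of the hardware that embodies it (the miniature instruction set is invented purely for illustration):

```python
# Software as coded instructions handed to 'hardware'. PROGRAM is bare
# structure (data); run() is the machine that embodies and executes it.
PROGRAM = [
    ("PUSH", 2),
    ("PUSH", 3),
    ("ADD", None),
    ("PRINT", None),
]

def run(program):
    """A toy 'machine': blindly executes whatever instructions it is given."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "PRINT":
            print(stack[-1])

run(PROGRAM)  # prints 5
```

One can inspect run() line by line – the analogue of taking a scalpel to the hardware – yet nothing in that inspection reveals why the program was written or what it means; and PROGRAM, detached from anything able to execute it, is a mere abstraction. That is exactly the mutual dependence described above.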

 

So far so good, but this idea of the embodiment of an entirely different structure is then extrapolated to the notion of mind embodied by brain – a good example of a category error. It is argued (reasonably, at first appearance) that the brain has its own structure and so does the mind, which exhibits reason, memory, imagination and emotions, and happens to be ‘conscious’, whatever the latter term may mean to materialists. The content of the mind cannot be dissected with scalpels, seen through a microscope, or revealed by MRI scans, but the structure of the brain can indeed be so dissected and seen, because the brain is a dense mass of physical matter comprising neurons and other cells. Yet, the argument runs, the mind cannot exist apart from the brain, which wholly embodies it. Therefore minds are to brains as software is to computers; and minds cannot exist without brains, just as software cannot exist without hardware. Put another way, without the associated hardware in each case, minds and software are mere abstractions.

 

Some computationalists take this notion to extremes. For example, the American computer scientist, inventor, and futurist Ray Kurzweil (1948- ), who works, unsurprisingly, for Google, predicts that by 2029 computers will outsmart us and that their behaviour will be indistinguishable from that of humans. (In fact computers do outsmart us even today – in speed of number-crunching, if nothing else – but that is because human beings have designed and programmed them to do so.) And after the year 2045, Kurzweil maintains, machine intelligence will dominate human intelligence to such an extent that men will no longer be capable of understanding machines. By then humans will have begun a process of machinisation, or what he terms ‘transhumanism’: the merging of men and machines by cramming their bodies and saturating their brains with semiconductor chip implants, along with the fine-tuning of their genetic material (a theme revisited in the Epilogue to this book, in Part III). In passing, the Editor-in-Chief of The Week sums up the whole thing very neatly. He points out that such predictions are a geek’s pipe dream: to be like a human is not to be human. Sophisticated machine codes and algorithms may provide the former, but never the latter.11 But to continue the computationalist’s line of argument, the American computer scientist and authority on artificial intelligence Drew McDermott (1949- ), at Yale University, believes that biological computers (meaning the human brain) differ only superficially from modern digital computers. He goes on to assert that, according to science, humans are just a strange animal that arrived pretty late on the evolutionary scene, that ‘computers can have minds’, and that his avowed purpose is ‘to increase the plausibility of the hypothesis that we are machines and to elaborate some of its consequences’.12 (A strict syllogism would also mean that animal brains are likewise computers.)

 

And Kurzweil and McDermott are by no means alone. But it is reassuring to learn that a galaxy of eminent scientists and entrepreneurs on the world stage have voiced their grave concerns about the threat posed by artificial intelligence and the ethical dilemma of bestowing moral responsibilities on robots. For example, Niklas Boström (1973- ), the Swedish philosopher, Oxford University don, and Director of the Strategic Artificial Intelligence Research Centre, warns that supercomputers will outsmart us. Refer to his paper outlining the case for believing that we will have super-human artificial intelligence within the first third of this century, and how fast superintelligence can be expected to develop once there is human-level artificial intelligence.13 Then, according to the American entrepreneur, investor, engineer, and inventor Elon Musk FRS (1971- ), artificial intelligence poses a greater threat to humanity than nuclear war. In his address to students at the Massachusetts Institute of Technology he stated, ‘if I had to guess at what our biggest existential threat is, it’s probably that’.14

 

Stephen Hawking joined Elon Musk and hundreds of others in issuing an open letter, unveiled at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina. The letter warns (as does Musk) that artificial intelligence can potentially be more dangerous than nuclear weapons. Refer also to Hawking’s further warning in the epigraph to this chapter. Microsoft co-founder Bill Gates has also expressed concerns about artificial intelligence. During a question-and-answer session on Reddit15 in January 2015, he said, ‘I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern’.16

 

On the basis of the warning notes sounded by the likes of Hawking and Gates, can all this hype from the aficionados of artificial intelligence and computationalism simply be dismissed as the phantasies of nerds? We cannot do so, because those ideas have gained such prominence; moreover, they are highly pertinent to the whole question of the nature of consciousness and mind. We need to uncover and recover our humanness at all costs, and the war against ‘man equals computer’ – and, beyond that, ‘computer surpasses man’ – has to be fought in earnest. (Hopefully this will be a bloodless war, at least regarding the computers, as they do not, as yet, have a blood supply.) Therefore, we need to unearth the fallacies in the computationalists’ predictions. But first, out of fairness to the proponents of artificial intelligence, we need to understand exactly what they are contending, and their reasons for doing so.

 

The Computationalist’s Argumentation

 

Regarding the human being, the basic strategy is to eliminate all subjectivity, or to reduce it to the merely physically observable and measurable – consciousness and feelings included. Precisely because feelings and subjectivity are incompatible with the machine paradigm of man-equals-computer, once subjectivity is eliminated the case for the computer-mind is strengthened. And once the mind is reduced to a computer, all sense of personal responsibility, our pangs of conscience, our feelings for divinity and higher aspiration – everything to do with being truly human – are eradicated or explained away at one stroke. The adopted strategy can be enumerated in three stages.

 

Stage 1: Argue the case that man is just a computer and nothing more, by ignoring everything that distinguishes man from a computer.

 

Stage 2: Eliminate feelings and subjective states, in which case man is no different from a computer. Can a computer feel anything? Obviously not. In fact, Daniel Dennett has written a lengthy scholarly paper on why computers can’t feel pain.17 Surely, being such a distinguished and award-winning scholar, cognitive scientist, philosopher of science, and philosopher of biology, particularly in those fields that relate to evolutionary biology and cognitive science, he of all people must know.

 

Stage 3: If it can’t be measured, it obviously doesn’t exist – because science privileges accuracy (which can be repeatably quantified) over so-called truth (dismissed as a matter of subjective opinion), precision over meaning, quantity over quality, theory over experience, and respectability over validity and evidence.

 

This strategy is implemented through a three-pronged attack on subjectivity, by way of the interrelated arguments of:

  1. roboticism, or zombieism;
  2. functionalism; and
  3. brain states.

We now describe each of the above three arguments in turn, and then expose the overall weaknesses in their line of reasoning taken as a whole.

Roboticism, or Zombieism

The term ‘roboticism’ is apt, since Richard Dawkins, of worldwide renown in biological science and evolutionary theory, has described human beings as ‘lumbering robots’ – an epithet which necessarily also includes himself. He goes on to assert what he regards as a truth that still fills him with astonishment: ‘We are survival machines – robot vehicles blindly programmed to preserve the selfish molecules known as genes’.18 Such a ‘truth’ may fill us with sheer incredulity, even more so for being informed that ‘we animals are the most complicated things in the known universe’.19 But we may not take issue with it. Why?

 

Because, in essence, the argument goes like this. At the current rate of technological advance, it is not difficult to imagine a time when a robot could be constructed with all the software needed to display the characteristics of someone we know, say our best friend. One may ask him/her (it?) about his/her feelings, or whether (s)he has consciousness, or indeed whether (s)he is human, and (s)he would answer ‘yes’ to all of these. But the answers are merely due to the clever software programming that has been built into the robot. There is no way of telling that the robot is actually feeling or experiencing anything at all. (S)he is, in short, indistinguishable from the human one takes oneself to be. But why assume that one is human? Why isn’t one just like our lumbering robotic friend? After all, what is the point of consciousness? Has not Darwinian theory fully explained that nature selects the fittest creatures on entirely practical grounds, like survivability, in ‘survival machines – robot vehicles’? If we and our robotic friend behave in the same way, what survival purpose does consciousness serve?
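A minimal sketch makes the thought experiment concrete (the questions and replies below are invented for illustration). The program ‘reports’ feelings and consciousness, yet nothing in it feels anything: the affirmative answers are simply strings a programmer typed in advance.

```python
# The 'zombie' robot of the thought experiment: it affirms consciousness,
# but the affirmations are pre-scripted lookup entries, nothing more.
CANNED_ANSWERS = {
    "do you have feelings?": "Yes, of course I have feelings.",
    "are you conscious?": "Yes, I am fully conscious.",
    "are you human?": "Yes, I am human, just like you.",
}

def reply(question: str) -> str:
    """Look up a pre-scripted answer; no inner experience is consulted."""
    return CANNED_ANSWERS.get(question.lower().strip(), "Yes.")

for q in ["Do you have feelings?", "Are you conscious?"]:
    print(q, "->", reply(q))
```

From the outputs alone, nothing distinguishes this lookup table from a genuine report of experience – which is exactly the roboticist’s challenge: behaviour by itself cannot settle whether anything is being felt.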

Functionalism

The important point to note here is that subjectivity has been reduced to the physical, and therefore to the observable and measurable. An example: take something that has made generations of music lovers tearful, such as Schubert’s song Ave Maria. What does being tearful mean according to functionalism? It means that a certain set of physical events (like a compact disc playing, sound waves in the air from the speakers, the activity of the tear glands in the eyes) – but not actually crying – causes the state of mind known as ‘tearful’. This state of mind (along with others) makes one want to do certain things, like shedding tears. So ‘I want to cry’ means that the mental state (i.e. being tearful) has not been eliminated but reduced to just certain physical circumstances: to what one has been doing and what actions one plans to do, like putting on a CD of the Ave Maria in anticipation of becoming tearful. (Schrödinger would have trouble with this line of reasoning – see later.)
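In other words, the functionalist identifies a mental state with nothing more than a causal role: typical inputs, an internal state-token, and typical outputs. A minimal sketch of that picture follows (the events and the Agent class are invented for illustration):

```python
# The functionalist picture: 'tearful' is exhausted by a causal role -
# certain inputs produce a bare state-token, which in turn disposes the
# agent toward certain behaviour. Nothing here is felt; it is pure causation.
from dataclasses import dataclass, field

@dataclass
class Agent:
    state: set = field(default_factory=set)  # current 'mental states' as bare tokens

    def perceive(self, event: str) -> None:
        # Input side of the causal role: these physical events
        # bring about the state-token 'tearful'.
        if event in {"ave_maria_playing", "sad_memory_triggered"}:
            self.state.add("tearful")

    def act(self) -> str:
        # Output side: the token disposes the agent toward behaviour.
        return "sheds tears" if "tearful" in self.state else "carries on"

agent = Agent()
agent.perceive("ave_maria_playing")
print(agent.act())  # -> sheds tears
```

Everything there is to being ‘tearful’ in this sketch is exhausted by what triggers the token and what behaviour it produces; no felt quality figures anywhere. That is precisely the reduction functionalism proposes – and, as we shall see, precisely its weakness.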
