On the Analogy Between Mind/Brain and Software/Hardware


[This essay was written in December 1992 for my friends in our science book discussion group. Several of the books mentioned here were read and discussed by our group, and provided a common background.]


1.  As Marvin Minsky[1] and many other people have pointed out, most often the best way to understand something is to find a good analogy for it, to see that it is like (in the relevant ways) something else that we already understand. For ages people have struggled to understand the relationship between mental phenomena (mind) and corresponding physical phenomena ("matter", "body" or, as we now recognize, a specific part of the human body, the brain). So what is the best analogy available to help us understand the relationship between mind and brain?


2.  Until recently there were really no very good analogies available. This is one of the main reasons that people in general have remained very confused and puzzled by the relationship of mind to brain.


3.  However, there are also other important reasons for this continuing confusion and puzzlement—especially ideological reasons. Early human concepts of mind (generally confused and blended with the concept of "spirit", or "life-force") helped lead to, and supported, religious notions.

The modern concept of 'spirit' derives from the ancient Proto-Indo-European word *bhes-, which simply meant "breath". In fact, it is thought that this word arose in imitation of the sound of a heavy breath. (The dash at the end of the word means that the ending varied with the grammatical context, since Proto-Indo-European was a highly inflected language. The asterisk means that this word, like all other Proto-Indo-European words, has been reconstructed by the comparison of cognates in the various daughter languages, and is not directly attested, since Proto-Indo-European did not exist in written form.)

*bhes-, in its grammatical form *bhsukh-, became, via the standard phoneme shifts, psukhein (meaning "to breathe") in ancient Greek. From this came the Greek word psukhe or psyche, which meant not only "breath", but already by the time of Homer, also the breath-like animating force which left the body at death and continued to exist for a while as a "shade" or "shadow". You can see how this notion came about: when someone dies, they stop breathing; their breath, their psyche, their life, "leaves them".

Later, in the religious rites in honor of the Greek god of wine and nature, Dionysus, the psyche was reinterpreted for the first time as "a principle superior to the body, and imprisoned within it". Only with Plato (or possibly Socrates) did this "life force" or "spirit" become viewed as an "immortal soul". Thus we see the gradual transformation of a physical thing (breath) into a mystical, religious thing (soul). Many modern English words related to the mind and/or "soul", such as 'psychology', evolved from psyche.[2]

Religion depends to a great degree on the continuation of such primitive and confused notions of mind. The usual concept of God, for instance, depends on a conception of mind or "spirit" which is completely independent of physical reality, and more basic than physical reality. Religious authorities themselves recognize, and insist upon, this. One example that I recently came across is the following comment by Mary Baker Eddy, the founder of the Christian Science church: "All things are created spiritually. Mind, not matter, is the creator."[3] (In philosophy this underlying viewpoint is known as idealism.)

When class society developed, religion was placed in the service of the ruling class (first the slave owners, later the feudal lords, now the bourgeoisie). Fear of divine punishment for transgressing the laws and morals established by the rulers (or at least "adjusted" in order to serve the interests of the rulers)—and hope of eternal rewards in the "hereafter" for obeying such laws and moral codes—help the ruling class maintain its privileged position and keep the exploited working people under control.

Consequently, exploitative society depends to a large degree on religion, and religion depends to a large degree on absurdly primitive notions of mind and spirit. Thus the heavy force of the state and religion, and of their educational and indoctrination organs, is constantly brought to bear to perpetuate and defend pre-scientific conceptions of mind and its relationship to matter.

("Abstruse" philosophical and scientific questions often have more social importance than it might at first seem; they are often connected in important ways to the welfare of the people. That is one of the reasons why those of us who see those connections sometimes tend to get "so emotional" about such issues.)


4.  With the advent of computers, however, the analogy we have needed to enable us to understand the basic relationship of mind to brain has finally come into being. This is the parallel relationship of software to hardware. Not only is this by far the best analogy available, it is so far the only analogy we have which is deep enough to get to the essence of the relationship. (More than that, I believe it is the best analogy that is possible, for reasons which are implied later.)


5.  But despite the now widespread reference to human/computer analogies, they are seldom utilized to full effect. John McCrone, for example, in his book The Ape That Spoke: Language and the Evolution of the Human Mind, states that computers "provide some of the best metaphors we have come up with so far".[4] But he attributes this merely to the fact that computers are "the most complicated technology humans have created" and therefore presumably should be of some help in understanding the extremely complicated natural entity, the brain.

The fact that two things are both highly complex is of course a point of similarity between them, and makes them analogous with respect to that one point. But this in itself is the most trivial of analogies, which is essentially useless since we already know full well that both computers and brains are very complex.

Useful analogies are between entities that have many similarities, and especially similarities with respect to the essential structures, processes, functions, capabilities, attributes, and the like. And these are just the sorts of extended and essential similarities which do in fact exist between brains and computers. Thus McCrone's remark that "Computers have only a passing similarity to human minds"[5] is dead wrong.

McCrone says the danger is that we can "get carried away by metaphors" and adds that "If we talk about the mind as a computer, we may become blind to the many ways it is not like a computer."[6] Well of course any analogy can be abused; any two things which are similar in some respects are dissimilar in others—or they would not even be two different things. When we use analogies (or "metaphors", as McCrone prefers to say) we must of course keep in mind that they can be pressed too far and may consequently mislead us about the nature of the thing being investigated, rather than help us understand it. But to reject the use of analogies because of this potential abuse is to reject our primary method of coming to understand things. Should we refuse to use the best tool at hand just because it is possible to misuse it if we are not careful? Of course not; just be careful!

And beyond that elementary observation, there is another thing to mention here. McCrone fails to give any examples of "the many ways" in which the brain "is not like a computer". And in fact whether this claim has any truth to it whatsoever depends entirely on what you mean by the word 'computer'. Do you mean merely presently existing computers, especially the still very primitive mainframes and PCs that people are most familiar with? Or do you mean computers in general—including all possible computers constructed on the same fundamental principles as present-day computers?

Well, of course, in discussing and comparing concepts, we are talking about generalizations, abstractions, and not specific devices. Those of us who say that the brain is a computer (or that it is an association of many computers with various architectures, à la Minsky) are abstracting beyond the computer systems that have so far been constructed (but not really all that much beyond them), based on the general principles of the way that all computers work—from the most primitive to the most sophisticated ones conceivable. (And the human brain, incidentally, is by no means the most sophisticated computer system conceivable, even now.)

Thus all the obvious differences between the human brain and the PC on your desk are essentially irrelevant. One is made of silicon, etc., and the other of organic tissue. So what?! Nobody who really understands computers believes that they must be made out of silicon!

To show that the brain/computer analogy is invalid, misguided, misleading, or whatever, you must give examples of essential characteristics of computers in general which are definitely not characteristics of the mind/brain. No one has done anything remotely like this. McCrone does not give any examples "of the ways the mind is not like a computer" probably because the only examples he can give would be laughed at as irrelevant by those of us who think otherwise, and McCrone knows this!

To show that the brain/computer analogy is appropriate, one need only show some of the many parallels between the two, which are instructive, and which help us to understand the one once we understand the other. This has already been extensively done, and moreover the analogy is constantly being deepened.

However, my real claim is not that the brain and computers are "analogous", but that the brain is a computer. Actually it is much more enlightening to follow Minsky and think of the brain as an association of many computers with various architectures appropriate to their specific functions. It is true, however, that in a deep sense all possible general computer architectures are equivalent. Any computing system which is an association of computers of various architectures can be modeled in theory by a single computer with a single architecture (a Universal Turing Machine). Whether we call any computing system one computer or an association of many computers is really a matter of how best to simplify things for our own convenience.

The true analogy is not between brains and computers, but between brains and present-day primitive computers; i.e., between two different kinds of computers in the broader, conceptual sense.

In this essay I am also calling the mind analogous to software, and the brain analogous to hardware. Actually, I prefer to say the mind is just the software that runs on the hardware of the brain. I expect that is the way that things may be described in the future. The true analogies here are once again between the brain and present-day computer hardware (that is, between two kinds of hardware in the more abstract sense), and between the mind and present-day computer software (that is, between two kinds of software in the more abstract sense).

McCrone doesn't really utilize the analogy between brains and (present-day) computers until the next-to-last page of his book, where he finally remarks that "We could say that the story of the human mind so far has been one of constantly improved software running on the same old hardware." A nice comment (assuming he is just describing the overall situation since the evolutionary advent of anatomically fully modern humans some 100,000 years ago). But up until that point he relies exclusively on many other extremely limited "metaphors" or analogies such as "nets, pyramids, fragmented maps, foundations, extensions, spotlights, stages, and dozens of other things to try to illustrate how the mind works".[7] If he had focused throughout on the one truly deep analogy, that between mind/brain and software/hardware, he might have been able to overcome many of the limitations of his book, including his tendency towards positivism, behaviorism, and other varieties of naive materialism.

(Naive or mechanical materialism denies the reality of mind and mental phenomena, and wants to speak only of matter. Scientific or dialectical materialism does not deny the existence of mind, but explains it as an emergent, or higher-level property of certain very complex organizations of matter.)


6.  However, a great many people have recognized the importance of the computer analogy to the mind/brain. It has been seized upon far and wide. Nevertheless, there have been some serious problems in using this analogy correctly. Partly this is due to a misunderstanding of the distinction between software and hardware even among those who have a great deal of knowledge about computers.


7.  It is often said for example that hardware is physical, while software is "non-physical". But if you think of it like this, you'd better be sure you know what you mean when you say something is "non-physical". If you don't, you'll be apt to jump to the craziest conclusions.

Actually, software is a "non-physical" abstraction of certain high-level properties of very complexly organized hardware (physical matter). If you really understand that, then you will also understand the essence of the relationship between mind and matter. (I'll return to this thread below.)

All abstractions, patterns and concepts are "non-physical", but this does not make them mysterious, or spiritual, let alone "primary over matter". Consider my favorite analogy of 5 pebbles on the ground, where each pebble may be viewed as a vertex of a pentagon. The pentagon shape is an abstraction which we derive from the relative positions of the 5 pebbles. The pentagon is not a physical thing in the sense that the pebbles themselves are. The pebbles are composed of atoms, but no additional atoms are required for the pentagon. If the 5 pebbles were removed or their arrangement were sufficiently disturbed, the pentagon would "disappear". The relative arrangement of the 5 pebbles is a characteristic of the 5 pebbles as a group (at a given time), but it is not a separate "physical thing" in addition to the 5 pebbles themselves.

(Nevertheless, this "non-physical thing"—the specific arrangement of physical things—may itself have physical consequences! A pile of pebbles will behave differently in an earthquake than will a row of pebbles. Or, to put forward a more suggestive example, a fairly simple robot with a TV camera eye could be set up to recognize pentagon-shaped patterns of pebbles and to behave in one way when it finds such a pattern, and another way when it does not. Do you see how this sort of example can help explicate that notorious historical mystery, the problem of how non-physical mind can possibly influence physical matter?! We will return to this thread too.)
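
To make this concrete, here is a minimal sketch in Python (the detector, its tolerance, and the robot "actions" are all invented for illustration; nothing here is a real robot controller) of how a machine might test for the pentagon abstraction and let that "non-physical" pattern steer its physical behavior:

    import math

    def looks_like_pentagon(points, tolerance=0.15):
        """Heuristic: five points form a rough regular pentagon if they sit
        at nearly equal distances from their centroid, and their angles
        around the centroid are spaced roughly 72 degrees apart."""
        if len(points) != 5:
            return False
        cx = sum(x for x, _ in points) / 5.0
        cy = sum(y for _, y in points) / 5.0
        radii = [math.hypot(x - cx, y - cy) for x, y in points]
        mean_r = sum(radii) / 5.0
        if any(abs(r - mean_r) > tolerance * mean_r for r in radii):
            return False
        angles = sorted(math.atan2(y - cy, x - cx) for x, y in points)
        gaps = [angles[i + 1] - angles[i] for i in range(4)]
        gaps.append(2 * math.pi - (angles[4] - angles[0]))
        ideal = 2 * math.pi / 5
        return all(abs(g - ideal) < tolerance * ideal for g in gaps)

    # The "physical consequence": one behavior if the abstraction is
    # present, another if it is not.
    pebbles = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
               for k in range(5)]
    if looks_like_pentagon(pebbles):
        print("pentagon found: wave the arm")   # hypothetical action
    else:
        print("no pentagon: keep scanning")

The pentagon "exists" nowhere in this code except as a relationship among five coordinate pairs; yet which physical action occurs depends on it.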

And yet we can talk about the relative position of the pebbles as if it were a physical thing. (Sometimes we even use phrases such as "the physical arrangement of the pebbles". In such a case what we are really doing is contrasting the actual arrangement of the pebbles with various imagined or hypothesized alternatives.) We can classify certain types of pebble-relationships as pentagons, or straight lines, or triangles. Once we have created such abstract concepts we can explore them without any further reference to the arrangements of the physical things from which they were abstracted. We can consider the properties of pentagons without thinking about pebbles or any physical things at all. In place of the vertex pebbles, we substitute in our thoughts the abstract concept of a mathematical point. But in the real world there are physical pebbles but no such things as "physical mathematical points". Points and pentagons are non-physical abstractions even if we do usually think about them in the same sorts of ways that we think about physical things.

It is true, furthermore, that once we get comfortable with making such abstractions, we can even conceive of some which are not directly derived from the relative positions of any particular set of physical objects. Thus we can conceive of a regular polygon of 12 sides even if we have never seen 12 physical objects positioned (more or less) in such a pattern. In a sense what we are doing here is a second kind of abstracting; we have abstracted concepts such as points and lines from physical objects, and then we are free to combine such non-physical abstract objects as we wish. We can draw conclusions and make discoveries about these new abstract objects. We can mentally visualize a regular dodecagon for example, and "see" that it approximates a circle.
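
That "seeing" can even be checked with a little arithmetic. Using the standard formula 2n·sin(π/n) for the perimeter of a regular n-gon inscribed in a unit circle, a few lines of Python show how quickly the polygon closes in on the circle's circumference 2π:

    import math

    for n in (5, 12, 100):
        perimeter = 2 * n * math.sin(math.pi / n)
        print(n, round(perimeter, 4), "vs circle", round(2 * math.pi, 4))
    # A regular dodecagon (n = 12) already comes within about 1.1 percent
    # of the circumference of its circumscribed circle.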

The world of mental abstractions can take on "a life of its own". We can compare different types of abstractions, see how they are related, explore their properties, mentally modify them, and so forth. Albert Einstein once remarked that he had arrived at the theory of relativity by "visualizing...effects, consequences and possibilities" through "more or less clear images which can be 'voluntarily' reproduced and combined."[8] These abstractions can become as real to us as the physical things from which they (or their components) were generated. The concept of a unicorn is as real to us as the concept of a horse. We can manipulate either concept with equal ease, and relate either to other concepts. This is why it is so easy to postulate the physical reality of abstract conceptions which do not actually exist. Once you form the conception of a unicorn, it is easy to imagine that unicorns might exist somewhere. It is even rather easy to jump to the invalid conclusion that unicorns must exist somewhere! (If you bring up narwhals here, or one-horned goats, you are purposely missing my point!)

A unicorn is a chimera, that is, a beast mentally patched together from parts of real animals. God is a chimera too. Once you come up with abstractions like being powerful, it is easy to come up with extended abstractions like being all powerful. You patch together extended attributes (that no real entity has or could have), such as all powerful, all knowing, and all good, and—voilà!—you have the Christian God.

An aside: Interestingly enough, in this case the three extended conceptions are not even logically compatible with reality! As Epicurus pointed out before Christianity even arose, the existence of evil in the world precludes the existence of a god that is simultaneously all good, all knowing, and all powerful. The attempt to "prove" the existence of God and the attempt to somehow resolve this "problem of evil" have been the two main goals of Christian theology over the past 2,000 years. Of course no "progress" whatsoever has been made toward either goal.

In the case of unicorns we can reasonably believe that they do not exist, because we can see how such chimeras originate in people's minds. But you cannot absolutely prove that unicorns don't exist without searching the entire universe, presumably. However, in the case of the Christian God you don't have to search anywhere at all in order to prove that he cannot possibly exist. The very concept of the Christian God is logically incoherent (given that evil exists, which it certainly does).

No real physical being can be all powerful or all knowing either. You can know, without bothering to search anywhere, that any god which is said to have either of these "properties" cannot possibly exist. The only sort of "god" that could possibly really exist would have to be a physical being with powers and knowledge greatly exceeding those of present-day humans; e.g., an extraterrestrial alien and no real god at all.


8.  Neurophysiologists have now established that there are physical changes in the brain, specifically the creation and/or deletion of neural connections, that correspond to at least some mental events. In fact, an item about some recent developments in this area just appeared in last night's newspaper. At the annual meeting of the Society for Neuroscience, Michael Merzenich of the University of California, San Francisco, reported experiments showing that learning produces physical changes in the brains of monkeys. "His findings suggest new evidence that the brain has a 'plastic' quality. Researchers once believed that brain paths were fixed from birth.... Now, neuroscientists are seeing more evidence that the brain is continually being molded by experiences."[9] Eventually some type or other of physical changes in the brain will be found to occur for all mental events, though in many cases the physical changes may be more or less transitory. (So far this is just the scientific assumption and guiding principle of neurological research. But it is well on the way to becoming established fact.) Thus when you learn something new, or recognize a face, or whatever, there are some physical changes in your brain.

From this, it is possible to draw various invalid conclusions. Positivists and behaviorists jump to the conclusion that the mental events (insofar as they are to be mentioned at all) are identical to the physical changes. There is an element of truth to this—if it is interpreted in a more sophisticated way than its proponents have in mind (or should I say, "than they have in brain"?). I will refer back to this sort of simple-minded ("simple-brained") mind-brain identity theory later. For now, I want to focus on a related, but less sweeping conclusion, which is also invalid. The fallacy I wish to expose is this:

  1. Hardware is physical; software is non-physical.
  2. There are physical changes in the brain "identical to", or (more plausibly) at least corresponding to, every mental event.
  3. Thus mind and mental events are at bottom physical.
  4. "Therefore", the mind/brain is not analogous in this fundamental way with computer hardware/software. The mind is not "the software that runs on (in) the brain".

The problem with this argument, the reason that it is fallacious, is that its proponents don't really understand what software is. They don't really understand what it means to say that software is "non-physical".

Ultimately, all software has a physical basis, even if we do reasonably think of it as "non-physical" in contrast to hardware—which is undeniably physical. The programs and data running on a PC, for example, consist of electrical pulses, which are moving groups of electrons—physical things. Non-running programs and data are usually recorded in the form of physical modifications to the polarity of the magnetic material on the hard disk, or floppy disk, or backup tape. Of course, software can be kept in other ways, such as in the pattern of physical pits in plastic on a CD-ROM platter, or even as ink on paper. But all software has some kind of physical basis. (Even computer software that we humans are only thinking about writing already has a more or less sketchy material basis, though so far only in our brains.) There is no such thing as software which is not encoded physically in some way. There cannot conceivably be any such thing!
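
For the skeptical reader, here is a small illustration in Python (the file name is arbitrary): one and the same trivial program exists below as text characters, as bytecode in memory, and, once written out, as magnetic or electronic states on a disk:

    # One "non-physical" program, three physical encodings.
    program_text = "print(40 + 2)"

    code_object = compile(program_text, "<example>", "exec")
    print(code_object.co_code)        # the raw bytecode bytes in memory

    with open("example_program.py", "w") as f:
        f.write(program_text)         # now also encoded in the disk's physics

    exec(code_object)                 # prints 42; electrons do the work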

Thus the conclusion that mind is not software, because mental events are physically encoded in the brain, fails for the simple reason that software is also physically encoded in the hardware of the computer. The analogy between mind/brain and software/hardware is far, far deeper than its opponents comprehend.


9.  But if all software ultimately has a physical basis, what does it really mean to say that it is "non-physical" in comparison with hardware? What the heck is the real difference between software and hardware?? In the final analysis, it is merely a distinction drawn for our own convenience in conceptualizing the operation of a complex machine of a certain type. Software is part of the computer system just as hardware is, but it is an abstract conception of part of that system based on its logical function rather than on its physical embodiment.

There are intermediate cases between software and hardware, which are often called "firmware". Part of the operating system in a PC is contained within a silicon chip, rather than on the hard disk with most of the programs. Occasionally the BASIC interpreter program is also put on a chip, rather than in conventional magnetic storage media. It doesn't matter to the logical functioning of the system; one way is as good as another. Firmware is used because chips tend to be faster than magnetic media, and because chips are more difficult to modify. (Some expensive proprietary software is only distributed as firmware in an attempt to keep people from copying it.) Moreover, there are even some kinds of chips which you can reprogram (EPROMs, etc.), which further blurs the distinction. Yet another intermediate category between the usual software and hardware is what is known as microcode.[10]

It is entirely possible to construct a computer that has no software whatsoever (in the usual sense). In fact, the very first "giant brain" digital computers did not need or use any software! They were "reprogrammed" by changing the hardware, by actually replugging the old-style telephone exchange connections between the physical components (relays, tubes and subassemblies of these). You can also build primitive digital computers with kits from Radio Shack that you "program" in more or less the same way. Reprogramming a computer in this way is essentially the same as rebuilding it.

Needless to say, programming computers through such cumbersome physical modifications was slow, difficult, and error-prone work. So better ways were found, ways that allow us to reprogram through the physical pressure of fingertips on typewriter keyboards. We are still making physical changes somewhere (in magnetic domains out on the hard disk, for example), but it doesn't seem like it! We are not thinking about magnetic patterns of physical materials at all, but rather about the logical flow of the program, arithmetical processes, the tasks of various subroutines, and so forth. We are thinking at a higher level, in terms of abstractions.

At bottom, when we reprogram a computer in this easier way we are still rebuilding it; we are still changing the physical structure of the computing system as a whole. Ordinarily we don't think of it this way, nor should we; but when you analyze the situation you will see that that is what it amounts to.

When we program, it doesn't seem like we are actually just modifying the physical world in small but definite ways, nor is there any reason why it should seem like it at the time! It would be ridiculous to be thinking at the level of the physical changes which are taking place out on some hard disk, because that would prevent us from thinking at the level of abstraction which is necessary for any kind of complicated programming. Abstraction is necessary in order to think about many kinds of complicated things, including computer systems (and human beings!). If we could not think about computers in terms of logical processes, arithmetical operations, subroutine functions and—more importantly—in terms of program goals and purposes, then we could not make or use computers at all.

Computers are able to operate according to logical principles (whether in the form of hardware or software) because logic itself can be modeled or embodied in physical systems. Not only that, but logic can only be modeled, reflected, utilized—or even just thought about—in systems with a physical basis. (Such as human beings.)


10.  "But Scott, what about virtual machines!! How could you, as a former VM systems programmer forget about such things? Don't they prove that software does not need to have a physical basis?"

Nope, they don't prove that at all. I hereby offer a prize of $1,000,000 to the first person who can present to me a functioning virtual machine that does not run ultimately on some kind of physical hardware! (I should note that it is perfectly reasonable to offer a prize larger than you can possibly pay out, in a situation where there is no possibility at all of ever having to pay it out!)

No matter how many layers of virtual machines there are, the software that runs on the top machine, as well as the software that makes up each virtual machine itself, has a physical basis. Conceptually, a virtual machine is something different from an ordinary computer; at bottom it is really no different at all, since all computing machines are physical systems. All software in any computing system is just some part of that system, viewed abstractly in terms of its functional role within the system. The functional role of the software that comprises a virtual machine is conceptually different from that of the other software; that's why it is useful to make this distinction. Specifically, a virtual machine is part of a computing system that functions as if it were an entirely independent computing system (even though it is not). To understand how a computing system that involves one or more virtual machines works, you must of course understand this specific functional role of the virtual machine software.
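
A toy example may make the layering concrete. Below is a minimal stack-machine interpreter in Python (invented for illustration; it is nothing like a production virtual machine system). When it runs, there are at least three "machines" stacked up: this toy machine, the CPython virtual machine interpreting it, and the physical processor underneath them all; and every layer, however "virtual", is encoded in the physics of the bottom one:

    def run(program):
        """Interpret a tiny stack-machine 'instruction set'."""
        stack = []
        for op, arg in program:
            if op == "PUSH":
                stack.append(arg)
            elif op == "ADD":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "PRINT":
                print(stack.pop())

    # "Software" for the toy machine: compute 2 + 3 and print the result.
    run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PRINT", None)])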

The existence of virtual machines in some computing systems, and even just the possibility of virtual machines, should bring home to the contemplative mind that at bottom there is no difference between software and hardware except in terms of how we comprehend such systems. (But that of course is the whole point of the distinction!)

(I will save any further comments on this topic until I get around to discussing Daniel Dennett's important theory of consciousness, which is based in part on the concept of virtual machines.)


11.  When you program a computer you are always working at some level of abstraction. To some degree you must always think in terms of the program's goal, the necessary subtasks, and so forth—even if you are forced by primitive programming languages to break these things down even further into arithmetical functions, elemental logical processes, and the like. The better the computing system, and thus the better the programming "languages" or techniques available, the easier it is to keep thinking only at the highest level of abstraction appropriate to your programming goal.

Software is more than just the part of the computer system that is easiest to modify, easiest to reprogram. It must be thought of more abstractly in terms of its functional role within the system as a whole. To begin with you may think of software as a series of instructions to the whole system (including both the hardware and the rest of the software). This is a step forward; if you get this far you can see why it is conceptually useful to distinguish software from hardware even though software does have a physical basis, and even though there is no sharp line between the two in contemporary computing systems.

[This is as far as programmers in business data processing centers usually get. That is why so many of them keep insisting that "no computer can possibly think; if you know anything at all about computers you should know that they just follow directions". This is an incredibly naive view which only reflects the narrowness of their own conceptions and programming practices. Viewing software as simply a long series of extremely simple instructions is like viewing a book as "simply" a long series of alphabetic letters and ignoring the higher-level significance of these 26 simple letters (and a few more characters such as commas and spaces) when they are arranged to form words, sentences, and extended discussions.]

However, even this level of abstraction can and must be transcended. Some of the newer programming languages (such as C++) are designed so that you can begin to think in terms of "objects" and messages, rather than in terms of sequences of instructions. The trend in the evolution of programming languages is in the direction of ever higher abstraction, because as programs become more and more complex, programming itself can only be done with increased abstraction. But even programs written in assembler language must also be planned, constructed and understood at a higher level than simply elemental machine instructions. We must think in terms of the functions, tasks, purposes and interrelations of the organized groups ("routines") of these basic instructions, as well as how to build such organized groups out of the basic elements.
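
As a rough illustration (the bank-account example is invented, and Python stands in here for an object-oriented language like C++), the same trivial job can be written at both levels:

    # Level 1: an explicit instruction sequence, one elementary step at
    # a time.
    balance = 100
    balance = balance + 50      # deposit
    balance = balance - 30      # withdrawal
    print(balance)

    # Level 2: an object that receives "messages"; the instruction-level
    # detail is hidden inside the methods.
    class Account:
        def __init__(self, balance):
            self.balance = balance
        def deposit(self, amount):
            self.balance += amount
        def withdraw(self, amount):
            self.balance -= amount

    acct = Account(100)
    acct.deposit(50)
    acct.withdraw(30)
    print(acct.balance)         # same result, higher-level description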

To understand a computer at the hardware level you cannot focus your attention merely on the physical functioning of specific logic gates; you must primarily focus on how these gates are organized into groups to perform specific jobs. [But note well that the mind/body "problem" can even be said to exist at the level of a single logic gate! Is a logic gate a physical thing, or an abstract "logical" or conceptual thing? It is both. Most fundamentally it is a physical thing; but looking at it abstractly, that is, looking at its function within the computer, it is a (primitive) logical calculator which happens to be embodied in one particular physical device, rather than in another.] In the same way, to understand the brain you cannot limit your attention to individual neurons; you must instead focus on the complex interactions of assemblages of neurons, and higher organizations of these first level organizations.
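
A short sketch may help here (Python is used merely as notation). The nand function below captures only the logical role of the gate; relays, vacuum tubes, or transistors could each "be" this gate physically, and the abstraction does not care which. Organizing such gates into a group then yields a higher-level function, in this case a half adder:

    def nand(a, b):
        """The abstract role of a NAND gate, however it is embodied."""
        return 0 if (a and b) else 1

    def half_adder(a, b):
        """NAND gates organized into a group with a higher-level job:
        adding two one-bit numbers."""
        n = nand(a, b)
        total = nand(nand(a, n), nand(b, n))   # XOR built from NANDs
        carry = nand(n, n)                     # AND built from NANDs
        return total, carry

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "=", half_adder(a, b))   # (sum_bit, carry_bit)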

Likewise, to understand computer software you must think in terms of purposes, goals, tasks, and sub-goals and sub-tasks as coded in various routines and sub-routines in the software (which in turn may be—but need not always be!—encoded in the form of "instructions"). In the same way, to understand the mind you cannot limit your attention to the lowest level of mental abstractions (such as most of the basic pattern categories—straight lines, vertices, etc.—many of which arise from perceptual processing in the eye portion of the eye-brain system), but must focus primarily on the higher level purposes, goals, tasks, their sub-goals and sub-tasks, and the various interrelationships among all these higher-level units.

The only way to really understand even relatively simple software is to jump to a higher level of analysis than that of the elemental logical processes from which the program may ultimately be constructed. In the same way, to understand the human mind, we must jump to the level of analysis where things are described in terms of agencies (in Minsky's terminology), and the functions, tasks, and interactions of such agencies. We must jump to the level of Minsky's "society of mind".

To understand the mind you must focus on the abstractions (patterns, concepts, etc.) which "emerge" from the workings of the many more-or-less physically distinct computers of the brain, and on how these abstractions interact with the outside world, with the rest of the human body, and with each other.

Although a Universal Turing Machine can model any computer or system of computers, including the whole human brain, nobody can fully understand how the brain works, and what mind is and how it is related to the brain, simply by understanding the concept of a Turing machine. The elemental logical processes underlying every Turing machine are at the wrong level of description for understanding such things.
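
For concreteness, here is a bare-bones Turing machine in Python (the bit-flipping "program" is arbitrary). That something this elemental can in principle model a brain is precisely why it tells us so little about minds:

    def run_tm(tape, rules, state="S", pos=0):
        """Execute Turing-machine rules: (state, symbol) ->
        (symbol to write, move L/R, next state)."""
        while state != "HALT":
            if pos == len(tape):
                tape.append("_")        # extend the tape with a blank
            symbol = tape[pos]
            write, move, state = rules[(state, symbol)]
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return tape

    rules = {
        ("S", "0"): ("1", "R", "S"),     # flip 0 to 1, step right
        ("S", "1"): ("0", "R", "S"),     # flip 1 to 0, step right
        ("S", "_"): ("_", "R", "HALT"),  # blank cell: stop
    }
    print(run_tm(list("10110"), rules))  # -> ['0', '1', '0', '0', '1', '_']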

[One note of caution with respect to "emergent" properties mentioned above: The word 'emerge' may suggest to some that mysterious new non-physical entities may develop from matter but nevertheless no longer need to be tied to the existence and interrelationships of the various parts of that matter. Actually thoughts outlast the thinker only when they become physically embodied elsewhere (such as in a book or in the head of another thinker).]


12.  Have you ever played around with the cellular automata game of "Life" on a computer?[11] This fascinating game displays patterns on the video screen, patterns which can sometimes move and evolve, almost as if they were living.

Patterns can change! That's something obvious, but still important; it's something to think about. Remember that patterns are a kind of abstraction, and though they have a physical basis, they themselves are not what we think of as physical entities. So is it "mysterious" then that these "non-physical" things, these patterns, change and evolve, and even "reproduce" themselves in some cases? Does this show that there might possibly be non-physical living things, perhaps even some non-physical living things which do not have any physical basis? Does this show that maybe ghosts and demons and gods might exist after all? (Does it even imply that such patterns themselves might have or be "souls"?! This seems to be the extreme to which Plato went with his idealistic theory of "forms".)

No, absolutely not! How do patterns change? They change due to some underlying physical change (such as to the position of the pebbles at your feet, or to the electron flow to the pixels on a CRT screen). Even when we humans think about a changing pattern, there are some corresponding physical changes going on in our brains. Patterns, like software, always have some kind of underlying physical basis. Every abstract entity has some kind of underlying physical basis! To fail to see this is to fail to understand abstraction.
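
Anyone who wants to watch this happen can run a few generations of "Life". In the sketch below (a standard "glider" pattern on an unbounded grid), no cell ever moves; individual cells merely switch on and off, and yet the pattern visibly migrates across the grid:

    from collections import Counter

    def step(live):
        """One generation of Conway's Life: a cell is alive next turn if
        it has exactly 3 live neighbors, or 2 and is alive already."""
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(5):
        print(sorted(glider))
        glider = step(glider)
    # After four generations the same shape reappears, shifted one cell
    # diagonally. The "thing" that moved has no atoms of its own.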

Even fantasies (like the gods) and logically incoherent "things" (such as "round squares") have an underlying physical basis, in our brains. This does not mean that gods and round squares really exist, but only that the abstract ideas of them are possible only because a corresponding physical structure for those ideas has been constructed in our brains. Hence the quaint religious notion that "God exists within you" has an amusing validity to it, of sorts! Of course an idea of a thing is not the same as the thing itself. And thus the physical basis of the idea of a thing is not the same as the physical reality of the thing itself.

[At this point I can't resist repeating one of Jack Handey's "Deep Thoughts": "If God dwells inside us, like some people say, I sure hope he likes enchiladas, because that's what he's getting!"]

So whenever any pattern changes, moves, develops, evolves, reproduces—or whatever—there is always a physical process at work at bottom. It may not seem like it, and we certainly do not normally wish to focus on that underlying physical process. Except when they are in the workshop (perhaps), even TV engineers watch people on TV screens, not glowing phosphor pixels! We abstract patterns, and movement of patterns, etc., from glowing patches of matter, but it is not the glowing matter itself that we are interested in at the time. We look at the matter, but we "see" (comprehend) the abstract content of the matter, not the matter itself. The whole point of building such devices as TVs is so that we can watch the real (or imaginary) objects portrayed at a distance through the medium of first electromagnetic waves, and then abstract patterns and representations on the cathode ray tube.

Software is like a pattern in many ways. Both are abstractions. Both can change, and even change themselves, in some cases. Both, when sufficiently complex, can evolve, develop, and so forth. We create both for the express reason that they are abstract entities. But both have an underlying physical basis. And when there are changes to either software or patterns, whether these changes are caused by "themselves" or by "external influences", there is always an underlying physical process which causes and ultimately explains those high-level abstract changes.

And as with software, so with mind.


13.  I remarked in section 7 that a pattern—despite being a "non-physical", abstract thing—can nevertheless have physical consequences. How so? Simply because the underlying physical basis for the pattern exists as part of an evolving physical system. When we say that a robot sees the pattern of a pentagon and moves its arm accordingly, we are looking at it from a high-level point of view which is probably most appropriate. But of course we could look into the matter further (pun intended!), and analyze the situation at a very low level, exclusively in terms of physical matter and physical forces working on the physical matter, and omitting all reference to patterns and other such abstractions. In that case we could describe the same "behavior" of the robot (albeit, in bewildering complexity) in terms only of physical matter and physical processes, and there would be no "non-physical" thing involved which causes the physical change.

We say a pattern can have physical consequences because this is the way we conceptualize a situation where a complex physical process—part of which we abstract as the initial pattern itself—develops into a new stage (part of which, perhaps, we can abstract as a new pattern). There is nothing wrong or "invalid" about such conceptualizations! (In fact they are necessary in order to comprehend complex processes.)

Whenever there is a "non-physical" thing, of any kind (pattern, software, mind), which we think of as causing a physical change, this jarring characterization of the situation is ultimately only an artifact of the high-level point of view we are taking of the process. (This is the truth that positivists sense, but can't formulate correctly. Even my formulation here has a "positivistic aura" (!) about it, and therefore requires further explication below.)

To say that a "non-physical" thing like a pattern, or like software or mind, can cause a physical change, is to take a special point of view; a special point of view which at one and the same time is both enlightening and potentially misleading. Such a point of view is enlightening because at this level of abstraction we can hope to comprehend the whole process. But this special point of view is philosophically misleading if we misconstrue the nature of such abstractions, and imagine that they do not have an underlying physical basis, or even that such abstractions "are the deeper reality" while "matter is a myth" (as a current book title has it).

To understand complex processes it is necessary to think in terms of abstractions based on the roles and functions of the various material parts of the physical system underlying the process. But this constructed world of abstractions can then seem to be somehow more real, more fundamental in some mysterious way, than the complex physical entities and processes which underlie it and give rise to it. And in fact there is one sense in which such abstractions are more fundamental than the underlying physical complex that gives rise to them in our minds: from the point of view of comprehending the physical process, the non-physical abstractions are essential and primary. But nevertheless, such abstractions are still derived from physical reality, and can only be generated within other very complex chunks of physical reality (human brains and their functional equivalents).

Ironically, the practical necessity of looking at the world from higher perspectives (which involve extensive abstraction) tends to lead people to misunderstand the nature of the world at the very most basic level. We need to understand the world around us if we are to live in it and survive. A major part of that is to come to understand a lot about one of the very most complex things we have encountered in the world so far—namely, human beings ourselves (and specifically, the human mind and brain). When we humans first tried to understand ourselves and how we function, we were not up to the task. (We are still not fully up to the task, but we are rapidly getting there.) In particular, we had to use abstractions, and other mental devices, long before we could possibly understand the true nature of such things.

I am reminded here of a wonderful remark—of unknown origin—that my friend Rich Swanson passed on to me: "If our brains were simple to understand, we wouldn't be able to understand them." Since understanding is a mental phenomenon (with a physical basis, of course) this remark could just as well be stated as: "If our brains and minds were simple to understand, we wouldn't be able to understand them."

We must constantly fight against the primitive tendency to mystify abstraction!

Because of this tendency, many of the first reflections about the basic nature of the world were fundamentally flawed. These flawed, idealist notions were seized upon, codified, and perpetuated by various philosophers (notably Plato) groping to understand the world (and also seeking ways to stabilize class society through ideological mass manipulation). Once created, these idealist ideologies were then used as the intellectual foundations for ruling class institutions such as churches and schools. Consequently, what started out as mere naive views has ended up as deeply ingrained ideology and entrenched ideological institutions which indoctrinate the population.

Philosophical idealism has become institutionalized in class society, and cannot be deposed as the dominant ideology until capitalism and all its vestiges have been destroyed. This is ironic, because the scientific basis does finally exist now for understanding the nature of mind and its relationship to matter, at least in its essentials. (But then, of course, the scientific and technical basis for creating a communist society also exists—which still doesn't mean that it is easy to do.) At present only a relatively few materialistically inclined intellectuals will be able to break through the idealistic strait-jackets we are all forced into from infancy. Even most scientists, including most computer scientists and neurophysiologists, cannot fully shake their idealist indoctrination.


14.  "So, Scott, you are saying that mind is merely an epiphenomenon of matter, and that really there is only a purely physical process going on in the brain. Really only matter exists, not mind." My reply: You still don't quite get it!

The classic example of an epiphenomenon, of course, is Ptolemy's monstrous contrivance, his picture of epicycles and epi-epicycles for the orbits of the sun and the planets, all designed to save an obsolete conception of the solar system with the earth fixed at its center.

Is the pentagon pattern in the pebbles at your feet "merely" an epiphenomenon? Is it some kind of "illusion"? Or is the pattern really there? Of course it is really there! (What in hell would you mean when you stare down at 5 pebbles at your feet arranged in the shape of a pentagon, and then remark that "the pentagon is not really there"?!)

Typically, idealist accusations that materialists view mind as a "mere" epiphenomenon of matter are a parody of our position. You see, an "epiphenomenon" is usually viewed as some kind of "artificial" or "phony" construct or interpretation which obscures the real situation. Thus to say that we (dialectical) materialists view mind as an epiphenomenon of matter is usually to imply that we think of mind as artificial or phony or unreal. We don't.

Of course it is possible to use the word 'epiphenomenon' in a very different way, as just meaning something like "a secondary phenomenon", or even "a higher-level or emergent phenomenon". If that is all that is meant by it, then materialism truly does view mind as an epiphenomenon of matter. But I wouldn't put forward our position in these terms because it would likely be regarded as a denial of the existence of mind.

Mind does exist! It exists in exactly the same sense that software exists, or patterns, or other abstract things. But it does not exist as a mysterious entity independent of physical matter—just as these other abstractions don't.


15.  "But then you are at least saying that mind can be reduced to matter. You are a reductionist." My reply, once again: You still don't quite get it.

Whether you can call the (dialectical) materialist position on mind and matter "reductionist" or not depends of course on exactly what you mean by the term. Like epiphenomenalism, it does have an innocuous sense. If all you mean by reductionism here is that mind has an underlying physical basis (like software), and that mind cannot exist divorced from matter (just as software can't), then I have no objection whatsoever. But the problem once again is that 'reductionism' is usually a loaded term, which incorporates a positivistic bias. Thus when the dialectical materialist view is attacked by idealists as "reductionist", it usually amounts to setting up a straw man to attack.

Reductionism, in one popular sense, is the absurd idea that anything can be understood (at least "logically" and "in principle", if not in actual practice) in terms of the relationships and forces operating between its most fundamental components.

[Aside: Actually it is an open question whether there even are such things as "the most elemental components" of any physical object or process. Are these components supposed to be molecules? Or atoms? Or protons, neutrons and electrons? Or quarks and leptons? Or...what?! I am torn on this issue, and bothered by it. Dialectics makes me think there is no such "bottom level"; materialism makes me think there must be. So far, scientific investigation keeps uncovering new levels, each of which is proclaimed in turn the "most fundamental". Then a few years later, these "fundamental" particles are shown to be composed of "even more fundamental" particles. After a while, the whole concept of a "fundamental particle" starts to become suspect.]

In point of fact, it is impossible to understand many complex entities and processes in such terms. Even though quantum electrodynamics is the underlying physical theory which in some sense "explains" both automobiles and sewing machines, I challenge anybody to ever explain the difference between an automobile and a sewing machine in terms of QED!

It is always possible to "dispense with" any high-level abstraction when describing the world. But this is not the same as saying that high-level descriptions are truly reducible to lower-level descriptions. You can "dispense" with the high-level description in the sense that you can use a different kind of description instead. But this is not to say that you have thereby intelligibly "translated" the higher-level description into a lower-level description.

There have in fact been a number of impressive successes where particular scientific theories have succumbed to a reductionist program, in one sense of the word. Thus the specific laws in physics which govern the relationships between pressure, temperature and volume of a gas (Boyle's Law, for example) have been shown to be derivable from the more fundamental kinetic theory of gases (statistical mechanics). And it is now claimed that all of chemistry can be explicated and understood in terms of quantum electrodynamics, and that therefore chemistry has been "reduced" to physics. There is no reason to doubt that this kind of reductionism will find further successes within physical science, especially physics.

There have been, and will no doubt continue to be, examples of successful "reductions" within other areas of science too, such as biology, linguistics, psychology, and social science. But it will never be possible to reduce all of science to physics! (The name that some contemporary physicists use for their program of unifying quantum mechanics and relativity theory is the height of arrogance: the "Theory Of Everything", or "TOE".)

Many high-level scientific laws and principles in biology, for example, will never be reducible to physics. Can you imagine the absurdity of trying to express the theory of evolution in terms of quarks and gluons, for example? Or how about expressing Grimm's Law (the systematic consonant shifts that occurred in the evolution of the Germanic languages) in terms of some eventual TOE! A laughably ridiculous notion! Genuine social science too (i.e., historical materialism and its extensions) will never be reduced to physics or even to biology.

Of course some principles of biology and even physics are relevant in social science, just as they are in linguistics and other sciences. But such sciences can never be wholly (or even largely) reduced to physics.

The higher level principles even within many physical sciences cannot really be reduced to physics. Thus the theory of plate tectonics in geophysics cannot possibly be reduced to quantum mechanics and relativity theory, whether or not these two are eventually combined into some unified theory. I even have some serious doubts about the claim that all the high level laws of chemistry can be reduced to QED. Certainly in practice they cannot.

Karl Popper once said that "All science is cosmology, I believe."[12] If all this means is that cosmology relates to everything else, since it is the science of the development of the universe as a whole, then this is unobjectionable. But most likely, Popper is guilty of an invalid sort of reductionism here. Any statement that all sciences are "really" one particular science (physics, cosmology, or whatever) is ridiculous if taken literally. In dialectical terms, it fails to recognize the particularity of contradiction, and the possibility of emergent phenomena which can only be intelligently explained by scientific laws couched in terms of those emergent phenomena.

Even in dealing rationally with functionally discrete portions of the physical world, reductionism is usually out of place—or even totally goofy. Many machines, for example, are so complex that you can't understand them except in terms of their parts which perform various kinds of tasks and functions, all contributing to some overall function. An automobile is so complicated that its operation and purpose would be hopelessly incomprehensible to anyone who tried to describe it solely in terms of the molecules that go to make it up. That low level of description and abstraction is entirely inappropriate, and useless in this case. A much higher level of abstraction is necessary to describe the principles by which an automobile works, how the transmission functions, and so forth.

The principles of physics, and in particular quantum electrodynamics, are in a sense the "ultimate" explanation (perhaps!) for the various interactions among the atoms and molecules of the brain, just as they are the ultimate explanation for the various interactions among the atoms and molecules of an automobile. But no one can understand how a brain works in terms of quantum mechanics any more than they can possibly understand how an automobile works in terms of such principles. This is why Roger Penrose is so absurdly off base in his book The Emperor's New Mind when he suggests that we can never understand the mind except in terms of unknown principles at the level of quantum mechanics or below! This is moving in precisely the wrong direction; instead of looking for scientific principles at a higher level of abstraction than molecules, at the level of the complex organization of molecules, he is off looking for them at a lower level of abstraction. Only a brilliant idealist mathematician could be so stupid!

Just one more illustration of the idiocy of "explanatory reductionism": Even if it is possible "in theory" to describe the movement of my cat in terms of fundamental particles and fundamental forces, it is far more helpful to note that the cat is moving toward the bowl of milk "because it is hungry". Multiple layers of reductionist explanations (in terms of physics, or chemistry, or cell biology)—no matter how true they may be—are inappropriate and worthless if one wishes to intelligibly discuss the behavior of cats.


16.  The mind is a real thing in the same sense that a computer program is a real thing. The fact that neither is conceptually a physical object does not mean that either is "a phantom spirit", or inexplicable or mysterious. Both, at bottom, have a physical basis, the one in hardware, the other in the brain. It is ordinarily only useful and necessary to point this out in the course of combating absurd idealist and mystical notions about the mind. Conceptually, software is best viewed as contrasted with hardware, as a "non-physical" abstraction based on its function within the computer system. In exactly the same way, the mind is best viewed as contrasted with the physical brain, as a "non-physical" abstraction based on its function within the human computing system.

It is not necessarily a mistake, nor "loose talk", nor "unscientific terminology", to speak of the mind instead of the brain. Sometimes we wish to speak of the brain, sometimes we wish to speak of the mind. Both are equally scientific spheres of discourse, just as hardware and software are. When speaking of thoughts, ideas, memories, and the like, we should likewise continue to speak of the mind. To try to discourage or abandon all talk about the mind and mental events serves no useful purpose, and will only tend to perpetuate the ignorance and confusions that already abound. Think what it would mean in the computer sphere if no one were allowed to talk about software!

It is true that religious people, and idealists of all kinds, will try to mystify the concept of mind, confusing it with silly religious notions of "soul", "spirit", and so forth. But the best response to this is not to revert to some simple-minded, naive, mechanical materialism, whether positivist, behaviorist, or whatever, but rather to properly explicate mind, its true nature, and its physical basis.


17.  Back in January 1981, Jerry Fodor, a linguist/philosopher at MIT, published an article on "The Mind-Body Problem" in Scientific American. In this article he explained and defended a point of view known as functionalism, and contrasted it to various other theories of the mind (or its "absence"). The particular version of the dialectical materialist theory of mind that I am championing might be characterized as functionalism. However, the problem once again is that the word 'functionalism' can be given either a materialist twist, or an idealist twist. Fodor gives it an idealist twist.

Fodor contrasts functionalism with two traditional materialist philosophies of mind:

  1. behaviorism (and what seems to me to be a trivial variation of behaviorism known as "logical behaviorism"), which maintains "that all talk of mental causes can be eliminated from the language of psychology in favor of environmental stimuli and behavioral responses"[13], and
  2. "the central-state identity theory", which claims that "mental events, states and processes are identical with neurophysiological events in the brain"[14] (This is the most popular specific point of view within the school that I call "positivism".)

Nowhere in the article does Fodor really adequately define functionalism. But the article does say that functionalism "is the philosophy of mind based on the distinction that computer science draws between a system's hardware, or physical composition, and its software, or program."[15] Thus, at least by implication, Fodor seems to be arguing along the same lines as I am, that mind is to software as brain is to hardware, or even perhaps that mind is the software that runs on (in) the brain. Functionalism is a good name for this theory since software and its component parts are essentially characterized by their function within the computing system as a whole. And as I said earlier, software is in essence a part of a computer system viewed not in terms of its physical manifestation (even though that must in fact exist), but rather viewed abstractly, in terms of its functional role.
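
To make this concrete, consider a minimal sketch in C (my own toy illustration; the names and details are invented, not taken from Fodor's article). A "counter" is defined purely by its functional role within the system, and is then given two entirely different realizations; code that uses a counter neither knows nor cares which one is underneath, just as the functionalist says a mental state is defined by its role rather than by its particular embodiment:

    #include <stdio.h>

    /* A counter defined only by its functional role:
       something you can bump and then read back. */
    typedef struct Counter {
        void (*bump)(struct Counter *self);
        int  (*read)(struct Counter *self);
        int  state;                /* realization-specific storage */
    } Counter;

    /* Realization 1: an ordinary integer increment. */
    static void bump_int(struct Counter *c) { c->state += 1; }
    static int  read_int(struct Counter *c) { return c->state; }

    /* Realization 2: a tally kept as set bits -- a completely
       different embodiment of the very same functional role. */
    static void bump_bits(struct Counter *c) { c->state = (c->state << 1) | 1; }
    static int  read_bits(struct Counter *c)
    {
        int n = 0, s = c->state;
        while (s) { n += s & 1; s >>= 1; }
        return n;
    }

    int main(void)
    {
        int i;
        Counter a = { bump_int,  read_int,  0 };
        Counter b = { bump_bits, read_bits, 0 };
        for (i = 0; i < 3; i++) { a.bump(&a); b.bump(&b); }
        printf("%d %d\n", a.read(&a), b.read(&b));  /* prints: 3 3 */
        return 0;
    }

The two counters are physically quite different yet functionally identical. This "multiple realizability" is precisely the feature of software that functionalism seizes upon.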

Fodor makes a number of interesting and correct observations about functionalism. He says for example that "functionalism is not a reductionist thesis" since "it does not foresee, even in principle, the elimination of mentalistic concepts from the explanatory apparatus of psychological theories."[16] Without saying so directly, he at least implies that functionalism naturally predisposes us to view the mind and mental processes at the appropriate levels of abstraction, the level at which "the generalizations of psychology are most naturally pitched."[17] He notes that "the analogy between minds and computers actually implies the postulation of mental symbols. There is no computation without representation."[18] He follows this up with the observation that the functionalist viewpoint is spurring the advance of psychology, and that "the science of mental representation is now flourishing."[19]
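
That last point, "no computation without representation", is easy to illustrate with a deliberately silly C sketch of my own (in the spirit of the cat mentioned earlier; all the names are invented). The program cannot compute over hunger itself; it can only compute over a representation of hunger:

    #include <stdio.h>

    /* A toy "mental symbol": hunger represented as data.
       All of the computation operates on this representation. */
    typedef struct { int hunger; } CatState;

    const char *decide(const CatState *cat)
    {
        return cat->hunger > 5 ? "move toward the milk bowl"
                               : "stay curled up on the couch";
    }

    int main(void)
    {
        CatState felix = { 8 };            /* a hungry cat */
        printf("%s\n", decide(&felix));
        return 0;
    }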

But just as notable is the idealistic nonsense with which Fodor then proceeds to burden his version of functionalism. He recognizes that "functionalism seems to capture the best features of the materialist alternatives to dualism. It is no wonder that functionalism has become increasingly popular."[20] But he also claims that functionalism itself "is neither dualist nor materialist." Actually his version of functionalism is clearly dualist while mine is clearly and necessarily materialist.

[There are two fundamental points of view about mind and matter. Materialism says that matter is primary. (Mechanical materialism says that only matter really exists; dialectical materialism says that mind exists too, but only as a high-level property of matter when it is organized in a sufficiently complex and appropriate way.) Idealism says that matter is not primary; most versions say that mind is primary; some even say that only mind really exists. Dualism says that neither matter nor mind is primary; that both exist "independently" of the other, and that neither arises out of the other—which makes it impossible to understand how these "two totally independent things" can relate to each other. From the point of view of dialectical materialism, dualism is just a variety of idealism.]

A big problem with Fodor is that the only kind of materialist theories he is evidently aware of are naive, mechanical versions such as behaviorism and positivism ("identity theory"). He claims that "In materialist theories the mental is not distinct from the physical; indeed, all mental states, properties, processes and operations are in principle identical with physical states, properties, processes and operations."[21] This just isn't so of dialectical materialism! We say that the mental is indeed conceptually "distinct from the physical", even though the mental must necessarily have a physical basis.

(The rampant misconceptions, or even total ignorance, of dialectical materialism among scientists is a gross fact of bourgeois society. Far from considering the dialectical materialist theory of mind, and then accepting or rejecting it, few contemporary psychologists or philosophers are even aware that this alternative exists! It is both frustrating and amusing for those of us who approach science and philosophy from this perspective to see all the needless confusion and floundering by these folks who just can't seem to escape the hold that idealism has on them. One hundred and fifty years ago Marx and Engels were clearer and more essentially correct about the relationship of mind and body than virtually all of these "leading edge" researchers are today!)

Fodor correctly notes that "In the functionalist view the psychology of a system depends not on the stuff it is made of... but on how the stuff is put together."[22] But he concludes from this that even systems of "disembodied spirits" or "spiritual energy" might conceivably have mental states!

What is wrong with that? The first thing that is wrong with it is that there cannot exist any such "systems" as "disembodied spirits". There can be no disembodied mental phenomena of any kind whatsoever! There can be no "disembodied patterns", for example. Can there be a pattern of a pentagon at your feet if there are no pebbles, or anything else down there, positioned in the shape of a pentagon? Of course not. (And, once again, I should remind you that even the idea of such a pattern which may exist in your mind is only possible because of the physical structure and state of your brain at that moment.)
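
The same point can be put in computing terms (again a minimal C sketch of my own devising). A program can contain the "pattern of a pentagon" as an abstraction, but only because particular bytes are physically present in memory; dump those bytes and you are looking at the physical basis of the abstraction:

    #include <stdio.h>

    int main(void)
    {
        /* The "pattern" of a pentagon: five vertex coordinates,
           roughly evenly spaced around a unit circle. */
        float pentagon[5][2] = {
            {  0.00f,  1.00f }, {  0.95f,  0.31f }, {  0.59f, -0.81f },
            { -0.59f, -0.81f }, { -0.95f,  0.31f }
        };
        const unsigned char *p = (const unsigned char *) pentagon;
        int i;

        /* Dump the raw bytes: the physical embodiment without
           which the abstract pattern would not exist at all. */
        for (i = 0; i < (int) sizeof pentagon; i++)
            printf("%02x%s", p[i], (i + 1) % 8 == 0 ? "\n" : " ");
        return 0;
    }

And of course those bytes are themselves just charge states in physical circuits, which is the whole point.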

Fodor notes that functionalism can easily explain "both the causal and the relational character of the mental."[23] But he doesn't fully understand what he himself is implying here: if the mental decision to move my arm can lead to the actual physical movement of my arm, it is only because there must be a material basis for my mental decision that is part of an evolving causal, physical system. Thus functionalism truly explains "the causal nature of the mental" only if it is a materialist functionalism.

The functionalist view, in its present computer-age form, derives from the analogy of mind/brain to software/hardware, as Fodor recognizes. But as I said in section 6, even many people who are very knowledgeable about computers—as Fodor undoubtedly is—do not correctly understand what software is! They see it as non-physical, or immaterial, and fail to see that it must have an underlying physical basis of some kind. Consequently, their very apt analogy between mind/brain and software/hardware ends up misleading them in some serious ways, even though it helps clarify matters in other respects. For an analogy between A and B to really enable you to understand A, it must not only be a fully appropriate analogy (in all the relevant respects), but you must also already correctly understand B.


18.  Now that I have argued at length for my thesis that mind/brain is "completely" analogous to software/hardware, it is time to back off just a tiny bit! (You knew this had to be coming, didn't you!) For mind to be completely analogous to software, for mind to be the software that runs on the brain, we must employ a slightly different concept of mind than that which is used in ordinary present-day conversation. We must make the everyday concept of mind more scientifically correct.

'Mind', like most words, has a variety of senses. But beyond that, the central scientific meaning of 'mind' is in flux due to scientific progress. All scientific terms must be made more definite, and to some extent must be continually redefined in the course of elaborating and deepening the scientific theory of which they are a part. Every time we find out something a little bit new about the nature of matter, for example, we thereby somewhat change the meaning of the word 'matter' in physics.

Freud changed the meaning of 'mind' when he introduced the notion of the unconscious. (Freud wasn't actually the first person to come up with this basic idea, but he is the one who brought the term into general use.) Freud's concept of the unconscious was itself bizarrely overladen with his generally absurd psychoanalytic theories, and must be cleansed of that. Nevertheless, the recognition that mind and consciousness are not coextensive, that many mental processes and states are actually unconscious, was a tremendous step forward. It allowed us to look at mind in a qualitatively new way.

Before the broadening of the word 'mind' to include unconscious mental processes and states, the assertion that mind is the software that runs on the brain would simply have been false. (Of course the concepts of software and hardware didn't exist back then, so the theory could not have been formulated in those terms anyway.) The assertion is essentially true now. But it will only become completely, 100% true when the word 'mind' is adjusted a bit more in light of the scientific theories which use the term.

The most important step remaining in the scientific transformation of the concept of mind is the complete purging of the religious and other idealist baggage that presently infests it. As I mentioned before, this cannot be fully accomplished while capitalism still exists. Even if a dialectical materialist view somehow manages to win out in the restricted sphere of cognitive science, much of the general population in bourgeois society will continue to have confused, semi-religious, notions of what the mind is.

The use of scientific terms in everyday speech always lags behind the evolution of the meaning of these terms within science itself. But this is especially the case in bourgeois society because of the pathetic scientific illiteracy of the masses. And of course the ruling class is especially concerned to keep any scientific knowledge supportive of the proletarian world view and revolutionary politics away from the masses.

But there is also another step necessary in the scientific transformation of the word 'mind', besides demystification. It must come to be recognized that mind is really the sum total of all the functional abstractions that we (or any intelligent being) can possibly make about the human brain and its parts. This means that the concept of mind needs to be (and will be, I am convinced) further broadened. As we learn more and more about the physical structure of the brain, and its complex processes, we will need to constantly modify our concept of mind accordingly. Our concept of mind must inevitably become broader, more complex, more profound, and much less focused on the question of consciousness.

But a bit of counterpoint to the counterpoint: we do already know many important things about the mind, especially with regard to its higher, conscious aspects. We are not throwing out all we know and starting over. Instead, in accordance with the Marxist theory of knowledge, and the scientific method—which incorporates that epistemology (whether you recognize this or not!)—we are refining and extending what we already know. Usually this happens gradually, a small step at a time. But occasionally these small advances accumulate to the point where a qualitative leap forward is possible.

So far there have been three easily identifiable leaps forward in the scientific conception of the nature of mind. First, and by far the greatest: the fundamental dialectical materialist view of mind discovered by Marx and Engels (and refined by Lenin, Mao and others). Second, the extension of the concept of mind to include the unconscious, the popularization of which is primarily due to Freud. And third, the analogy between mind/brain and software/hardware, which is a natural result of the development of computers. (If you want, you can credit people like Turing and von Neumann for at least setting the stage for this last advance.)


19.  You may have noticed that until the last couple paragraphs I have not said much about consciousness in this paper. That is because consciousness, though the most important component part of mind from our introspective human standpoint, is no longer all that critical to the concept of mind itself. (Thanks again, Freud.)

Consciousness is a small subset of mind, though certainly a subset of special importance and worthy of great attention. I intend to focus on the whole question of consciousness in a sequel to this essay, paying attention to various prominent theories such as that of Daniel Dennett. The explication of consciousness is important in its own right, but it is also important as part of the substantiation of any underlying theory of mind. And as Jerry Fodor notes, "Many psychologists who are inclined to accept the functionalist framework are nonetheless worried about the failure of functionalism to reveal much about the nature of consciousness."[24] But as we will hopefully see in the sequel, this is not really a problem; consciousness is not that hard to explain.

For now I will just note that no one can get very far in understanding consciousness unless they start with a dialectical materialist conception of mind and mental phenomena. Once you have got that down, consciousness is a piece of cake!



—Scott H.
   12/4/92 (with slight additional editing on 3/11/93 & 6/24/98)


Notes

[1] Marvin Minsky, The Society of Mind (NY: Simon & Schuster, 1986).

[2] Information on the history of the concept of psyche and 'soul' is taken from Calvert Watkins, The American Heritage Dictionary of Indo-European Roots (1985), and W. L. Reese, Dictionary of Philosophy and Religion (1980).

[3] Mary Baker Eddy, Science and Health with Key to the Scriptures, quoted in the Christian Science Monitor, Oct. 6, 1992, p. 17.

[4] John McCrone, The Ape That Spoke: Language and the Evolution of the Human Mind (NY: William Morrow & Co., 1991), p. 140.

[5] Ibid.

[6] Ibid.

[7] Ibid., p. 140.

[8] Einstein quoted in Scientific American, Dec. 1984, p. 106.

[9] Tom Abate, "Mapping How Brain Learns New Skills", San Francisco Examiner, Oct. 28, 1992, p. A-7.

[10] For a discussion of the precise nature of microcode, see: David A. Patterson, "Microprogramming", Scientific American, March 1983.

[11] The cellular automata game of "Life" was invented by the British mathematician John Horton Conway. Martin Gardner introduced the game to a broader audience in one of his "Mathematical Games" columns in Scientific American. Since then a number of books have been written which discuss the game at length, such as: William Poundstone, The Recursive Universe (NY: William Morrow, 1985). There are numerous PC versions available, or you can write your own.

[12] Quoted in John D. Barrow & Frank J. Tipler, The Anthropic Cosmological Principle (NY: Oxford, 1986), p. 367.

[13] Jerry A. Fodor, "The Mind-Body Problem", Scientific American, Jan. 1981, p. 114.

[14] Ibid., p. 116.

[15] Ibid., p. 118. This quote is from the caption to an illustration accompanying the article, and may have been written by either Fodor or by the editors of Scientific American. In any case, it fairly represents the view presented in the article itself.

[16] Ibid., pp. 118-119.

[17] Ibid., p. 117.

[18] Ibid., p. 122.

[19] Ibid., p. 123. One example of this flourishing "science of mental representation" can be found in another article, "Turning Something Over in the Mind", by Lynn A. Cooper and Roger N. Shepard, Scientific American, Dec. 1984.

[20] Ibid., p. 119.

[21] Ibid., p. 114.

[22] Ibid., p. 114.

[23] Ibid., p. 118.

[24] Ibid., p. 122.



— End —

