I find myself perennially surprised to see otherwise scientifically minded secularists suddenly throwing up their hands and pleading a sort of “mysterianism” (to borrow Dennett’s term) when discussion turns to the nature of consciousness. Now, it’s perfectly fair and accurate to say that our scientific understanding of what consciousness is and does is presently incomplete. I’m very skeptical, though, of any attempt by philosophers like David Chalmers to adjudicate from the armchair what science ultimately can and cannot explain, and I’m disappointed to see generally clear-headed secular thinkers like Sam Harris voicing similar sentiments lately.
Chalmers calls his position “naturalistic dualism.” He argues for an explanatory gap between properties rather than substances, and so his view is (at least ostensibly) monistic about substance. It’s a step up from Cartesian dualism, to be sure, but I think it’s still indefensible. More pointedly, I think it’s fundamentally incoherent.
Now, according to reductive physicalism, all facts at any supra-physical level of description are reliably fixed, ultimately, by physical facts, such that two entities identical in all their physical properties will be identical in all their supra-physical properties as well. According to this thesis, if I wanted to build a perfect replica of myself, I could in principle do so using only knowledge of the subatomic constituents that comprise me. By accurately specifying all these basic physical properties, I could expect my replica to instantiate all my higher level properties—metabolic, neurophysiological, phenomenal—automatically; no further specification required.
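The reductive thesis can be put compactly. As a sketch (my notation, not any particular author’s): let P(x) stand for x’s complete physical profile and M(x) for any supra-physical profile (metabolic, neurophysiological, phenomenal). Supervenience as a fact-fixing relation then says that physical duplicates are duplicates simpliciter:

```latex
% Supervenience sketch: P(x) = x's complete physical profile,
% M(x) = x's supra-physical profile. Physical twins are twins
% in every supra-physical respect as well.
\[
  \forall x\,\forall y\;\bigl[\,P(x) = P(y)\;\rightarrow\;M(x) = M(y)\,\bigr]
\]
```

The zombie argument targets the metaphysically necessary reading of this conditional: it asks whether a world satisfying our P-facts could nonetheless differ in its M-facts.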
To effectively attack this position, the nonreductivist must establish the conjunction of two claims:
(1). The phenomenal properties of consciousness are not reliably fixed, proximally or distally, by physical facts.
(2). The phenomenal properties of consciousness are information-bearing, such that possession of them facilitates or constitutes a form of knowledge. If we are to claim that there is some domain that is irreducible in a way semantically distinct from the way we might claim that the domain of fairies and unicorns is irreducible, then there must exist apprehendable facts about that domain through which we come to knowledge of it. Of course, a domain about which there are no facts cannot enter into a fact-fixing relation, but neither can it enter into a knowledge-having relation.
Chalmers’ “zombie argument” (beginning on p. 100 of The Conscious Mind) is a well-known attempt at establishing the first claim. The “knowledge” arguments of Thomas Nagel and Frank Jackson (“What Is It Like to Be a Bat?” and the “Mary’s room” thought experiment, respectively) are well-known attempts at establishing the second. I think, however, that there’s a fatal tension between these two claims such that one can only be secured at the expense of the other. We can see this most clearly by simply combining the thought experiments each of the above three authors has invoked to motivate their arguments (we’ll even throw in Searle’s “Chinese Room” for good measure):
Suppose I am the world’s premier bat expert; I know everything a non-bat could possibly know about bat brains and behavior. I’ve built a remote-controlled robotic bat that looks just like the real thing and is capable of the same full range of movement. A team of scientists—also bat experts—will subject both a real bat and my toy bat to a series of identical tests designed to probe the bat’s cognitive activity—its knowledge—as deeply as possible without simply opening it up to see if there’s a real brain there. The real bat goes first, and as it’s being subjected to all these tests, a recording is made of its brain states. When it’s my toy bat’s turn, I view this recording and (because I know everything a non-bat could possibly know about bats) use my remote control to translate the neural activity into behavior I expect to be identical to that exhibited by the real bat. Assume that the same tests are applied in each case and that the testing periods have identical temporal profiles (that is, all the events were spaced equally apart). I have not specified exactly what the tests are because that is largely up to your imagination; make the interactions as complex and protracted as you like. Now, I have complete knowledge of the bat’s neural activity, but the real bat (presumably) possesses knowledge that I lack—namely, first-person experience, what it’s like to be a bat.
The question: Will the scientists in principle ever be able to discern a difference between the two bats on the basis of this alleged knowledge asymmetry (note that this doesn’t require them to know which one is real and which one is fake; they simply must be able to find some cognitively relevant difference)?
That is only the first part of the thought experiment. Now imagine the same thing happening on Zombie Earth, a possible world in which qualia do not exist but which is identical in all of its physical facts to this world. The same question is asked of the scientists here.
A variety of permutations suggest themselves. With a little more imagination, we can make this a bit fairer to Searle’s Chinese Room by replacing both the bats and the scientists conducting the tests with intelligent bat-like aliens whose neural economies (and thus presumably qualia) are wildly incommensurable with our own. Here, I know everything a non-bat-alien can know about bat-aliens and remotely control a perfect replica of one, which will be tested by real bat-aliens. As before, the interactions are up to your imagination, so long as the discerned difference, if any, is due to the special knowledge the real bat-alien has which I (presumably) do not.
Now, if we are tempted toward the view that the possession of qualia constitutes or allows for real knowledge of some sort, then there should be some possible test or other that can pick out the difference this knowledge makes for the bat. But suppose we hold that the scientists can discern such a difference in the first scenario. Since Zombie Earth is physically identical to our world, its scientists must record exactly the same results there; yet ex hypothesi there are no qualia on Zombie Earth to ground the difference, so qualia-based knowledge cannot be what the tests are detecting. To insist instead that the zombie scientists would fail to discern the difference is to posit a physical divergence between the two worlds, violating the stipulation that they be identical in all their physical facts. Minimally, then, metaphysical supervenience (a fact-fixing relation) of qualia on physical properties seems to hold.
If, on the other hand, we deny that a difference is discernible on Earth, we can maintain a solely phenomenal difference between our world and the zombie world, but only at great cost. We bite the bullet not only on epiphenomenalism (and potentially panpsychism) but on a deep phenomenological skepticism, for we seem to have given up the view that there are any apprehendable facts about our qualia. To maintain that our possession of qualia still amounts to a kind of “knowledge” is to destroy the term entirely. The problem raised by this maneuver is twofold. First: how, if we are otherwise wholly physical beings, can we come to have knowledge of something that makes no physical difference in the world? Second: how, if we are otherwise wholly physical beings, can this knowledge not itself make any physical difference in the world (even in principle)? Put simply, taking Chalmers’ argument seriously seems to rob us of the ability to say with any confidence that we are not the zombies.
We could, of course, claim that the knowledge is stored in some nonphysical substrate. But that is substance dualism, which is no longer an attractive option in philosophy of mind. The alternative (if we are still determined to take a nonreductivist line) seems an even more bitter pill. We are faced with a kind of epistemic (and, in at least Searle’s case, semantic) solipsism, saying in effect: “I have this special kind of knowledge whose contents and very existence are necessarily confirmable only by me.” The fact that any alleged knowledge can be “secured” in this way (say, a devout Christian’s special, privately revealed knowledge of the divinity of Jesus Christ) should give pause to anyone—especially any secularist—tempted to argue along these lines.
I think the core problem with the nonreductivist position lies in giving an ontological interpretation to what is really an epistemic issue: mistaking a particular mode of knowledge access for a particular kind of knowledge content. The “neurophilosopher” Paul Churchland touches on this in his review of Searle’s 1992 book, The Rediscovery of the Mind:
The focal issue is Searle’s claim that mental phenomena are irreducible to the objective features of the physical brain. The sticking point here, according to Searle, is the subjective character of mental states, as opposed to the objective character of any and all physical states. In the face of this ‘rock-bottom’ divergence of character on each side of the alleged equation, how could mental phenomena possibly be identical with, or somehow constituted from, sheerly physical phenomena? They are as different as chalk from cheese.
The argument, let us admit, is beguiling. That is why it is famous. Searle is not offering us a new argument, but an old one, one made famous in the modern period by Thomas Nagel and Frank Jackson.
There is also a standard and quite devastating reply to this sort of argument, a reply that has been in undergraduate textbooks for a decade. On the most obvious and reasonable interpretation, to say that John’s mental states are subjective in character is just to say that John’s mental states are known-uniquely-to-John-by-introspection. And to say that John’s physical brain states are objective is just to deny that physical brain states have the hyphenated property at issue. Stated carefully, the argument thus has the following form:
1. John’s mental states are known-uniquely-to-John-by-introspection.
2. John’s physical brain states are not known-uniquely-to-John-by-introspection.
Therefore, since they have divergent properties,
3. John’s mental states cannot be identical with any of John’s physical brain states.
Once put in this form, however, the argument is instantly recognizable to any logician as committing a familiar form of fallacy, a fallacy instanced more clearly in the following two examples.
1. Aspirin is known-to-John-as-a-pain-reliever.
2. Acetylsalicylic acid is not known-to-John-as-a-pain-reliever.
Therefore, since they have divergent properties,
3. Aspirin cannot be identical with acetylsalicylic acid.
1. The temperature of an object is known-to-John-by-simple-feeling.
2. The mean molecular kinetic energy of an object is not known-to-John-by-simple-feeling.
Therefore, since they have divergent properties,
3. Temperature cannot be identical with mean molecular kinetic energy.
Here the conclusions are known to be false in both cases, despite the presumed truth of all of the premises. The problem here is that the so-called “divergent properties” consist in nothing more than the item’s being recognized, perceived, or known by somebody, by a specific means and under a specific description. But no such “epistemic” property is an intrinsic feature of the item itself, one that might determine its possible identity or nonidentity with some candidate thing otherwise apprehended or otherwise described. Indeed, as the two clearly fallacious parallels illustrate, the truth of the argument’s premises need reflect nothing more than John’s overwhelming ignorance of what happens to be identical with what. And as with the parallels, so with the original. Despite its initial appeal, the argument is a non sequitur.
(From “Betty Crocker's Theory of Consciousness”; all emphases in original)
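Churchland’s diagnosis can be rendered schematically (my formalization, not his). Epistemic operators create referentially opaque contexts, so the inference pattern the knowledge arguments tacitly rely on is invalid. Writing K_J for “John knows, under the given description, that …”:

```latex
% Invalid schema: a and b may be identical even though John knows
% a, but not b, under the description F. Epistemic contexts block
% substitution of co-referring terms, so Leibniz's Law does not
% apply to such "properties". (Requires amssymb for \therefore.)
\[
  \frac{K_J\!\left[F(a)\right] \qquad \neg K_J\!\left[F(b)\right]}
       {\therefore\; a \neq b}
\]
```

Both premises can be true while a = b; their joint truth reflects only John’s ignorance of the identity, exactly as in the aspirin and temperature cases above.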
I would put the point as follows: my inability to know what it’s like to be a bat reflects nothing deeper than the fact that my brain is not adequately configured to perform the particular cognitive operations that instantiate the bat’s phenomenal states. This doesn’t by itself tell us anything about whether or not those phenomenal properties are ultimately fixed by physical properties. From the simple neurobiological fact that I am unable to run the bat “software” (don’t take the metaphor too literally; I don’t intend a literal “symbolicist” construal of cognitive activity here), it doesn’t follow that the contents of that software constitute an ontologically distinct category inaccessible in principle except to the sole individual in which that software currently resides.
Let’s look at a more mundane case. Suppose another person and I are observing a painting, and I have access to some representation of both his brain states and my own. If I can see that these states are very similar, I can probably conclude that our phenomenal experiences of the painting are very similar as well.
A simple permutation of the above may prove more illustrative. Suppose our cognitive states are not very similar, but suppose I am hooked up to a machine which stimulates my neurons in such a way as to bring our states into isomorphism. I could then probably conclude that I at the very least have a good idea of what it’s like to experience the painting as this other person does. I find this conclusion much less controversial than its denial, which would take us back out into Chalmersian territory, with all the aforementioned problems therein entailed.
Now, say I were born cortically deaf and am attending a symphony with another individual. The same basic story applies: I have access to some sort of representation of both his and my own cognitive states. Because this other individual has neurophysiological processes devoted to audition while I do not, our states will not be completely isomorphic, and I will not be able to apprehend the entirety of his subjective experience of the symphony. In this case, though, we can quickly see that it would be faulty for me to conclude that the individual’s auditory qualia were in principle unknowable to anyone but him.
We don’t have to invoke such fanciful examples (though their logical possibility alone is sufficient to refute any a priori nonreductivism) to drive the point home. Consider your own left and right cerebral hemispheres. We know from patients with certain congenital diseases and patients who’ve had an entire hemisphere surgically removed that each hemisphere is a cognitively and consciously sufficient entity (at least insofar as we know that anyone besides ourselves is a cognitively and consciously sufficient entity). Yet we are not two mutually inaccessible first-person perspectives crammed together into one skull. Our right hemisphere processes visual information in our left visual field and our left hemisphere processes visual information in our right visual field, yet our visual phenomenology is perfectly integrated. This is because in normal humans the hemispheres are connected by a large mass of fibers called the corpus callosum, which makes the cognitive activities of each hemisphere accessible to the other (interestingly, patients in whom this structure has been severed or has failed to develop do seem to have something very much like two mutually inaccessible first-person perspectives crammed into one skull). And the phenomenal unification that results from this interaction is possible only if there are reliable fact-fixing relations between qualia and brain processes. If we take seriously Chalmers’ argument that phenomenal properties are metaphysically free to vary independently of physical properties, this unification becomes a deeply mysterious, if not downright miraculous, thing.
I am not the first to comment on the instability of nonreductive physicalism. For other arguments, see Kim (1989) and Melnyk (1991).
I've just been pointed to a paper by Tillmann Vierkant in which a somewhat similar combined thought experiment is proposed in order to illustrate the tension between Chalmersian and Jacksonian anti-reductionist arguments. The author sees this problem as far more injurious to Chalmers' case than to Jackson's, and posits (without defending or endorsing) what he calls "Common Sense Realism" as an option left open to the nonreductive physicalist. I may take this topic up in a subsequent post.