As we saw in the previous entry of this series, philosophers of mind usually distinguish between what (after David Chalmers) they call the ‘easy’ and the ‘hard’ problem of consciousness. The ‘easy’ problem refers to explaining the functioning of the brain: how does it manage to do things that seem to require some higher or lower degree of consciousness, like recognizing faces, words, or social rules, or generating sentences or appropriate physical movements? This functioning, as well as its explanation, could in principle be described in purely ‘objectivist’ terms, and explaining the role of consciousness in it would proceed by doing something like what we saw in the previous entry: identifying the ‘physical signatures’ of conscious activity, and studying the causal connection between those physical processes and the rest of the brain activity involved in the phenomena we want to explain.
Instead, the hard problem (HP) would be “the question of how physical processes in the brain give rise to subjective experience, the way things feel for the subject… It is these phenomena that pose the real mystery of the mind” (Chalmers, 1995 [1]). According to this and other authors, what makes the problem really mysterious is that subjective or psychic phenomena (i.e., how experiences ‘appear’ to the subject), on the one hand, and objective, material, physical facts, on the other, seem to be totally heterogeneous. Obviously, this heterogeneity has led millions of people through the ages, including very clever philosophers and scientists, to accept that we humans are composed of two completely different kinds of ‘stuff’: body and soul, so to say, or, to use Descartes’ more technical terms, res extensa and res cogitans. (Curiously, Immanuel Kant did not accept this distinction but considered that both physical and psychic events belong to the phenomenal realm, though he distinguished what he called the ‘external sense’ and the ‘internal sense’ as our ways of perceiving these two aspects of experience; I’ll come back to this idea at the end.) But, as is also well known, dualism has an insurmountable problem that already tormented Descartes: how can two completely different substances causally interact? And what need is there of the subjective to explain the physical processes in the brain? Because of difficulties like these, most people working on these matters today do not accept any form of dualism (save if we count as such some varieties of ‘strong emergentism’, e.g., Hasker, 1999 [2]; John Eccles would be a prominent exception amongst neuroscientists), but try to understand consciousness as a kind of supervenient or emergent property of the brain’s physical and chemical functioning.
Of course, saying that consciousness ‘emerges’ out of the neurons’ activity seems to be no answer at all to the HP. One argument by Chalmers is particularly pressing: we can imagine somebody with an organism exactly like ours, with a brain functioning exactly like ours, but who nevertheless has nothing like consciousness (i.e., they might be ‘zombies’). So, consciousness does not seem to ‘emerge’ out of brain activity in the same sense in which we can say that the liquidity of a volume of water ‘emerges’ out of the interactions between its molecules at certain values of its thermodynamic properties. Hence, the question is: why is it that we (or, at least, I) have consciousness, if it is conceivable that our brains could work exactly as they do without our really being aware of anything?
Stanislas Dehaene’s answer is that Chalmers’s labels are totally wrong: it is his ‘easy’ problem that is really difficult, for the complexity of the brain, and our technical inability to observe the simultaneous behaviour of each one of its billions of neurons in vivo, entail that, at least within the near future, we will be unable to understand many aspects of the brain’s functioning. What is really ‘hard’, I would add, is to invent a theoretical, conceptual framework that allows us to efficiently ‘translate’ the language of the brain’s ‘hardware’ (i.e., excited or inhibited neurons and synapses, neurotransmitters, neuronal networks, and the like) into the language of psychology (intentions, emotions, thoughts, beliefs, etc.), in which it seems more natural to describe the behaviour, actions and thinking of human individuals. Instead, says Dehaene, “the hard problem just seems hard because it engages ill-defined intuitions… The hypothetical concept of qualia, pure mental experience detached from any information-processing role, will be viewed as a peculiar idea of the prescientific era, much like vitalism – the misguided nineteenth-century thought that, however much detail we gather about the chemical mechanisms of living organisms, we will never account for the unique qualities of life” (Dehaene, 2014 [3]). Likewise, he continues, empirical brain science will “keep eating away” at the HP until it vanishes. Or, as Massimo Pigliucci has recently put it [4], the HP is simply based on a category mistake: subjective experience, that something I have but a zombie physically identical to me wouldn’t, is not the kind of thing of which an explanation can be offered.
An explanation, after all, consists in showing that an inferential, deductive link exists between two sets of propositions (the one to be explained, and the one explaining it). Hence, both the ‘explanans’ and the ‘explanandum’ (to use these old-fashioned philosophical terms), and in particular the explanandum, the thing to be explained, are not ‘subjective experiences’ but linguistic representations of what we want to explain; by a linguistic representation we mean an utterance or sentence whose truth conditions can (at least) be intersubjectively established. Hence, whatever can be considered to deserve an explanation must be something objective, and subjective experience is, by definition, the unexplainable ‘residue’. But it is not ‘unexplainable’ in the sense of being a ‘mystery’, i.e., something about which an explanation is ‘needed’ but none can be discovered, but in the sense of being something completely different from ‘what enters into an act of explaining’.
Stated somewhat differently: defenders of the HP seem to assume that the ‘objective’, ‘third-personal’ character of the propositions contained in any possible ‘physical explanation’ of consciousness pertains not to the fact that these propositions are propositions, and hence need to be expressed in some intersubjectively usable language, but somehow to the ‘facts’ these propositions are about. But the truth is that these propositions (i.e., the sentences describing the physical functioning of the brain) tell us absolutely nothing about the ‘ultimate ontology’ of those physical or biological processes; in particular, they do not tell us that this ‘ultimate ontology’ is ‘objective’ or ‘third-personal’. For imagine a world that were isomorphic to our ‘physical’ world, but in which the ‘ultimate’ stuff (whatever this could mean) consisted of ‘first-personal experiences’, i.e., a world with the same mathematical structure as our world, but composed of ‘conscious events’ (this type of idealism has been suggested by some authors, e.g., Stapp, 1993 [5]). Or imagine a world also mathematically equivalent to our own, but composed of some ‘substance’ that bears no resemblance to ‘matter’ nor to ‘mind’. The scientific theories describing our universe simply offer no way of distinguishing between all these mathematically equivalent worlds, which obviously means that it is just impossible for us to know which of these worlds we ‘really’ live in… or, more likely, that even the question of whether we live in one or another is meaningless.
Lastly, any being (even a zombie) capable of describing and explaining her world would do so by means of linguistic propositions that are, like ours, totally agnostic about the ultimate stuff of the world; but at the same time she would confidently assert that she perfectly notices the difference between how things subjectively appear to her and how things really are (e.g., she may notice that sticks seem to bend when submerged in water). I think this means that the difference between how things are and how things seem is better understood in Kant’s style, as we saw above: as a difference between modes of describing our intersubjective experience of the world (descriptions from which the most we can demand is that they deliver stable and usable regularities among as many of them as possible), rather than as referring to totally distinct ontological properties.
1. Chalmers, D., 1995, “Facing up to the problem of consciousness”, Journal of Consciousness Studies, 2: 200–219.
2. Hasker, W., 1999, The Emergent Self, Ithaca: Cornell University Press.
3. Dehaene, S., 2014, Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts, New York: Penguin.
4. Pigliucci, M., 2013, “What Hard Problem?”, Philosophy Now.
5. Stapp, H., 1993, Mind, Matter and Quantum Mechanics, Berlin: Springer Verlag.