(Continuing from Part I)
In Chalmers’ landmark paper, Facing up to the Problem of Consciousness (1995), he develops his argument slowly. Before reaching the point where he builds his theoretical stronghold and challenges conventional neuroscience to breach it, he includes several theoretically neutral expressions of the core puzzle of consciousness, worded with appropriate reserve and caution. After we have explained the obvious cognitive functions of the brain, he says, there may be a further unanswered question; we will probably not be satisfied with a simple functional account of consciousness; we might find that the standard, direct methods of neuroscience are insufficient; functional theories of cognition will not automatically account for experience. And so on.
For instance, he writes:
“…even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?” (Chalmers, Facing Up to the Problem of Consciousness, 1995)
Even here, there is nothing much to debunk, because Chalmers has not yet reached his main conclusion, the one that I intend to challenge.
Well before Chalmers’ paper reaches his radical conclusion, that consciousness cannot be accounted for within physical reality, the faux caution of the early sections is dropped. He stops arguing that there “may” be a further question after explaining the functions of the brain; he states that it is inevitable that there is such a question – and also inevitable that this question will remain fundamentally unanswerable within the scope of conventional science, not just in terms of current theoretical conceptions of the brain but in all future functional accounts, as well.
“At the end of the day, the same criticism applies to any purely physical account of consciousness. For any physical process we specify there will be an unanswered question: Why should this process give rise to experience? Given any such process, it is conceptually coherent that it could be instantiated in the absence of experience. It follows that no mere account of the physical process will tell us why experience arises. The emergence of experience goes beyond what can be derived from physical theory.”
“I suggest that a theory of consciousness should take experience as fundamental. We know that a theory of consciousness requires the addition of something fundamental to our ontology, as everything in physical theory is compatible with the absence of consciousness. We might add some entirely new nonphysical feature, from which experience can be derived, but it is hard to see what such a feature would be like. More likely, we will take experience itself as a fundamental feature of the world, alongside mass, charge, and space-time.”
(Chalmers, Facing Up to the Problem of Consciousness, 1995.)
It is this blanket pessimism that needs to be debunked, this fundamental commitment to mystery, along with Chalmers’ proposed solution, which steps outside the scope of science into unfalsifiable domains that – by his own admission – don’t even address the core source of the puzzlement.
All of this is unnecessary, because the criticism that Chalmers wants to level at all conceivable physical accounts of consciousness is an invalid criticism. The Core Problem can be solved, functional explanations are quite appropriate, and we don’t need to extend science in radical directions or take on consciousness as an inexplicable base ingredient of the universe.
Or so I will argue.
My answer to the innocent Core Problem of Consciousness is not radical, and it is not entirely novel, though I will be presenting it with an emphasis that will seem unfamiliar to many readers. I propose that consciousness is essentially what the cognitive neuroscientist Michael Graziano has called an attention schema. It is a cognitive model we use for navigating our own cognition and, in particular, for managing attentional resources. Attention is a complex cognitive phenomenon, but it can be considered to serve as a cognitive focal point, much like a spotlight, that determines where and how we allocate our cognitive resources. In proposing that consciousness is an attention schema, Graziano is essentially suggesting that consciousness provides a simplified map of cognitive options, and that the primary purpose of the map is helping us manage attention. In this view, consciousness is a representation, a cognitive construct functioning as a simplified serial user interface that enables the brain’s navigation of a vast, complex, parallel cognitive architecture that would otherwise be unmanageable and incomprehensible. Brains are the most complicated entities we know – at the purely physical level, considered as machines, they are so complex that we could not possibly manage them at all if we were forced to interact with them at the hardware level. But they nonetheless require management, because we constantly need to decide where to spend our cognitive resources in the face of potentially bewildering complexity. Graziano proposes that, in response to this challenge, we have developed a simplified interface that is within our cognitive reach, and we use it to decide where to spend our computational resources much as we might decide where to turn our eyes, direct our feet, or point our digital cursors.
And here is where thought bubbles potentially offer us a useful insight, even though they present us with a fundamentally false conception of reality. Consciousness, I propose, is represented to us by our brains as an ethereal, cloudlike entity; it offers us a simplified view of the brain’s activities and cognitive options, using visuospatial metaphors that are derived from and loosely match the surrounding world. The cartoonists have in turn chosen to present this model with the visual metaphor of a thought bubble, applying another round of simplification and abstraction.
The cartoonists are being fanciful, but our own brains beat them to it.
Of course, like any metaphor, the cartoonist’s standard convention for depicting the mind soon runs aground on critical dissimilarities. Consciousness is not represented overtly, and it is not inked in with hard, obvious edges that reveal its represented nature. It is represented as being borderless and invisible, as a nebulous field of mentality extending out to the horizon and beyond. And it is viewed from the inside, in a style reminiscent of a first-person computer game, with us and our thoughts in the middle of the cloud, and our heads all but invisible.
Despite these dissimilarities, I believe the bubble metaphor is useful for our discussion, because the main point stands: consciousness, like a thought bubble, is represented.
At the start of the next section – some fifty pages away – we’re going to extend the bubble metaphor substantially, using our imaginations to build an improved, holographic version of a cartoonist’s thought bubble. In anticipation of that exercise, consider the graphical representation of a player’s virtual environment in a first-person computer game. That software construct copies the visuospatial nature of our first-person experience, hiding the player’s head and depicting everything from a particular point of view. That particular perspective is a deliberate programming choice, and it feels appropriate when playing a computer game because it matches how we experience the world. In much the same way, we’ll build a 3D thought bubble, taking our own experiential situation as inspiration. We’ll be adding a mobile focus of attention that functions like a cursor, a tactile interface, a visual field, a soundtrack, a temporal dimension of sorts – and a pinch of blatant magic (a first approximation to the voodoo of the book’s title).
That might seem to be a lot of effort to invest in an obvious fiction, and it will still be a gross simplification, but it will serve as an intermediate step on the way to understanding consciousness itself.
Our upgraded holographic thought bubble will not even try to fit neatly into physical science (clue #1: the magic), and so some savvy readers might already suspect that the exercise will turn out to be a devious semantic trick to step around the Hard Problem. And yes, if we temporarily forgot that our holographic bubble was fictional, we would have generated an obvious straw man, a Slightly Hard Problem.
How does our magic bubble fit into physical reality?
The solution to this miniature mystery is suspiciously easy. It doesn’t have to fit into reality; it’s fiction, remember? Anything goes. We can use voodoo or vorpal swords or the dark arts of Voldemort.
Let me be up front, then. I will be defending a similar suggestion in relation to consciousness itself: it’s fiction. Or rather, less confrontingly, this book will propose a view of consciousness that I will call “virtualism”.
Virtualism is a variation on physicalism (or materialism), the philosophical view that the brain and mind are entirely dependent on physical reality, without requiring any mysterious extras or any deviations from the laws of physics. Physicalism is, essentially, the default view of standard neuroscience, so virtualism does not envisage major extensions to science; it is theoretically conservative. Virtualism merely advocates for a slightly different way of looking at things we already know.
The central claim of virtualism is that everything we access within our cognition is a representation, including consciousness itself, which is virtual in the sense that it is a representation without a direct, real-world referent.
Virtualism incorporates Graziano’s attention schema theory, but the discussion will be extended to cover other puzzling aspects of consciousness, such as qualia, which his theory does not directly address. Qualia, like consciousness-the-container, also have a partially virtual nature – which is one reason we can’t find them in reality as described by physicists.
The idea that consciousness is virtual will probably strike some readers as preposterous, and it would be easy to mistake it for the obviously false claim that consciousness does not exist, or that it is an illusion. Rest assured, that’s not the claim being made. Consciousness can be naturalised as a virtual entity that plays a useful functional role, so it clearly exists and it does not need to be dismissed as an illusion.
In this book, I will call this conception of consciousness the bulla cogitans, Latin for thought bubble. The bulla cogitans is not essentially different from Graziano’s concept of an attention schema, but the Latin term will enable me to refer to different components of the consciousness concept while disentangling the philosophical issues. For instance, I will be able to distinguish the represented cognitive construct, the bulla cogitans, from its underlying physical substrate, the bulla substrata. These are two different views of the same brain processes, bearing a representational relationship to each other that is broadly analogous to the relationship between the cartoonist’s thought bubble and the curved lines of ink on a page. Either the bulla cogitans or the substrata might be considered to be the attention schema, but they have different relationships with physical reality, so it is important to know which one is being referenced in any discussion.
These two entities need to be distinguished from a third notional entity, the bulla externa, which is a naïve projection of our private cognitive construct out into reality, where it does not belong. In the context of our ongoing visual metaphor, the externa is the magical floating essence that would have to exist to make the cartoonists’ depictions be about something real, lifting the representation from the virtual to the actual. Later, I will be arguing that many of the thorniest philosophical issues in this field involve conflations between these three conceptual entities, so having names for them is an essential first step. Don’t worry if, at this stage, these distinctions seem impenetrable or unnecessary.
Virtualism is based on the idea that the physical processes underlying consciousness are usually grasped from within a representational medium, a situation that lends itself to confusion. This mode of access means that there is naturally an element of cognitive distortion involved in our base conception of consciousness. We tend to think of consciousness in terms of how it is represented, rather than in a way that directly corresponds with its physical substrate. This distortion does not disconnect consciousness from reality: our cognitive grasp is slightly twisted, but we are grasping a genuine entity. That twist is something we should be able to understand; it involves a useful representational convention that is not fundamentally mysterious and not well characterised as an illusion. The only entity that is not real in this view is a literal instantiation of our most naive interpretation of consciousness as it seems to us, conceptualised as some sort of illuminated bubble of magic awareness centred on our heads, something external to the functional processes in our brain, a non-physical essence that we somehow sense without sensors.
This complex situation means that we have three different answers to the simple question: is consciousness real? The bulla substrata is straightforwardly real; it is ultimately responsible for everything we physically do as a consequence of being conscious. The bulla cogitans is merely a specific, representationally committed view of the substrata, so it is also real, but it is not quite what it seems to be. The only entity in all of this with no valid claim on reality is the fanciful bulla externa – the mysterious non-functional entity that has anti-physicalists like Chalmers rewriting the rules of reality to accommodate it.
In direct opposition to Chalmers’ framework, virtualism is based on the idea that the virtual construct of consciousness plays a vital functional role, and that the acts of introspectively pointing to it, wondering what it is, and deciding it is mysterious, are all functional acts, taking place in the physical world via mechanisms we can understand. We can even use a functional approach to understand Chalmers’ conviction that functional approaches are inadequate.
It will take a whole book to expand this sketch into a plausible theory, to explain the central claims of virtualism, and to deal with all the countervailing intuitions, including Chalmers’ famous claim that all functional approaches are doomed to fail. This opening mega-chapter, broken into eight mini-chapters, will merely describe key features in the theoretical landscape, making assertions and suggestions that will need to be defended with much more care later. It is likely that the idea of consciousness as a representation will not make much sense until those more complete treatments have been read, but the more detailed discussion will be much easier to follow if the overall picture has been laid out.
Throughout this book, I will be making two parallel arguments. Firstly, I will develop the case that we can understand consciousness (and qualia) in functional terms, provided we are satisfied with mere intellectual understanding and don’t mind living with some residual intuitive discomfort. Secondly, I will argue that Chalmers’ rejection of functional accounts is misguided, because it is based on a faulty conception of the mind.
When we contemplate the Hard Problem, I propose, we point our cognition at our own mind and we ask: what is this? Or we ask: how does this fit into physical reality? This act of introspective pointing is where our curiosity starts. Our cognition engages with something, and we wonder what that something is. Initially, the pointing stands in as a necessary substitute for a functional definition of consciousness, which seems curiously elusive and resistant to definition, just as Chalmers has noted. But introspection is itself a functional act, which therefore requires further characterisation before we can conclude that the target of the Hard Problem stands outside the reach of science. The target doesn’t even stand outside the reach of our own grey matter – we point to consciousness with our cognition whenever we consider any of these questions, because it is within cognition that consciousness exists in the form that we understand it. At the same time, consciousness exists as a neural substrate that is cognitively alien to us, and the divide between the familiar and the alien is best conceptualised as a representational divide, not a deep mystery requiring radical revisions to science.
Not everyone points at the same thing or in the same way during introspective consideration of the Hard Problem, but there are two common themes in our puzzlement, and two types of entities we point at. Typically, we point at consciousness itself, or we point to qualia. That means there are two sub-problems within the core puzzle of consciousness. For many authors in this field, these are essentially the same issue. The Core Problem (explaining the subjective nature of consciousness) is not distinguished from what might be called the Qualia Problem (explaining the subjective nature of qualia). What could be considered the Container Problem (explaining the background state of awareness) is largely forgotten.
Part of the difficulty in understanding qualia and consciousness is that the puzzle of irreducible qualitative contents (the Qualia Problem) is often conflated with the puzzle of explaining awareness (the Container Problem). One of the first ideas that needs to be debunked is the idea that explaining the resistance of qualia to derivation is the same as the task of explaining awareness. Unfortunately, this idea is built into much of the available vocabulary, including terms like “phenomenal consciousness”.
Usually, when we think of qualia, we think of consciously appreciated perceptual properties, so combining these issues is natural, but a combined approach means that we need to account for both aspects of the core puzzle at once, and it is difficult enough addressing either aspect alone. Drawing on our opening metaphor, I think we will need one explanation for the thought bubble, and a somewhat different explanation for the imagined red triangle inside it. Only when we have both will we be able to understand the combination. Many of the conflations and fallacies involved in the discussion of consciousness also apply to qualia, so there will necessarily be some overlap in the exploration of these issues, but there will also be points where consciousness and qualia require quite different treatments. They play different roles in cognition, and the conceptual difficulties faced in reconciling them with physical reality are not quite the same.
This approach is not just prevalent among fans of the Hard Problem. For instance, Keith Frankish introduces an online lecture series with the comment, “We used to think of consciousness as if it had special properties that make it a felt experience – qualia.”
We used to think? Who is “we”? Frankish is correct that some people believe that awareness arises when the special sauce of qualia is added to mere computation, but I think this is a highly unlikely explanation of what it means to be aware of something, and it involves an unhelpful conflation of different issues. The challenge of explaining awareness itself and the challenge of explaining the qualitative contents of awareness that resist derivation are not the same challenge. Consider, for instance, your belief that redness is more similar to orange than to blue; that belief is intimately tied to the apparently irreducible flavour of redness, but it was not conscious in your mind until you reached this paragraph. You could account for that belief in functional, scientific terms, but it is not intrinsically stored in your head in those terms. We could propose that the mysterious colours appear when you retrieve this knowledge from memory, but there is no evidence at all that their mysterious resistance to reductive explanation appears at the point of recall, and there are good reasons to think that the resistance does not directly depend on awareness. One could not derive an unconscious memory of what red looks like from a neural circuit diagram any more than one could derive a live, conscious appreciation of redness.
For brevity in this book, I will usually refer to the mysterious inner feel of the mind as “consciousness”, but generally, unless stated otherwise, this should be taken to mean phenomenal consciousness or subjective experience – at least until we can get better definitions in place. I do not automatically intend, thereby, to refer to qualia as well, but much of the discussion will apply concurrently to qualia, and qualia will need to be addressed in their own right, starting in the next mini-chapter.
The primary target of our pointing, consciousness-the-container, certainly seems nebulous and structureless and deeply mysterious, but the mere fact that we can point at it introspectively without employing any sensory organ to detect it is a strong clue as to its true relationship with the brain and physical reality. Unlike most other entities that we think about, consciousness is apparent to us without needing to be translated from external reality to cognition via any sensory transduction process, and apparently without needing to be built up by combining other ideas. We do not infer its presence, or derive it as a theoretical entity to account for our observations of external entities, as we did with mass, charge, and space-time. Despite being ethereal and indefinite, it also seems immediate and obvious. It’s not sensed or inferred. It’s just there. This suggests to me that it is essentially cognitive, and indeed that it is a representation, constructed in situ by processes that are themselves unconscious.
Because consciousness is represented in cognition – but didn’t get there by being imported via any sensory process or by being built up through inference from facts about the world – its relationship with physical reality is indirect, and therefore potentially confusing. Unlike the many mundane objects that come to be represented in our heads, such as chairs and laptops and apples, consciousness is a representation without any external non-cognitive referent.
And, therefore, it is virtual.
If this view of consciousness is correct, and consciousness is a virtual construct, then much of what has been written about consciousness misses the mark by a wide margin. Consciousness is not an emergent property that a brain magically achieves when its information processing becomes sufficiently rich and detailed; it’s not something non-physical hovering above our neurons, connected to them by as yet undiscovered psychophysical laws; it is not a base feature of the universe that waited billions of years in a brain-free state before human brains linked up to it through evolution; it is not a quantum perturbation in our synapses or our microtubules; it is not a phenomenological hum that the brain emits at certain frequencies; it is not the result of information persisting long enough in our cognition to register on some higher plane of reality; it is not something that automatically appears whenever a representational system includes a reference to itself (or even its self, though that is getting much closer).
Essentially, it’s a user interface. And yes, it’s like a bubble.