A Brief Account of Consciousness

Summary: Consciousness (experience, awareness) has long been a mystery. It has been called the “hard problem” because it has successfully eluded explanation for centuries. However, I believe that modern accounts have gone a long way toward dispelling the mystery. It seems that the key to understanding consciousness may be to regard our experience not as that of a discrete observer but rather as a model of our relationship to the external world. The perceiver is inherent to this model.

By way of example, we tend to think that when we see an external object we are inside our heads looking at some kind of image. Instead, it seems more likely that the object we experience is simply a description, a logical carrier of information about our relationship to that external object. We have a constellation of such descriptions, tied together by meta-descriptions. One such meta-description might be the feeling we have that we are inside our heads. We can act upon these descriptions, including reporting upon them. When we say “that ball is red” we are reporting upon information contained within the descriptions our brains compute from sensory input and stored concepts.

The world as we experience it is a virtual world, a description only as complex as it needs to be for us to act in the external world. It may not always represent the actual world, but to us, it IS the world.

***************************************************************

An enduring mystery is that of consciousness or awareness. We believe that we are aware in a particular way, that “we” are in our heads. This is because we have experiences and can talk about them. We see colours (e.g. red), we feel pain and we can be happy or sad. Strangely, however, we cannot really say what red, pain or happiness actually are – the best we can do is agree that we each see a red ball when a red ball is present. That ball seems to us to be out there, but the ball we experience obviously isn’t. It must somehow be in our heads, like a photograph or movie being shown for us on the screen of our minds. Someone, it seems, is inside our heads, in some kind of inner space, watching the movie of experience.

This someone is what we might think of as the soul or at the very least as an observer – an actual thing that is separate from yet co-existing with our physical body. But the problem with thinking this is that it is a non-explanation. If an internal observer “sees” our experiences, then how does this observer do that? Could there be an observer inside the observer? Maybe, but then we seem to need a further observer and so on. We can never bottom out into a true physical explanation of what the observer is.

The philosopher Daniel Dennett is famous for his dismissal of this kind of idea. He tells us that there can be no Cartesian Theatre, no inner screen on which the movie of experience plays for our observer. He takes the view that we are misled somehow, that our experience is in some way an illusion. He doesn’t mean by this that we are not having experiences – after all, we describe our experiences by what they signify (the red of the ball, for example). Experiences exist.

However, while experiences exist, does it follow that we are indeed inside our heads, contrary to Dennett’s claim? Or is it the case that, other than the brain doing stuff, there is nothing else happening? After all, as physical systems brains just do physical things, so unless something non-physical is happening we aren’t really in there. Perhaps we simply do not have a soul after all – maybe the truth is that no-one is home. I wrote about this in my short essay, “Do animals have souls?”, where my answer is no: no-one is really home and we do not have souls. I should add the caveat, though, that in one very real way there IS something it is like to be us. How does this happen if we agree that there can’t be an observer inside our heads?

The most likely explanation is that experience – the “what it’s like” – just is how brains compute (manipulate) information in order to produce behaviours. The idea that brains compute information is known as computationalism. There has been plenty of criticism of the idea, but it remains the dominant paradigm for explaining cognition and, by extension, consciousness. I tend to think that a fundamental capability of material universes is computation – the right kinds of systems can gain information by manipulating other information according to some rule (i.e. logic). Brains do this.

Consider a sort of paradigm example. Light is not really illumination or colour – it is a narrow frequency range of electromagnetic radiation reflected from or emitted by material objects. Sensory apparatus can detect light and use that interaction to gain information about objects. In this manner brains can use such information, along with a range of stored sensory and affordance data, to model the world and their relations with it. The things we “see” don’t really look like anything at all out in the world; what we see is entirely an abstract construction of our brains using the information rendered by the interaction between light and our eyes.

The usefulness of this kind of abstracted information is obvious – it facilitates behaviours. More complex behavioural possibilities are uncovered both by environmental conditions and by improved detection, processing and action capabilities. Just as physical forms are optimised by evolution to better suit changing environments, so too is the computation of information derived from interactions between the organism and its external and internal environments. In a very real sense, the “instruction set” for programming brains/nervous systems is derived evolutionarily over long stretches of time.

The end result is that what is in our heads is no more than the models we use to direct behaviours, and the objects in these models (for example, red balls and “me”) are more like organisational artefacts – they stand in for (represent) how we use information to direct internal activity to generate external behaviours. Our brains construct a virtual world using information sampled from the outside world.

Put another way, our experiences are not OF the world, they ARE the world. And in this world is a kind of control model – a sense of self that coordinates and modulates behaviour based on perceptual information prioritised according to behavioural goals. This self is in effect a control model that ties together the artefacts of these processes and enables us to observe, monitor and report upon progress. Michael Graziano has offered a compelling explanation of how attention mediates this control model in his Attention Schema Theory.

You can see from this that when I agree that consciousness is an illusion, I don’t mean we are not having experiences. Rather, I mean that our experiences are not really an inner being “seeing” and “hearing” things and having a genuine “self”. Sensory perceptions are abstracted informational objects – objects of organisation and process. The different sensory modalities simply wrap up the information in handy codes. A red ball (vision) contains information that is pared down for easier manipulation – it isn’t the state of all the bits of the brain but is rather an abstraction containing shape, colour, location, distance and so on, upon which we can undertake ongoing behavioural computations (e.g. how to catch the ball). The sound of a bell is the same thing but for sound waves (audition). The underlying brain cells are the same and they do the same things, but the information being manipulated is different, as are the affordances offered. I think J. Kevin O’Regan has explained this idea very well when he describes his sensorimotor theory in his excellent book “Why Red Doesn’t Sound Like A Bell”.
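The idea of a percept as a pared-down informational object can be caricatured in code. The following Python sketch is purely illustrative – every name in it is invented, and nothing here claims to model how brains actually compute. It shows only the logical point: millions of raw samples are discarded in favour of a handful of behaviourally useful attributes, and different modalities are just different “codes” handled by the same underlying machinery.

```python
from dataclasses import dataclass

# Invented, illustrative types: each modality wraps information
# in its own handy code, but the machinery (dataclasses) is shared.

@dataclass
class VisualPercept:
    shape: str
    colour: str
    location: tuple   # (x, y) in egocentric coordinates
    distance: float   # metres

@dataclass
class AuditoryPercept:
    pitch_hz: float
    loudness_db: float
    direction_deg: float

def abstract_vision(raw_pixels: list) -> VisualPercept:
    """Stand-in for the abstraction step: a huge raw input is
    reduced to a few attributes useful for action (e.g. catching)."""
    # A real system would segment, classify and localise;
    # here we simply pretend the answer and drop the raw data.
    return VisualPercept(shape="ball", colour="red",
                         location=(2.0, 1.0), distance=3.5)

percept = abstract_vision(raw_pixels=[0] * 1_000_000)
chime = AuditoryPercept(pitch_hz=880.0, loudness_db=60.0, direction_deg=45.0)

print(percept.colour, percept.distance)  # prints: red 3.5
```

The point of the sketch is that downstream computation operates on the abstraction (colour, distance), never on the million raw samples – the “state of all the bits” is simply not carried forward.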

Now, can other computational systems have experiences? I am inclined to say yes. All computations that manipulate information may be accompanied by some kind of “what it’s like”. However, it seems likely that the computational devices we have built are limited by the narrowness of their functionality and a paucity of broadly integrated complexity (e.g. complex feedback and feed-forward circuits). More than this, I tend to think that without a particular kind of memory system it is very unlikely that these computational activities have any kind of awareness, much less self-awareness.

That said, as complexity increases so too does the potential for experience, and therefore computational devices that mirror the circuitry of brains and incorporate the right kinds of memory should have experience (recall that consciousness is a kind of logical information space in which relationships are modelled). In particular, as memory morphs into the kind of global workspace (for want of a better description) outlined by researchers like Bernard Baars, experiences become accessible to the system as a form of awareness. Very complex systems with the right capabilities would be aware. Tononi’s Integrated Information Theory is probably very much on the right track in this regard.

While consciousness as experience might be explained as the product of brains computing information, it is unlikely that the unitary everyday experience we enjoy directly affects behaviour. The reason is that it seems to come too late. The consciousness we have seems fully formed and informative, yet processing perceptual input, accessing stored content, creating a virtual world of experience and then deciding on responses all take time, and time is of the essence for creatures in the world.

It is more likely that our brains are constantly predicting and revising those predictions as they shape behaviour to external circumstances. This may be accompanied by some kind of what-it’s-like experience, but this is likely not what we call everyday awareness/consciousness. As the brain computes various scenarios and generates draft narratives of what’s happening to us, behavioural plans are created, discarded and executed – all in tiny fractions of time, such is the computing power of the brain. Only once behaviour is complete is a full and final draft (as it were) likely to be produced for storage in memory systems. This final draft is a pared-down, information-rich abstraction of what just happened and is probably most useful for learning. My best guess is that it also informs ongoing processes and behaviour in a complex feedback/re-entrant looping mechanism.
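The draft-then-commit sequence described above can be caricatured as a control loop. The Python below is a toy sketch under loose assumptions – the function names, the scoring rule and the “memory” list are all invented for illustration, and a real predictive system would weigh prediction error rather than roll dice. The one structural feature it preserves is the order of events: many candidate drafts, one executed plan, and only afterwards a pared-down record committed to memory.

```python
import random

def generate_draft(observation: float) -> dict:
    """An invented stand-in for one candidate narrative/plan."""
    return {"estimate": observation + random.uniform(-0.1, 0.1),
            "plan": "reach"}

memory = []  # the "final drafts": pared-down records written after the fact

def act(observation: float) -> str:
    # Many drafts are generated and all but one discarded...
    drafts = [generate_draft(observation) for _ in range(5)]
    best = min(drafts, key=lambda d: abs(d["estimate"] - observation))
    executed = best["plan"]            # ...behaviour happens first,
    memory.append({"what_happened": executed,   # and the abstracted record
                   "outcome": "completed"})     # is stored only afterwards.
    return executed

print(act(1.0))    # the behaviour, e.g. reach
print(memory[-1])  # the after-the-fact record available for learning
```

Note the design point: `memory.append` sits after the plan is selected and executed, mirroring the claim that the unified record is constructed once behaviour is complete rather than driving it.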

While ongoing internal brain processes may be manipulating informational objects, true conscious experience – our unified unfolding narrative – is more likely to be an after-the-fact construction. This is why I see memory as critical to enjoying a rich conscious experience. In memory, the abstracted information that describes the external world, behavioural responses and functional outcomes is stored for learning and ongoing comparative feedback. I suspect that our moment-by-moment awareness of the world is in fact a memory function.

It is in this sense that some researchers have proposed the hippocampal formation as critical to these activities, and I think this is plausible. While I simply don’t know enough to say yay or nay, I think propositions such as those of Ralf-Peter Behrendt and Matt Faw may be on the right track. Faw’s suggestion that moment-by-moment experience is the first instantiation of a memory seems to fit the bill. Even if not exactly right, this proposition gives us grounds for viewing everyday consciousness as primarily a memory function.

Placing this into a simpler explanatory framework: brains have evolved to interpret and manipulate information about the body and the external world in order to manage behaviour. The world we experience is a kind of “logical space” in which information is abstracted into a model of how the brain processes and organises that information, and of the behaviours available to be enacted.

Essential to the complete experience that complex organisms like humans enjoy is a memory system that permits a recurrent, re-entrant process of remembering moment by moment, informed both by prior stored concepts/experiences and by predictive refinements, in order to model relations between the organism and the external world. Such models facilitate increasingly complex and dynamic behavioural responses.

In the end, we are simply very good natural simulations.