Aaron Sloman

Meanings, Theories, and Models of Consciousness

Abstract

Many people assume that the word "consciousness" refers to some kind of unique, well-defined state or process which is either present or absent, so that it makes sense to ask what IT is, how IT evolved, which animals have IT, which neural mechanisms bring IT about, whether machines can have IT etc.

This is actually a kind of deep linguistic self-deception, as indicated by the fact that people differ so much regarding what they mean by "consciousness" -- e.g. how they define IT. Even the same person can be inconsistent, e.g. sometimes claiming that consciousness is absent during sleep and later claiming that it is present during dreaming.

A popular alternative view, that consciousness is just a matter of degree of some one thing, fails to account for the variety of types of phenomena. Differences between an ant and an ape are not merely differences of degree: there are many kinds of things an ape can do that an ant cannot. Searching for differences of degree is the wrong way to understand biological variety, not least because mutations and crossover produce differences of kind.

A third stance, our design-based stance, treats the noun "consciousness" as referring loosely to a large and ill-defined variety of states and processes in which organisms (and other machines) have or acquire information. From this viewpoint an eagle's consciousness will involve different kinds of phenomena from those of an insect. Likewise a newborn human infant's consciousness will lack many features of a typical normal, wide-awake adult's.

For instance, not all animals have adult human-like self-consciousness, i.e. consciousness of being conscious of anything, and human neonates probably lack that sort of consciousness. Likewise, different varieties of consciousness are to be found in people with various kinds of brain damage or other pathologies.

In order to understand this huge variety of types of consciousness we need to consider the variety of information-processing architectures possible for organisms and machines, and then, for the different sorts of architectures, to analyse the kinds of consciousness they support. That will also involve analysing the varieties of perception, learning, decision making, affective states, development, communication and action they support.

To provide an ontological framework, we offer a (first draft) generic schema (CogAff) for thinking about and describing a wide range of types of architectures, and we conjecture that humans have a particularly rich instance of this schema, which we call H-CogAff.

If normal adult humans conform to H-CogAff, they will have at least reactive, deliberative and meta-management layers in their architectures, all concurrently active and not forming any simple control hierarchy. (These subsume, I think, the six layers in Minsky's 'The Emotion Machine'.)
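
To fix ideas, here is a minimal sketch in Python. It is purely our illustration, not code from the CogAff project, and every class, queue and message name in it is invented: it shows only that three layers can run concurrently, with none of them acting as a single top-level controller.

    # Purely illustrative sketch: three concurrently active layers,
    # none of which simply commands the others.
    import threading
    import queue
    import time

    class Layer(threading.Thread):
        """One concurrently active layer of a CogAff-style architecture."""
        def __init__(self, name, inbox):
            super().__init__(daemon=True)
            self.name = name
            self.inbox = inbox

        def run(self):
            while True:
                item = self.inbox.get()   # information reaching this layer
                print(f"{self.name} layer handling: {item}")

    # All three layers run at once; no master loop dispatches to them.
    inboxes = {name: queue.Queue()
               for name in ("reactive", "deliberative", "meta-management")}
    for name, inbox in inboxes.items():
        Layer(name, inbox).start()

    inboxes["reactive"].put("proprioceptive signal")
    inboxes["deliberative"].put("candidate plan")
    inboxes["meta-management"].put("record of a recent deliberation")
    time.sleep(0.1)   # give the daemon threads time to run before exit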

Different forms of consciousness will be supported by the different mechanisms in the different layers.

For instance, primitive kinds of self-consciousness, involving proprioceptive feedback in reactive layers, will be shared with many other animals. However, the ability to monitor, categorise, compare, and evaluate or remember one's own deliberative processes, and to relate them to possible mental states of others, will not be possible without both a deliberative and a meta-management layer.

A deliberative layer requires, and provides mechanisms for, consciousness of chunked aspects of the environment and the ability to learn and use associations between one's actions and their consequences. For instance, advanced deliberative mechanisms support consideration of what might happen, what might be the case, and what might explain something observed. Meta-management extends that to learning and thinking about one's own possible *internal* states and processes.
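
As a hypothetical illustration of the sort of mechanism meant here (the names below are our inventions, not drawn from the CogAff papers), a toy deliberative component might store learned action-consequence associations and use them to consider what might happen, without actually acting:

    # Toy deliberative mechanism: learns action -> consequence associations
    # and uses them to entertain hypothetical outcomes.
    from collections import defaultdict

    class Deliberator:
        def __init__(self):
            self.effects = defaultdict(set)   # (situation, action) -> outcomes

        def learn(self, situation, action, consequence):
            self.effects[(situation, action)].add(consequence)

        def consider(self, situation, actions):
            """Return hypothetical outcomes, performing no action."""
            return {a: self.effects.get((situation, a), {"unknown"})
                    for a in actions}

    d = Deliberator()
    d.learn("fruit on branch", "reach", "fruit grasped")
    d.learn("fruit on branch", "shake branch", "fruit falls")
    print(d.consider("fruit on branch", ["reach", "shake branch", "wait"]))
    # {'reach': {'fruit grasped'}, 'shake branch': {'fruit falls'},
    #  'wait': {'unknown'}}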

Insects very probably lack both layers: it is not just that they have tiny degrees of them. (This is an empirical claim and could turn out to be false!)

A fully developed version of this architectural theory will explain what it is about the human architecture that gives rise to particular popular beliefs and muddles about consciousness in humans.

For instance, robots with human-like architectures will fall into the same traps as humans when they philosophise about consciousness, as several science fiction writers have predicted.

Notice that this architecture-based theory implies that there can be many concurrent causally active processes in virtual machines within the same individual. It does NOT imply that mental states can be defined in terms of their externally observable input-output contingencies. On the contrary, many interesting ones cannot -- requiring us to replace the simple-minded forms of functionalism normally discussed by philosophers with an engineering-inspired version: Virtual Machine Functionalism. [Most states of a sophisticated operating system cannot be defined by input-output contingencies of the whole machine either.]
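
The point can be made concrete with a deliberately simplistic toy example of our own (nothing below comes from the cited papers): two systems with exactly the same externally observable input-output mapping, one of which nevertheless maintains additional internal virtual-machine state. No amount of external I/O testing distinguishes them, so that state cannot be defined in terms of whole-machine I/O contingencies.

    # Toy example: identical I/O behaviour, different internal
    # virtual-machine states. All names are invented for illustration.

    class PlainResponder:
        def respond(self, stimulus):
            return stimulus.upper()

    class SelfMonitoringResponder:
        def __init__(self):
            self._trace = []                 # internal record of processing

        def respond(self, stimulus):
            self._trace.append(stimulus)     # internal process runs here,
            self._summary = len(self._trace) # invisible in the I/O mapping
            return stimulus.upper()          # external behaviour identical

    # Externally the two machines are indistinguishable:
    for machine in (PlainResponder(), SelfMonitoringResponder()):
        assert machine.respond("ping") == "PING"

Of course in a realistic architecture such internal states also interact causally with one another, as stressed above; the toy merely shows why whole-machine I/O contingencies underdetermine them.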

These ideas are developed in papers and presentations on the CogAff web-site:

http://www.cs.bham.ac.uk/research/cogaff/

http://www.cs.bham.ac.uk/research/cogaff/talks/

including a paper on consciousness, jointly authored with Ron Chrisley, published in a recent issue of the Journal of Consciousness Studies edited by Owen Holland.