Aaron Sloman

The H-CogAff architecture

Abstract

Some of the implications of the postulated H-CogAff architecture will be analysed: for instance, the implication that different perceptual and motor layers co-evolved with different central layers, and the implication that there are probably far more types of learning and development than have hitherto been studied (e.g. different kinds of learning in different parts of the architecture, and some kinds of learning linking different architectural layers). H-CogAff also suggests a wide variety of types of affective states, making nonsense of simplistic taxonomies of types of emotion. If there's time, I'll discuss architectural and representational requirements for the perception of affordances -- these inherently involve counterfactual conditionals.
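
To make the layered decomposition concrete, here is a minimal Python sketch, assuming the usual presentation of the CogAff schema as three columns (perception, central processing, action) crossed with three layers (reactive, deliberative, meta-management), with H-CogAff adding fast global alarm routes. All class and function names are hypothetical, invented for this illustration; this is one reading of the grid, not code from Sloman's group.

from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple

# All names here are hypothetical, invented for this sketch.

LAYERS = ("reactive", "deliberative", "meta_management")
COLUMNS = ("perception", "central", "action")

def _passthrough(signal: dict) -> dict:
    # Placeholder transform; a real model would put layer-specific processing
    # here (pattern-triggered reactions, planning, self-monitoring, etc.).
    return dict(signal)

@dataclass
class Component:
    """One cell of the 3x3 grid, e.g. (reactive, perception)."""
    layer: str
    column: str
    process: Callable[[dict], dict] = _passthrough

@dataclass
class HCogAffSketch:
    grid: Dict[Tuple[str, str], Component] = field(default_factory=dict)

    def __post_init__(self) -> None:
        # Populate the full layers-by-columns grid with placeholder components.
        for layer in LAYERS:
            for column in COLUMNS:
                self.grid[(layer, column)] = Component(layer, column)

    def perceive(self, stimulus: dict) -> Dict[str, dict]:
        # Layered perception: each layer interprets the same stimulus at its
        # own level of abstraction, rather than one perception module feeding
        # a single central processor -- one way of reading the claim that
        # perceptual layers co-evolved with the corresponding central layers.
        return {layer: self.grid[(layer, "perception")].process(stimulus)
                for layer in LAYERS}

    def alarm_raised(self, percepts: Dict[str, dict]) -> bool:
        # H-CogAff also posits fast, global alarm mechanisms that can redirect
        # the whole system, bypassing slow deliberation; a reactive-level flag
        # stands in for that here.
        return bool(percepts["reactive"].get("threat", False))

if __name__ == "__main__":
    agent = HCogAffSketch()
    percepts = agent.perceive({"visual": "looming shape", "threat": True})
    print("alarm raised:", agent.alarm_raised(percepts))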

However, the theory still has many gaps, and the broad-brush divisions postulated in the CogAff schema are clearly inadequate in relation to the known diversity of biological phenomena. So we need a long-term research programme aimed at extending and refining our schema, along with its ontology, both for architectures and for mental states and processes.

This, in turn, should lead to a deeper, richer variant of the H-CogAff architecture, better able to explain the enormous variety of commonplace and bizarre mental phenomena found both in everyday life and in neurological and psychiatric wards, and especially the many amazing kinds of development that occur in the first ten years of a child's life.

For instance, we still lack a good theory of what it is to find something funny, or of what aesthetic pleasure is. If our future robots cannot enjoy hearing a Bach two-part invention, or be entertained by Asimov's story about robots becoming religious bigots, then we'll have missed something. But there's a long, long way to go, and part of our aim is to draw attention to the many phenomena that we cannot yet explain or model, in order to produce requirements specifications for conscious machines of many kinds (biological and artificial).

This defines long-term collaborative research goals and should help to defragment AI and cognitive science, diverting attention away from silly debates such as whether GOFAI has failed, whether connectionist models are best, or whether dynamical systems or embodiment are the key to anything.

Here's a sketch of H-CogAff:

http://www.cs.bham.ac.uk/~axs/fig/hcogaff.gif