Tom Ziemke

What's life got to do with it? Why an artificial self might have to be autopoietic

Abstract

Many researchers interested in the possibility of consciousness in computers, robots or other artefacts agree that several aspects of consciousness require an agent, i.e. a system that interacts with its environment by means of perception, action, etc. Much of the discussion focuses on the question of which type(s) of information processing, representation, learning processes, etc. an artificial agent would have to be equipped with in order to achieve different types or levels of consciousness. The question of exactly what it takes for an artificial (or natural) system to be an agent in the first place, and thus a candidate for an artificial self or consciousness, on the other hand, receives much less attention, although several authors (e.g. Franklin & Graesser) have pointed out that current definitions of what constitutes an agent are somewhat vague. This talk discusses the theories of von Uexküll, Maturana & Varela, Bickhard, and Christensen & Hooker, which, roughly speaking, argue that what it really takes for a system to constitute an agent capable of cognition, consciousness, etc. is that it is a living system. Implications for the possibility of consciousness in artefacts are discussed.