Towards a Theory of Awareness at ICAART 2024

At the ICAART 2024 conference in Rome we had a special session on the Awareness Inside topic, organized by Luc Steels and funded by the EIC Pathfinder programme.

In the session I presented the initial developments towards a theory of awareness that the ASLab team is pursuing within the CORESENSE and METATOOL projects.

We gave a short, ten-minute presentation, followed by an interesting conversation about the ideas presented and the possibility of developing a complete model-based theory of aware agents.

The presentation addressed the possibility of having a solid theory of awareness for both humans and machines, and the elements that such a theory should have. We discussed two of these elements: the clear delimitation of a domain of phenomena, and the essential concepts that are the cornerstones of the theory.

These were the concepts that we proposed/discussed:

  • Sensing: The production of information for the subject from an object.
  • Perceiving: The integration of the sensory information bound to an object into a model of the object.
  • Model: Integrated actionable representation; an information structure that sustains a modelling relation.
  • Engine: Set of operations over a model.
  • Inference: Deriving conclusions from the model by applying engines to it.
  • Valid inference: An inference whose result matches the phenomenon at the modelled object.
  • Exert a model: Perform valid inferences from the model.
  • Understanding: Achieving exertability of a model of the object/environment.
  • Specific understanding: Understanding concerning a specific set of exertions.
  • Mission understanding: Understanding concerning a set of mission-bound exertions.
  • Omega understanding: Understanding all possible exertions of a model in relation to an object.
  • Awareness: Real-time understanding of sensory flows.
  • Self-awareness: Subject-bound awareness. Awareness concerning inner perception.
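To make the relations among these concepts concrete, here is a minimal sketch in Python. All names and numeric values are illustrative assumptions, not part of the theory's formal definitions; it only shows how sensing, perceiving, engines, inference, and inference validity fit together.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Model:
    """Integrated actionable representation of an object (hypothetical structure)."""
    state: Dict[str, float] = field(default_factory=dict)

    def perceive(self, reading: Dict[str, float]) -> None:
        # Perceiving: integrate sensory information bound to the object into the model
        self.state.update(reading)

# Engine: a set of operations over a model (here, a single callable)
Engine = Callable[[Model], float]

def infer(model: Model, engine: Engine) -> float:
    # Inference: derive a conclusion by applying an engine to the model
    return engine(model)

def is_valid(result: float, observed: float, tol: float = 0.1) -> bool:
    # Valid inference: the result matches the phenomenon at the modelled object
    return abs(result - observed) <= tol

# Sensing: production of information for the subject from an object
reading = {"position": 1.0, "velocity": 0.5}

m = Model()
m.perceive(reading)

# An engine predicting position after one time step (constant-velocity assumption)
predict_next = lambda model: model.state["position"] + model.state["velocity"]

prediction = infer(m, predict_next)        # 1.5
print(is_valid(prediction, observed=1.48)) # True: this inference is valid
```

In this reading, understanding would be achieving exertability of `m` (the ability to keep producing valid inferences from it), and awareness would be doing so in real time over the incoming sensory flow.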

Get the complete presentation:

And the paper:

IEEE Standard for Robot Task Representation

We have completed the development of a new standard on ontologies for robotics. In this case we have addressed the specification of concepts related to the assignment of tasks to robots for execution.

The new IEEE P1872.1 Standard for Robot Task Representation is available in draft status at the IEEE Xplore repository. The purpose of the standard is to provide a robot task ontology for knowledge representation and reasoning in robotics and automation.

This standard defines an ontology that allows the representation, reasoning, and communication of task knowledge in the robotics and automation domain. The ontology includes a list of essential terms and their definitions, attributes, types, structures, properties, constraints, and relationships that let planners and designers represent task knowledge and better communicate among agents in the automated system.

The standard provides a unified way of representing robot task knowledge and a common set of terms and definitions structured in a logical theory, allowing for unambiguous knowledge transfer among groups of humans, robots, and other artificial systems. It is linked with existing robotics ontologies. Given the recent advances in robot standards (such as ISO 15066:2016 for collaborative robots or IEEE 1872.2:2022 for autonomous robots), the proper definition and implementation of tasks and task-based robot control has become key to advanced human-robot interaction.

Having a shared common robot task representation will also allow for greater reuse of task knowledge among research and development efforts in the same robot domain as well as efforts in different robot domains.
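As a rough illustration of why a shared task representation helps, here is a hypothetical sketch of structured task knowledge. The class and attribute names (`Task`, `preconditions`, `effects`, `subtasks`) are my own assumptions for illustration and are not the actual terms defined by IEEE P1872.1.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """Illustrative task record: name, conditions, and decomposition."""
    name: str
    preconditions: List[str] = field(default_factory=list)
    effects: List[str] = field(default_factory=list)
    subtasks: List["Task"] = field(default_factory=list)

def flatten(task: Task) -> List[str]:
    """List a task and its subtasks in depth-first order."""
    names = [task.name]
    for sub in task.subtasks:
        names.extend(flatten(sub))
    return names

# A compound task shared between a planner and an executing robot
pick = Task("pick", preconditions=["object_reachable"], effects=["object_held"])
place = Task("place", preconditions=["object_held"], effects=["object_placed"])
pick_and_place = Task("pick_and_place", subtasks=[pick, place])

print(flatten(pick_and_place))  # ['pick_and_place', 'pick', 'place']
```

With an agreed vocabulary of this kind, two systems from different domains could exchange the `pick_and_place` structure and interpret its preconditions and effects identically, which is the reuse the paragraph above describes.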

Get the draft of IEEE P1872.1 Standard for Robot Task Representation from IEEE.

Asking ChatGPT about our possibilities in CORESENSE

Much as people in ancient times asked the Pythia about their future, I asked ChatGPT about our possibilities in the CORESENSE project to create machines capable of understanding their environment. This is what it said:

Me: Can a machine understand its environment?

ChatGPT: Whether a machine can understand its environment depends on its capabilities and design. Some machines, such as certain types of robots or artificial intelligence systems, are designed to gather and process information about their environment in order to make decisions and take actions. These machines may be able to build a model or representation of their environment and use this information to navigate and interact with it. However, the extent to which these machines can truly “understand” their environment is a complex and contentious topic that is not fully understood. Some argue that machines can only simulate understanding and do not have the same level of conscious awareness and comprehension as humans. Others believe that machines can achieve true understanding through advanced algorithms and learning systems. Ultimately, the ability of a machine to understand its environment depends on its design and capabilities.

ChatGPT is an AI-based chatbot developed by OpenAI to hold a dialogue, much in the manner of a Turing test. The program is a very large English-language model that has been fine-tuned with both supervised and reinforcement learning techniques.

It has always been models

There is a relatively recent boom in model-based X: model-based development, model-based design, model-based systems engineering, model-based minds, …

In all domains of engineering, it looks as if we have just discovered the use of models to support our work. But this is obviously false. It has always been models. All around. All the time.

When engineers-to-be start their studies, the first things they learn are physics and mathematics: i.e., how to model reality and the language in which to build the models. In recent parlance we would say that math is just a metamodel of reality: a model of the models that we use to capture the knowledge we have about an extant system or the ideas we have about a system to be engineered.

The distinction between knowledge and ideas may seem relevant, but it is not so much. Both are mental content that may or may not be related to some reality out there: models of realities-that-are or realities-to-be that matter to us or our stakeholders.

It has always been models. In our minds and in the collaborative processes that we use to engineer our systems. Model-based X is not new. It is just good, old-fashioned engineering.