New Systematic Review article at Frontiers in Robotics and AI

We have just published a systematic review article in the journal Frontiers in Robotics and AI. The article, titled "A survey of ontology-enabled processes for dependable robot autonomy", analyses the use of formal ontologies to improve robot autonomy.

The article is the product of the work of Esther Aguado, as part of her excellent PhD thesis (with important collaborations from the rest of the authors in the context of the Horizon 2020 project ROBOMINERS and the Horizon Europe project CORESENSE).

A summary of the article:

Autonomous robots are already present in a variety of domains performing complex tasks. Their deployment in open-ended environments offers endless possibilities. However, there are still risks due to unresolved issues in dependability and trust. Knowledge representation and reasoning provide tools for handling explicit information, endowing systems with a deeper understanding of the situations they face. This article explores the use of declarative knowledge for autonomous robots to represent and reason about their environment, their designs, and the complex missions they accomplish. This information can be exploited at runtime by the robots themselves to adapt their structure or re-plan their actions to finish their mission goals, even in the presence of unexpected events. The primary focus of this article is to provide an overview of popular and recent research that uses knowledge-based approaches to increase robot autonomy. Specifically, the ontologies surveyed are related to the selection and arrangement of actions, representing concepts such as autonomy, planning, or behavior. Additionally, they may be related to overcoming contingencies with concepts such as fault or adapt. A systematic exploration is carried out to analyze the use of ontologies in autonomous robots, with the objective of facilitating the development of complex missions. Special attention is dedicated to examining how ontologies are leveraged in real time to ensure the successful completion of missions while aligning with user and owner expectations. The motivation of this analysis is to examine the potential of knowledge-driven approaches as a means to improve flexibility, explainability, and efficacy in autonomous robotic systems.

Get the article:

And also Aguado’s Thesis:

Towards a Theory of Awareness at ICAART 2024

At the ICAART 2024 Conference in Rome we had a Special Session organized by Luc Steels on the Awareness Inside topic funded by the EIC Pathfinder programme.

In the session I presented the initial developments towards a theory of awareness that the ASLab team is pursuing within the CORESENSE and METATOOL projects.

We gave a short, ten-minute presentation, followed by an interesting conversation about the ideas presented and the possibility of developing a complete model-based theory of aware agents.

The presentation addressed the possibility of having a solid theory of awareness for both humans and machines, and the elements that such a theory should have. We discussed two of these elements: the clear delimitation of a domain of phenomena, and the essential concepts that are the cornerstones of the theory.

These were the concepts that we proposed/discussed:

  • Sensing: The production of information for the subject from an object.
  • Perceiving: The integration of the sensory information bound to an object into a model of the object.
  • Model: Integrated actionable representation; an information structure that sustains a modelling relation.
  • Engine: Set of operations over a model.
  • Inference: Derive conclusions from the model. Apply engines to model.
  • Valid inference: An inference whose result matches the phenomenon at the modelled object.
  • Exert a model: Perform valid inferences from the model.
  • Understanding: Achieving exertability of a model of the object/environment.
  • Specific understanding: Understanding concerning a specific set of exertions.
  • Mission understanding: Understanding concerning a set of mission-bound exertions.
  • Omega understanding: Understanding all possible exertions of a model in relation to an object.
  • Awareness: Real-time understanding of sensory flows.
  • Self-awareness: Subject-bound awareness. Awareness concerning inner perception.
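The definitions above chain together: sensing feeds perception, perception maintains a model, engines run inferences over the model, and understanding is achieved when those inferences are valid. As a purely illustrative sketch of that chain in a trivially simple toy domain (all names and the temperature example here are hypothetical, not taken from the presentation):

```python
from dataclasses import dataclass
from typing import Optional

# Toy world: an "object" is just a value we can observe.
@dataclass
class WorldObject:
    temperature: float  # the phenomenon at the object

def sense(obj: WorldObject) -> float:
    """Sensing: produce information for the subject from an object."""
    return obj.temperature + 0.1  # a slightly noisy reading

@dataclass
class Model:
    """Model: an integrated, actionable representation of the object."""
    estimate: Optional[float] = None

def perceive(model: Model, reading: float) -> None:
    """Perceiving: integrate sensory information into the model."""
    model.estimate = reading if model.estimate is None else (model.estimate + reading) / 2

def infer_too_hot(model: Model) -> bool:
    """Inference: an engine operation deriving a conclusion from the model."""
    return model.estimate is not None and model.estimate > 30.0

def inference_is_valid(model: Model, obj: WorldObject) -> bool:
    """Valid inference: the result matches the phenomenon at the modelled object."""
    return infer_too_hot(model) == (obj.temperature > 30.0)

# Awareness, in these terms, would mean running this loop in real time
# over the sensory flow.
obj = WorldObject(temperature=35.0)
model = Model()
perceive(model, sense(obj))
```

Exerting the model here means performing the `infer_too_hot` inference and acting on it; understanding is achieved when such inferences are valid with respect to the object.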

Get the complete presentation:

And the paper:

IEEE Standard for Robot Task Representation

We have completed the development of a new standard on ontologies for robotics. In this case we have addressed the specification of concepts related to the assignment of tasks to robots for execution.

The new IEEE P1872.1 Standard for Robot Task Representation is available in draft status at the IEEE Xplore repository. The purpose of the standard is to provide a robot task ontology for knowledge representation and reasoning in robotics and automation.

This standard defines an ontology that allows the representation, reasoning, and communication of task knowledge in the robotics and automation domain. The ontology includes a list of essential terms and their definitions, attributes, types, structures, properties, constraints, and relationships that allow planners and designers to represent task knowledge and communicate better among the agents in the automated system.

The standard provides a unified way of representing robot task knowledge: a common set of terms and definitions structured in a logical theory, allowing for unambiguous knowledge transfer among groups of humans, robots, and other artificial systems. It is linked with existing robotics ontologies. Given recent advances in robot standards (such as ISO 15066:2016 for collaborative robots or IEEE 1872.2:2022 for autonomous robots), the proper definition and implementation of tasks and task-based robot control has become key to advanced human-robot interaction.

Having a shared common robot task representation will also allow for greater reuse of task knowledge among research and development efforts in the same robot domain as well as efforts in different robot domains.
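To give a flavour of what shared task knowledge looks like in practice, here is a minimal sketch of a task-and-assignment structure. The class names, properties, and example tasks below are purely hypothetical illustrations of the general idea; they are not the vocabulary defined by IEEE P1872.1, for which the draft itself is the reference.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """A hypothetical task concept: name, conditions, and a decomposition relation."""
    name: str
    preconditions: List[str] = field(default_factory=list)
    effects: List[str] = field(default_factory=list)
    subtasks: List["Task"] = field(default_factory=list)

@dataclass
class Assignment:
    """Relates a task to the agent expected to execute it."""
    task: Task
    agent: str

# A planner and a robot controller sharing these definitions can exchange
# task knowledge without ambiguity about what each term means.
pick = Task("PickObject", preconditions=["object_visible"], effects=["object_held"])
place = Task("PlaceObject", preconditions=["object_held"], effects=["object_placed"])
mission = Task("PickAndPlace", subtasks=[pick, place])

assignment = Assignment(mission, agent="robot_1")
```

A standardized ontology goes much further than this sketch, fixing the definitions, constraints, and relationships in a logical theory so that different systems interpret the same terms identically.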

Get the draft of IEEE P1872.1 Standard for Robot Task Representation from IEEE.

Asking ChatGPT about our possibilities in CORESENSE

Much as people in ancient times asked the Pythia about their future, I asked ChatGPT about our possibilities in the CORESENSE project to create machines capable of understanding their environment. This is what it said:

Me: Can a machine understand its environment?

ChatGPT: Whether a machine can understand its environment depends on its capabilities and design. Some machines, such as certain types of robots or artificial intelligence systems, are designed to gather and process information about their environment in order to make decisions and take actions. These machines may be able to build a model or representation of their environment and use this information to navigate and interact with it. However, the extent to which these machines can truly “understand” their environment is a complex and contentious topic that is not fully understood. Some argue that machines can only simulate understanding and do not have the same level of conscious awareness and comprehension as humans. Others believe that machines can achieve true understanding through advanced algorithms and learning systems. Ultimately, the ability of a machine to understand its environment depends on its design and capabilities.

ChatGPT is an AI-based chatbot developed by OpenAI to hold a dialogue, much in the style of the Turing test. The program is a very large English-language model that has been fine-tuned with both supervised and reinforcement learning techniques.

It has always been models

There is a relatively recent boom in model-based X: model-based development, model-based design, model-based systems engineering, model-based minds, …

In all the domains of engineering, it looks like we have just discovered the use of models to support our work. But this is, obviously, false. It has always been models. All around. All the time.

When engineers-to-be start their studies, the first things they learn are physics and mathematics: i.e. how to model reality and the language in which to build the models. In recent parlance we would say that math is just a metamodel of reality: a model of the models that we use to capture the knowledge we have about an extant system or the ideas we have about a system to be engineered.

The distinction between knowledge and ideas may seem relevant, but it is not so much. Both are mental content that may or may not be related, or correlated, to some reality out there. Both knowledge and ideas are models of realities-that-are or realities-to-be that are of relevance to us or our stakeholders.

It has always been models. In our minds and in the collaborative processes that we use to engineer our systems. Model-based X is not new. It is just good, old-fashioned engineering.