New Systematic Review article at Frontiers in Robotics and AI

We have just published a systematic review article in the journal Frontiers in Robotics and AI. The article, titled “A survey of ontology-enabled processes for dependable robot autonomy”, analyses the use of formal ontologies to improve robot autonomy.

The article is the product of the work of Esther Aguado as part of her excellent PhD thesis, with important collaborations from the rest of the authors in the context of the Horizon 2020 project ROBOMINERS and the Horizon Europe project CORESENSE.

A summary of the article:

Autonomous robots are already present in a variety of domains performing complex tasks. Their deployment in open-ended environments offers endless possibilities. However, there are still risks due to unresolved issues in dependability and trust. Knowledge representation and reasoning provide tools for handling explicit information, endowing systems with a deeper understanding of the situations they face. This article explores the use of declarative knowledge for autonomous robots to represent and reason about their environment, their designs, and the complex missions they accomplish. This information can be exploited at runtime by the robots themselves to adapt their structure or re-plan their actions to finish their mission goals, even in the presence of unexpected events. The primary focus of this article is to provide an overview of popular and recent research that uses knowledge-based approaches to increase robot autonomy. Specifically, the ontologies surveyed are related to the selection and arrangement of actions, representing concepts such as autonomy, planning, or behavior. Additionally, they may be related to overcoming contingencies with concepts such as fault or adapt. A systematic exploration is carried out to analyze the use of ontologies in autonomous robots, with the objective of facilitating the development of complex missions. Special attention is dedicated to examining how ontologies are leveraged in real time to ensure the successful completion of missions while aligning with user and owner expectations. The motivation of this analysis is to examine the potential of knowledge-driven approaches as a means to improve flexibility, explainability, and efficacy in autonomous robotic systems.

Get the article:

And also Aguado’s Thesis:

Towards a Theory of Awareness at ICAART 2024

At the ICAART 2024 Conference in Rome we had a Special Session organized by Luc Steels on the Awareness Inside topic funded by the EIC Pathfinder programme.

In the session I presented our initial developments towards a theory of awareness, carried out by the ASLab team within the CORESENSE and METATOOL projects.

We gave a short, ten-minute presentation, followed by an interesting conversation around the ideas presented and the possibility of developing a complete model-based theory of aware agents.

The presentation addressed the possibility of having a solid theory of awareness for both humans and machines, and what elements such a theory should have. We talked about two of these elements: the clear delimitation of a domain of phenomena, and the essential concepts that are the cornerstones of the theory.

These were the concepts that we proposed/discussed:

  • Sensing: The production of information for the subject from an object.
  • Perceiving: The integration of the sensory information bound to an object into a model of the object.
  • Model: Integrated actionable representation; an information structure that sustains a modelling relation.
  • Engine: Set of operations over a model.
  • Inference: Derive conclusions from the model. Apply engines to model.
  • Valid inference: An inference whose result matches the phenomenon at the modelled object.
  • Exert a model: Perform valid inferences from the model.
  • Understanding: Achieving exertability of a model of the object/environment.
  • Specific understanding: Understanding concerning a specific set of exertions.
  • Mission understanding: Understanding concerning a set of mission-bound exertions.
  • Omega understanding: Understanding all possible exertions of a model in relation to an object.
  • Awareness: Real-time understanding of sensory flows.
  • Self-awareness: Subject-bound awareness. Awareness concerning inner perception.
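To make the relations among these concepts concrete, here is a minimal, purely hypothetical sketch of a few of them (Model, Engine, Inference, Valid inference) as Python structures. All class and field names are illustrative assumptions, not definitions taken from the CORESENSE theory itself.

```python
# Hypothetical sketch: a Model is an integrated, actionable representation;
# an Engine is a set of operations over a model; inference applies an
# engine to a model to derive conclusions.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Model:
    """Integrated actionable representation of an object (toy version)."""
    facts: Dict[str, str]

@dataclass
class Engine:
    """A set of operations over a model."""
    operation: Callable[["Model", str], str]

    def infer(self, model: Model, query: str) -> str:
        # Inference: derive a conclusion by applying the engine to the model.
        return self.operation(model, query)

# A toy engine that simply looks facts up in the model.
lookup = Engine(operation=lambda m, q: m.facts.get(q, "unknown"))

world = Model(facts={"sky": "blue"})
conclusion = lookup.infer(world, "sky")

# A valid inference is one whose result matches the phenomenon at the
# modelled object; in this toy we can only check against the stored fact.
assert conclusion == "blue"
```

In this reading, "exerting" a model means performing such valid inferences from it, and "understanding" is achieving exertability over the inferences a mission requires.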

Get the complete presentation:

And the paper:

IEEE Standard for Robot Task Representation

We have completed the development of a new standard on ontologies for robotics. In this case we have addressed the specification of concepts related to the assignment of tasks to robots for their execution.

The new IEEE P1872.1 Standard for Robot Task Representation is available in draft status at the IEEE Xplore repository. The purpose of the standard is to provide a robot task ontology for knowledge representation and reasoning in robotics and automation.

This standard defines an ontology that allows the representation, reasoning, and communication of task knowledge in the robotics and automation domain. The ontology includes a list of essential terms and their definitions, attributes, types, structures, properties, constraints, and relationships that let planners and designers represent task knowledge, allowing for better communication among agents in the automated system.

The standard provides a unified way of representing robot task knowledge and a common set of terms and definitions structured in a logical theory, allowing for unambiguous knowledge transfer among groups of humans, robots, and other artificial systems. It is linked with existing robotics ontologies. Given the recent advances in robot standards (such as ISO 15066:2016 for collaborative robots or IEEE 1872.2:2022 for autonomous robots), the proper definition and implementation of tasks and task-based robot control has become key to advanced human-robot interaction.

Having a shared common robot task representation will also allow for greater reuse of task knowledge among research and development efforts in the same robot domain as well as efforts in different robot domains.
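As a purely illustrative sketch of what task knowledge looks like as typed structures, the snippet below models tasks with preconditions, effects, and subtasks. The class and property names here are assumptions for illustration only; the actual terms, relationships, and constraints are those defined in the IEEE P1872.1 draft itself.

```python
# Hypothetical task-knowledge sketch: NOT the IEEE P1872.1 vocabulary.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """A unit of task knowledge assignable to a robot (toy version)."""
    name: str
    preconditions: List[str] = field(default_factory=list)
    effects: List[str] = field(default_factory=list)
    subtasks: List["Task"] = field(default_factory=list)

# A composite pick-and-place task: the effect of "pick" satisfies the
# precondition of "place", which is the kind of relationship a shared
# task ontology lets planners state unambiguously.
pick = Task("pick", preconditions=["object-reachable"], effects=["object-held"])
place = Task("place", preconditions=["object-held"], effects=["object-at-goal"])
mission = Task("pick-and-place", subtasks=[pick, place])
```

A shared vocabulary of this kind is what allows a plan produced by one agent to be checked and executed by another without renegotiating the meaning of each term.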

Get the draft of IEEE P1872.1 Standard for Robot Task Representation from IEEE.

AI for Conscious Machines

November 14, 2023

The many challenges of Artificial Intelligence. November 13-15, 2023

A workshop to evaluate the challenges and repercussions of the “new” Artificial Intelligence (AI), held at Residencia La Cristalera, Miraflores de la Sierra (Madrid).

It was organized by Juan Poyatos (CNB), Juan Rodríguez (IIIA), and David Ríos (ICMAT), and sponsored by AIHub-CSIC and the AXA Research Fund.

https://aihub.csic.es/evento/workshop-the-many-challenges-of-artificial-intelligence/


There I gave a talk on machine consciousness based on our current activity in the Horizon Europe projects CORESENSE and METATOOL.

A Chat with ChatGPT

This is a short “education” dialog between a user who wants to improve his knowledge of “Cognitive Science” [1] and ChatGPT [2]. It shows how ChatGPT can be used to improve understanding of a domain, and the approach can be immediately mapped to other domains [3].

The user configures a prompt in such a way as to direct how the conversation will unfold:

I am learning cognitive science. Ask me a series of questions to test my knowledge. Identify errors and areas I do not know well. Correct my answers to make them better.
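The same dialog could also be started programmatically. The sketch below is a hypothetical illustration using the OpenAI Python client; the model name and client call are assumptions on our part, since the original dialog was held in the ChatGPT web interface.

```python
# Hypothetical sketch: starting the tutoring dialog via the API instead
# of the web UI. The commented-out call requires an API key.
PROMPT = (
    "I am learning cognitive science. Ask me a series of questions to "
    "test my knowledge. Identify errors and areas I do not know well. "
    "Correct my answers to make them better."
)

# The conversation is a growing list of role/content messages; the
# configured prompt is simply its first user turn.
messages = [{"role": "user", "content": PROMPT}]

# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(reply.choices[0].message.content)
```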

Starting the dialog

Connect to ChatGPT and start the dialog with a prompt.

User:

I am learning cognitive science. Ask me a series of questions to test my knowledge. Identify errors and areas I do not know well. Correct my answers to make them better.

ChatGPT:

Sure! I’d be happy to ask you a series of questions to test your knowledge in cognitive science. Let’s get started:

1. What is cognitive science?
2. Name some sub-disciplines within cognitive science.
3. What is the difference between bottom-up and top-down processing?
4. Explain the concept of working memory.
5. How does classical conditioning work, and what are some real-life examples?
6. Can you define the term “schema” and provide an example?
7. What is the dual-process theory of decision-making?
8. Describe the concept of attentional processes and provide an example.
9. Explain the difference between implicit and explicit memory.
10. What is the role of the prefrontal cortex in cognition?

Feel free to answer these questions, and I’ll provide feedback and corrections where necessary.

The dialog then proceeds by 1) ChatGPT asking the questions, 2) the user answering them, and 3) ChatGPT commenting on the answers.

What is cognitive science?

User:

Cognitive science is the science that studies mind

ChatGPT:

That’s a good start! Cognitive science does indeed study the mind, but it is a multidisciplinary field that goes beyond just studying the mind. It encompasses the study of various mental processes, including perception, attention, memory, language, problem-solving, and decision-making. Cognitive science combines insights and methodologies from psychology, neuroscience, computer science, linguistics, philosophy, and anthropology to understand how the mind works.

What are some sub-disciplines within cognitive science?

User:

linguistics, psychology, anthropology, neuroscience, artificial intelligence

ChatGPT:

Great! You’ve mentioned some of the key sub-disciplines within cognitive science. Here are a few more you could consider:

  • Cognitive psychology: Focuses on studying mental processes such as perception, attention, memory, language, and problem-solving.
  • Computational modeling: Involves developing computer models and simulations to understand and replicate cognitive processes.
  • Cognitive neuroscience: Investigates the neural basis of cognitive processes using techniques such as brain imaging (e.g., fMRI) and neurophysiological recordings.
  • Philosophy of mind: Explores philosophical questions related to consciousness, perception, intentionality, and the nature of the mind.
  • Cognitive anthropology: Examines how culture and societal factors influence cognitive processes.
  • Artificial intelligence and cognitive robotics: Involves developing intelligent systems and robots that can mimic and understand human cognitive abilities.

Now let’s move on to the next question:

What is the difference between bottom-up and top-down processing?

User:

bottom up refers to the thinking processes at the most abstract level and how they are realized by systems and subsystems down to neurons. Bottom-up processing addresses the flow of signal through sensing and neural elements and how they organise in structures up to high-level cognitive processes

ChatGPT:

It seems like there is a slight misunderstanding in your answer. Let me clarify the concepts of bottom-up and top-down processing:

Bottom-up processing refers to the way sensory information is received and processed from the environment. It starts with the basic sensory input and then progresses to higher-level cognitive processing. In other words, it involves building up a perception or understanding of something based on the individual sensory cues and features present in the input. It is data-driven and starts from the “bottom” or the sensory level.

On the other hand, top-down processing involves the use of prior knowledge, expectations, and cognitive processes to interpret and make sense of incoming information. It relies on higher-level cognitive processes, such as memory, attention, and context, to influence the interpretation and understanding of sensory input. Top-down processing is driven by existing knowledge and expectations, guiding the interpretation of sensory information from the “top” or higher-level cognitive processes.

In summary, bottom-up processing starts with sensory information and builds up to higher-level cognition, while top-down processing involves using prior knowledge and expectations to guide the interpretation of incoming sensory information.

Let’s proceed to the next question:

Explain the concept of working memory.

User:

Is the memory that holds recent data for the reasoning processes and that will change as the mind think

ChatGPT:

You’re on the right track! Working memory is a cognitive system responsible for temporarily holding and manipulating information in the mind as we engage in various mental tasks. It is often referred to as the “workbench” or “mental scratchpad” of the mind.

Working memory involves the active processing and manipulation of information rather than long-term storage. It is closely related to our attentional processes and plays a vital role in complex cognitive tasks such as problem-solving, reasoning, decision-making, and language comprehension.

The capacity of working memory is limited, and it can hold only a certain amount of information at any given time. This capacity can vary among individuals. Working memory can be divided into several components, such as the phonological loop (for holding and manipulating verbal information), the visuospatial sketchpad (for visual and spatial information), and the central executive (for coordinating and managing information).

It’s important to note that working memory is dynamic and can change as we engage in different cognitive activities. As new information is processed and manipulated, the contents of working memory can be updated or replaced.

Now let’s move on to the next question:

How does classical conditioning work, and what are some real-life examples?

User:

classical conditioning associates some perceptual clues to other negative or possitive situations so the agent eventually reacts to the conditioned stimulus as it would react to the non-conditioned stimulus. Paulov’s dog is an example

ChatGPT:

You are correct in your description of classical conditioning, but there are a few clarifications I can provide.

Classical conditioning, also known as Pavlovian conditioning, is a type of associative learning in which an initially neutral stimulus (conditioned stimulus or CS) becomes associated with a meaningful stimulus (unconditioned stimulus or US) that elicits a specific response. Through repeated pairings of the CS and the US, the individual learns to associate the CS with the US and develops a conditioned response (CR) to the CS alone.

In the famous example of Pavlov’s dog, Ivan Pavlov paired the sound of a bell (CS) with the presentation of food (US), which naturally caused the dog to salivate. After multiple pairings, the dog started to associate the sound of the bell with the impending arrival of food. Eventually, the dog began to salivate (CR) in response to the bell alone, even without the food being presented. The bell had become a conditioned stimulus that triggered a conditioned response.

Real-life examples of classical conditioning can be found in various contexts. Here are a few:

Advertising: Advertisers often use classical conditioning techniques to associate their products with positive emotions or desirable outcomes. For instance, a commercial showing people enjoying a refreshing drink at the beach aims to create a positive emotional response to the product.

Phobias: Phobias can develop through classical conditioning. If someone has a negative experience, such as being attacked by a dog, they may develop a fear response (CR) to dogs (CS) even in the absence of any actual danger.

Allergies: People with allergies can develop conditioned responses to stimuli associated with their allergens. For example, someone allergic to cats may start sneezing (CR) when they encounter cat fur (CS) due to the association formed between the two.

Remember that classical conditioning involves the process of learning associations between stimuli and responses, leading to a conditioned response to a previously neutral stimulus.

Let’s move on to the next question:

Can you define the term “schema” and provide an example?

User:

schema is a mental structure to hold together perceptions and thoughts concerning a situation. for example when we enter a class we expect the proffessor to teach and the students to pay attention and make questions

ChatGPT:

Good job with your definition of schema! Just a small refinement: a schema is a mental framework or organized structure that helps us make sense of and interpret information about the world. It consists of our pre-existing knowledge, beliefs, expectations, and experiences related to a particular concept, object, or situation.

Your example of a classroom schema is quite accurate. When we enter a class, our schema for a classroom includes expectations such as the professor teaching, students paying attention, and the interactive exchange of questions and answers. This schema helps us navigate and understand the classroom environment more efficiently.

Schemas play a crucial role in cognitive processing by allowing us to quickly interpret and categorize new information based on our existing knowledge and expectations. They help fill in gaps, guide our attention, and influence our memory and interpretation of events.

Now let’s move on to the next question:

What is the dual-process theory of decision-making?

User:

it refers to the double proceses of 1) making decissions by direct association situation-decision and 2) making decisions by deep reasoning on a model of the situation

ChatGPT:

Your response touches upon the general idea of the dual-process theory of decision-making, but let’s expand and clarify it further.

The dual-process theory of decision-making proposes that there are two distinct cognitive processes involved in decision-making:

Intuitive (or automatic) processing: This process involves making decisions quickly and effortlessly based on immediate associations and heuristics. It relies on past experiences, emotions, and subconscious biases. It is often referred to as “System 1” thinking. Intuitive processing is more prone to biases and can lead to impulsive or instinctive decisions.

Reflective (or controlled) processing: This process involves making decisions through deliberate and conscious reasoning. It relies on analytical thinking, logical evaluation, and considering various options and consequences. It is often referred to as “System 2” thinking. Reflective processing requires more effort and time and is typically used when decisions are complex or require careful consideration.

In essence, the dual-process theory suggests that decision-making can involve a combination of both intuitive and reflective processes, depending on the situation and individual preferences. It recognizes that some decisions are made quickly and intuitively, while others require deliberate and thoughtful analysis.

It’s important to note that the interaction between these two processes can vary among individuals and contexts. Some decisions may heavily rely on one process over the other, while some decisions may involve a balance or interplay between both processes.

Now let’s move on to the next question:

Describe the concept of attentional processes and provide an example.

User:

attentional processes ar mental processes that focus the effort on a concrete subset of the perceptions that are more relevant. For example when someone calls us by our name in a party we pay attention to this person speech and not others

ChatGPT:

You’re on the right track with your definition of attentional processes. Let’s refine it a bit further:

Attentional processes are cognitive mechanisms that enable us to selectively focus our awareness and cognitive resources on specific stimuli or aspects of our environment. Attention helps filter and prioritize incoming sensory information, allowing us to allocate our mental effort to the most relevant or important information at any given moment.

Your example of someone calling us by our name in a party is a good illustration of attentional processes. In that situation, our attention is directed to the person who called our name, and we prioritize their speech over other conversations or ambient noise. Our attentional processes enhance the processing and perception of relevant information while suppressing or filtering out irrelevant or distracting stimuli.

Attentional processes can operate at different levels and can be influenced by factors such as novelty, salience, personal relevance, and task demands. They play a crucial role in various cognitive tasks, including perception, memory, problem-solving, and decision-making.

Let’s move on to the next question:

Explain the difference between implicit and explicit memory.

User:

implicit means that we cannot directly associate memory elements to the perceived facts. Explicit means that this connection is direct and observable

ChatGPT:

Your answer captures the essence of the difference between implicit and explicit memory, but let’s provide a more comprehensive explanation:

Implicit Memory: Implicit memory refers to the type of memory that operates unconsciously and influences our behavior or cognitive processes without our awareness. It involves the retention and utilization of information obtained from past experiences, but it does not require conscious effort to retrieve or express that information. Implicit memory is often expressed through improved performance or changes in behavior, without explicit awareness of the underlying memory content. Examples of implicit memory include skills we have acquired, such as riding a bicycle or playing an instrument, or the ability to navigate familiar routes without conscious recall of specific details.

Explicit Memory: Explicit memory, also known as declarative memory, involves the conscious recollection and intentional retrieval of information from past experiences. It is the type of memory that we can consciously access and verbalize. Explicit memory can be further divided into two subtypes:

Episodic Memory: Episodic memory refers to the memory of specific events or episodes that we have personally experienced. It involves the recollection of contextual details, such as time, place, emotions, and associated facts. For example, recalling your last vacation or a recent birthday celebration relies on episodic memory.

Semantic Memory: Semantic memory involves the storage and retrieval of general knowledge, concepts, facts, and meanings that are not tied to a particular event or personal experience. It encompasses general knowledge about the world, language, history, and various academic subjects. Remembering the capital of a country or knowing the definition of a word relies on semantic memory.

In summary, implicit memory operates unconsciously and affects behavior without explicit awareness, while explicit memory involves conscious recollection and intentional retrieval of information from past experiences, including episodic and semantic memory components.

Now let’s proceed to the final question:

What is the role of the prefrontal cortex in cognition?

User:

high level decision making, voluntary action, attention and self-reflection

ChatGPT:

You’ve identified several important roles of the prefrontal cortex in cognition. Let’s expand on them and provide a more comprehensive understanding:

The prefrontal cortex (PFC) plays a critical role in various cognitive functions. Some of its key roles include:

Executive Function: The PFC is involved in higher-order executive functions, such as planning, reasoning, problem-solving, and decision-making. It helps us set goals, prioritize tasks, and exert cognitive control over our behavior.

Working Memory: The PFC is crucial for working memory processes, which involve the temporary storage and manipulation of information. It helps us hold and manipulate relevant information in mind while engaged in complex cognitive tasks.

Attention: The PFC is involved in attentional processes, allowing us to selectively focus our attention on relevant stimuli while filtering out distractions. It helps allocate cognitive resources to prioritize information processing.

Inhibition: The PFC is responsible for inhibitory control, allowing us to suppress inappropriate or irrelevant responses. It helps regulate impulsive behaviors and enables response inhibition when necessary.

Self-Reflection and Metacognition: The PFC is involved in self-awareness, introspection, and metacognition—the ability to monitor and reflect on our own mental states, thoughts, and actions. It plays a role in self-evaluation, monitoring our performance, and adapting strategies accordingly.

Social Cognition: The PFC is also implicated in social cognition, which involves understanding and processing social information, such as emotions, intentions, and social norms. It helps us interpret and respond appropriately to social cues and engage in social interactions.

It’s important to note that the prefrontal cortex works in conjunction with other brain regions to support these cognitive functions. Disruptions or damage to the PFC can lead to deficits in decision-making, attentional control, impulse regulation, and other cognitive processes.

Great job with the questions! If you have any more or need further clarification, feel free to ask.

Online chat

The conversation can be viewed online at this URL:

https://chat.openai.com/share/fedba626-4838-4f14-840d-0e9ba12cd3cc

Doing the same for other domains

We can use the same prompt for any other domain, e.g. category theory [3].
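Since the prompt only varies in the domain name, it can be generated for any subject. The helper below is a hypothetical convenience function, not part of the original dialog:

```python
def tutoring_prompt(domain: str) -> str:
    """Build the knowledge-testing prompt for an arbitrary domain."""
    return (
        f"I am learning {domain}. Ask me a series of questions to test "
        "my knowledge. Identify errors and areas I do not know well. "
        "Correct my answers to make them better."
    )

print(tutoring_prompt("category theory"))
```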

User:

I am learning category theory. Ask me a series of questions to test my knowledge. Identify errors and areas I do not know well. Correct my answers to make them better.

ChatGPT:

Sure! I’d be happy to ask you a series of questions to test your knowledge of category theory. Let’s get started:

1. What is a category?
2. What are the main components of a category?
3. What is a morphism in a category?
4. What is the identity morphism and why is it important?
5. What is the composition of morphisms and what properties does it satisfy?
6. What is a functor and what does it preserve?
7. Can you give an example of a functor between two categories?
8. What is a natural transformation and what does it relate?
9. Can you explain the concept of isomorphism in a category?
10. What is an initial object in a category?
11. What is a terminal object in a category?
12. What is a product in a category?
13. What is a coproduct in a category?
14. Can you explain the concept of universal properties?
15. What are some examples of categories besides the category of sets?

Please provide your answers, and I’ll evaluate them and offer corrections or additional explanations as needed.

References

[1] Jay D. Friedenberg, Gordon W. Silverman, and Michael J. Spivey. Cognitive Science: An Introduction to the Study of Mind. SAGE Publications, Inc, Los Angeles, 4th edition, 2021.

[2] https://openai.com/blog/chatgpt

[3] S. Awodey. Category Theory. Oxford University Press, 2006.

Asking ChatGPT about our possibilities in CORESENSE

Much in the vein of the ancients asking the Pythia about their future, I asked ChatGPT about our possibilities in the CORESENSE project to create machines capable of understanding their environment. This is what it said:

Me: Can a machine understand its environment?

ChatGPT: Whether a machine can understand its environment depends on its capabilities and design. Some machines, such as certain types of robots or artificial intelligence systems, are designed to gather and process information about their environment in order to make decisions and take actions. These machines may be able to build a model or representation of their environment and use this information to navigate and interact with it. However, the extent to which these machines can truly “understand” their environment is a complex and contentious topic that is not fully understood. Some argue that machines can only simulate understanding and do not have the same level of conscious awareness and comprehension as humans. Others believe that machines can achieve true understanding through advanced algorithms and learning systems. Ultimately, the ability of a machine to understand its environment depends on its design and capabilities.

ChatGPT is an AI-based chatbot developed by OpenAI to hold a dialogue much à la Turing test. The program is a very large English language model that has been fine-tuned with both supervised and reinforcement learning techniques.

Autonomous robots’ personae

Autonomous robots are seen by engineers as mere human avatars deployed for the benefit of some absent principal. Other times, when bound to research in cognitive science, they are seen as mere plastic and metal incarnations of some fundamental, effortfully distilled essence of human beings.

They are not.

Siri Hustvedt – The Delusions of Certainty

Siri Hustvedt, The Delusions of Certainty, Simon & Schuster, 2016.

Thanks. After reading the book I am less deluded than when I started. I now know that Hustvedt is an educated, well-read erudite. I now know that Hustvedt doesn’t like men, information, computers, or technology in general. I now know that singular cases are not supportive of any theoretical stance, unless it is your stance. I now know that data is always biased if it takes you to an opinion different from the literature mainstream.

This book is worth reading. It is engaging, informative, and challenging. It contains tons of valuable references for anyone interested in studying the mind. It shows how delusional our own perspective of ourselves is. It clearly shows that the humanities are not keeping pace with the evolution of the human world and remain deeply bound to Renaissance ideas about humans.

Read it.

A new Horizon Europe project: CoreSense

We have signed a grant agreement with the EC to coordinate a new research project on intelligent robotics. The CoreSense project will develop a new hybrid cognitive architecture to make robots capable of understanding and being aware of what is going on. The project will start on October 1, 2022, will span four years (2022-2026), and joins six partners across Europe in an effort to push forward the limits of robotic cognition.

Cognitive robots are gaining autonomy, enabling their deployment in increasingly open-ended environments. This offers enormous possibilities for improvements in human economy and wellbeing. However, it also poses strong risks that are difficult for humans to assess and control. The trend towards increased autonomy brings increased problems concerning reliability, resilience, and trust for autonomous robots in open worlds. The essence of the problem can be traced to robots suffering from a lack of understanding of what is going on and a lack of awareness of their role in these situations. This is a problem that artificial intelligence approaches based on machine learning are not addressing well. Autonomous robots do not fully understand their open environments, their complex missions, their intricate realizations, and the unexpected events that affect their performance. An improvement in the understanding capability of autonomous robots is needed.

The CoreSense project tries to provide a solution to this need in the form of an AI theory of understanding, a theory of robot awareness, and engineering-grade reusable software assets to apply these theories in real robots. The project will build three demonstrators of its capability: augmenting the resilience of drone teams, augmenting the flexibility of manufacturing robots, and augmenting the human alignment of social robots.

In summary, CoreSense will develop a cognitive architecture for autonomous robots based on a formal concept of understanding, supporting value-oriented situation understanding and self-awareness to improve robot flexibility, resilience and explainability.

There are six project partners:

Universidad Politécnica de Madrid – ES – Coordinator
Delft University of Technology – NL
Fraunhofer IPA – DE
Universidad Rey Juan Carlos – ES
PAL Robotics – ES
Irish Manufacturing Research – IE

Principal Investigator: Ricardo Sanz