Asking ChatGPT about our possibilities in CORESENSE

Much as people in ancient times asked the Pythia about their future, I asked ChatGPT about our chances in the CORESENSE project of creating machines capable of understanding their environment. This is what it said:

Me: Can a machine understand its environment?

ChatGPT: Whether a machine can understand its environment depends on its capabilities and design. Some machines, such as certain types of robots or artificial intelligence systems, are designed to gather and process information about their environment in order to make decisions and take actions. These machines may be able to build a model or representation of their environment and use this information to navigate and interact with it. However, the extent to which these machines can truly “understand” their environment is a complex and contentious topic that is not fully understood. Some argue that machines can only simulate understanding and do not have the same level of conscious awareness and comprehension as humans. Others believe that machines can achieve true understanding through advanced algorithms and learning systems. Ultimately, the ability of a machine to understand its environment depends on its design and capabilities.

ChatGPT is an AI-based chatbot developed by OpenAI to hold a dialogue, much in the spirit of the Turing test. The program is built on a very large English language model that has been fine-tuned with both supervised and reinforcement learning techniques.

It has always been models

There is a relatively recent boom in model-based X: model-based development, model-based design, model-based systems engineering, model-based minds, …

In all domains of engineering, it looks as if we have just discovered the use of models to support our work. But this is obviously false. It has always been models. All around. All the time.

When engineers-to-be start their studies, the first things they learn are physics and mathematics: that is, how to model reality and the language in which to build those models. In recent parlance, we would say that mathematics is just a metamodel of reality: a model of the models we use to capture the knowledge we have about an extant system, or the ideas we have about a system to be engineered.

The distinction between knowledge and ideas may seem relevant, but it is not so important. Both are mental content that may or may not correspond to some reality out there. Both knowledge and ideas are models of realities-that-are or realities-to-be that matter to us or to our stakeholders.

It has always been models. In our minds and in the collaborative processes that we use to engineer our systems. Model-based X is not new. It is just good, old-fashioned engineering.