DARPA Workshop on Self-Aware Machines

In 2004, DARPA and John McCarthy organised a Workshop on Self-Aware Computing Systems because the topic of artificial self-awareness was gaining momentum. It was an invitation-only workshop held in Washington, D.C. Most participants came from the USA, but two came from Europe: Aaron Sloman from the UK and Ricardo Sanz from Spain.

Sanz, Hayes and Minsky at the DARPA Workshop on Self-Aware Computing Systems.

These were the thirty-three participants in the workshop:

Aaron Sloman, Eyal Amir, Push Singh
Bernard Baars, James Van Overschelde, Raghu Ramakrishnan
Brian Williams, John McCarthy, Ricardo Sanz
Greg Sullivan, Ken Forbus, Richard Scherl
Danny Bobrow, Tom Hinrichs, Richard Gabriel
Markus Fromhertz, Len Schubert, Richard Thomason
Deborah McGuinness, Lokendra Shastri, Robert Stroud
Drew McDermott, Michael Cox, Sheila McIlraith
Don Perlis, Michael Whitbrock, Stan Franklin
Mike Anderson, Mike Anderson, Stuart Shapiro
Tim Oates, Owen Holland, Yaron Shlomi
DARPA Workshop participants.

For three days we discussed the possibilities of and approaches to machine self-awareness, from the specific perspective of artificial intelligence. Twenty years later, the discussion remains at the same point. Not much progress has been made.

Maybe the problem is too difficult for human minds.

It has always been models

There is a relatively recent boom in model-based X: model-based development, model-based design, model-based systems engineering, model-based minds, …

In all the domains of engineering, it looks like we have just discovered the use of models to support our work. But this is, obviously, false. It has always been models. All around. All the time.

When an engineer-to-be starts their studies, the first things they learn are physics and mathematics: that is, how to model reality and the language in which to build those models. In recent parlance we would say that mathematics is a metamodel of reality: a model of the models that we use to capture the knowledge we have about an extant system, or the ideas we have about a system to be engineered.
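
A minimal sketch of what this means in practice, using invented numbers: the model below (Newton's law of cooling) captures the knowledge we have about an extant system, and the calculus it is written in plays the role of the metamodel. Nothing here is specific to the systems discussed in this text; it is only an illustration.

    from math import exp

    # Newton's law of cooling, dT/dt = -k (T - T_ambient), as a model of an
    # extant system. The parameter k encodes our knowledge of that system;
    # all values below are illustrative assumptions, not measurements.
    def predicted_temperature(T0: float, T_ambient: float, k: float, t: float) -> float:
        """Closed-form solution of the cooling model after t seconds."""
        return T_ambient + (T0 - T_ambient) * exp(-k * t)

    # Using the model to answer an engineering question about the system:
    # how hot is the housing ten minutes after shutdown?
    print(predicted_temperature(T0=80.0, T_ambient=20.0, k=0.002, t=600.0))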

The distinction between knowledge and ideas may seem relevant, but it is not so important. Both are mental content that may or may not be related, or correlated, to some reality out there. Both knowledge and ideas are models of realities-that-are or realities-to-be that are of relevance to us or our stakeholders.

It has always been models. In our minds and in the collaborative processes that we use to engineer our systems. Model-based X is not new. It is just good, old-fashioned engineering.


Smart Sensors and the Future of Machines

Adding artificial intelligence to machine sensors enables the development of sophisticated monitoring and control applications that will eventually lead to the construction of self-aware machines.

The first level addresses the problem of monitoring change

Integrating smart sensor and advanced measurement technology into machines and mechanical systems makes it possible to implement machine condition-monitoring applications. Condition-based maintenance systems reduce unscheduled downtime and optimise machine performance, lowering maintenance and repair costs. Measurement and sensor technology can also be used to increase machinery safety, thanks to the availability of information about the state of the system at any moment during operation. The availability of different technical infrastructures allows monitoring systems to be deployed on the machine itself, by means of embedded systems, or remotely, by means of distributed systems.
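
As an illustration only, the following sketch shows the kind of first-level condition monitoring described above: a routine, which could run embedded on the machine or remotely, computes a vibration indicator from a window of sensor samples and raises a maintenance alert when it crosses a threshold. The samples and the limit are hypothetical.

    import math

    VIBRATION_RMS_LIMIT = 4.5  # mm/s, hypothetical maintenance threshold

    def vibration_rms(samples: list[float]) -> float:
        """Root-mean-square value of a window of vibration samples."""
        return math.sqrt(sum(x * x for x in samples) / len(samples))

    def condition_alert(samples: list[float]) -> bool:
        """True when the condition indicator exceeds the maintenance limit."""
        return vibration_rms(samples) > VIBRATION_RMS_LIMIT

    # Synthetic window of samples standing in for a real accelerometer feed.
    window = [3.9, 4.2, 4.8, 5.1, 4.7, 4.4]
    if condition_alert(window):
        print("Schedule condition-based maintenance")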

The second level addresses the problem of anticipating and controlling change

Local exploitation of condition information, together with the availability of operational models of the machine, allows a more transparent integration of sensor information with the controller, enabling mechanisms for anticipating and controlling change. This makes it possible to develop protection or self-healing systems for machines. Building adaptive systems makes it possible to adapt the operation of the machine to the changing characteristics of the mechanical components that constitute it. The machine becomes self-aware of its own body. This also allows the machine's own models to be improved, taking into account the observed wear and variation, and enables the implementation of higher-performance adaptive controllers.
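
A minimal sketch, with invented numbers, of this second level: the machine keeps a simple model of its own actuator gain, corrects that self-model from observed behaviour as the mechanics wear, and uses the updated model to keep computing adequate commands.

    # Hypothetical self-tuning sketch: the machine models its own actuator as a
    # gain b (command -> response) and updates it with a normalized LMS step.
    class SelfTuningActuator:
        def __init__(self, b_initial: float, step: float = 0.5):
            self.b_est = b_initial      # the machine's model of its own body
            self.step = step            # 0 < step <= 1 keeps the update stable

        def update_model(self, command: float, response: float) -> None:
            """Correct the self-model from an observed (command, response) pair."""
            error = response - self.b_est * command
            self.b_est += self.step * error * command / (command * command + 1e-9)

        def command_for(self, desired_response: float) -> float:
            """Invert the current self-model to compute the control action."""
            return desired_response / self.b_est

    # Simulated wear: the true gain slowly degrades from 2.0 to 1.6.
    machine = SelfTuningActuator(b_initial=2.0)
    for true_gain in (2.0, 1.9, 1.8, 1.7, 1.6):
        u = machine.command_for(desired_response=10.0)
        y = true_gain * u               # what the worn mechanics actually deliver
        machine.update_model(u, y)
    print(round(machine.b_est, 2))      # the self-model has drifted toward the worn gain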

The third level addresses the problem of exploiting change

Finally, by integrating advanced sensors with artificial intelligence it is possible to build machines that exploit their own dynamics and those of their environment to seek optimal operating conditions. Machines with dynamic intelligent controls adapt to the evolution of environmental parameters and of the manufacturing process to approach near-optimal results. Incorporating flexible knowledge models of the ongoing processes in which the machine participates allows adaptable and flexible behaviour. The machine can detect alterations in itself, in the raw materials it operates on, or in the tasks it has to face within the production process.
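
And a sketch, again with purely illustrative numbers, of this third level: the machine repeatedly probes neighbouring operating points and moves towards whichever one measures better, so the setpoint follows an optimum that drifts as materials and tools change. The quality function stands in for whatever the real process would actually measure.

    # Hypothetical quality of one operating parameter (e.g. a feed rate).
    # In a real machine this would be measured, not computed; here the optimum
    # drifts over time to mimic changing raw material or tool wear.
    def measured_quality(setpoint: float, t: int) -> float:
        optimum = 5.0 + 0.01 * t            # slowly drifting optimum
        return -(setpoint - optimum) ** 2   # higher is better

    def hill_climb(setpoint: float, t: int, step: float = 0.1) -> float:
        """Move the setpoint towards whichever neighbour measures better."""
        up = measured_quality(setpoint + step, t)
        down = measured_quality(setpoint - step, t)
        return setpoint + step if up > down else setpoint - step

    setpoint = 3.0
    for t in range(400):
        setpoint = hill_climb(setpoint, t)
    print(round(setpoint, 2))               # has tracked the drifting optimum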

Architectures of Mind

The investigation into the best architectures for mind construction has too many intervening threads and interferences. Sometimes heterogeneous people from different domains get together to try to clarify some of the issues concerning mind architecture. In many of these gatherings, people from science and technology try to devise strategies for building computational models or system architectures to create artificial minds.

Proposals on mind engineering processes.

One of these efforts was the EU-funded ICEA project. ICEA (IST 027819, Integrating Cognition, Emotion and Autonomy) was a four-year project funded by the IST Cognitive Systems Unit. The project focused on brain-inspired cognitive architectures, robotics and embodied cognition, bringing together cognitive scientists, neuroscientists, psychologists, computational modelers, roboticists and control engineers. Its primary aim was to develop a novel cognitive systems architecture integrating cognitive, emotional and bioregulatory (self-maintenance) processes, based on the architecture and physiology of the mammalian brain.

The work was extremely interesting and the teams involved were mostly very active and dedicated. However, the results obtained -basically more elaborate scientific ideas- were not translated into a concrete architectural implementation of a system that could prove the insights. Different systems were implemented -real and simulated- but the overall picture was somewhat lost.

We need stronger team integration and, especially, more convergent objectives for these types of activities to produce the expected and potential results. This is not an easy task, however, as the goals of a cognitive psychologist and an industrial control engineer seem so far apart that convergence sounds more like a myth than a real, strategic possibility.

All this said, something is fully basic: technology starts from science. Mind engineering must start from mind science, and hence convergence is necessary. This will only happen if we broaden the targets and pursue a General Theory of Mind not trapped in the details of rat brains or the computer-laden hearts of humanoid robots.

Questions on Mind Theory

With Jaime Gómez

In the recent and not so recent past, several formal and abstract models have attempted to shed light on the core topics of Mind and Brain, Robotics and Artificial Intelligence. This has created a vast proliferation of published information, which currently lacks any single dominant model for understanding mental processes.

The Journal of Mind Theory we are trying to create is an attempt to tackle these problems.

The aim of the JMT journal is to consolidate and explore these formal and abstract tools for modeling cognitive phenomena, creating a more cohesive and concrete formal approach to understanding the mind/brain, and striving for precision and clarity in this topic of interest.

What follows is a list of questions posed by Jaime Gomez (JG) and answers from Ricardo Sanz (RS) on these issues.

The questions

JG: First off, for putting things in perspective, there seems to be some skepticism about the usefulness of formal approaches. Is formal logic the best mode of thinking about mental processes? Are the grounds of validity of the laws of logic to be found in language, in conceptual structures, in the nature of representation, in the world, or where?

RS: Formal logic is an abstract framework and, as such, the grounds of its validity are to be found in its own structure. The programs of Frege and Hilbert established this thread, and the axiomatisations of Russell & Whitehead or Peano reflected it into logic and set theory; a program that was partially broken by Gödel's results. The question is whether the formal can bear any strong relation to the real. My belief is that the answer is yes. The reason for believing this is a question of plain evidence: the laws of physics seem to be in strong correlation with reality (cf. the bewilderment shown by Wigner). And it is not just a question of approximation -we can always approximate any data set with an arbitrarily complex function- it is a question of how simple, and at the same time precise, the laws are on both the formal side and the real side.
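
The point about approximation versus simplicity can be made concrete with a small synthetic experiment (the data, the noise level and the polynomial degree below are arbitrary choices of mine, and NumPy is used only for the fitting): a high-degree polynomial can match noisy free-fall measurements at least as well as the two-parameter law d = ½gt², but while the simple law keeps predicting sensibly outside the measured range, the polynomial is free to wander.

    import numpy as np

    # Synthetic "measurements" of a falling body over the first two seconds.
    rng = np.random.default_rng(0)
    g = 9.81
    t = np.linspace(0.1, 2.0, 20)
    d = 0.5 * g * t**2 + rng.normal(0.0, 0.05, t.size)   # small measurement noise

    # An arbitrarily complex approximation: a degree-7 polynomial fit.
    complex_fit = np.polyfit(t, d, 7)

    # The simple law, with g re-estimated from the same data by least squares.
    g_est = 2.0 * np.sum(d * t**2) / np.sum(t**4)        # best fit for d = (g/2) t^2

    # Both describe the measured range well; extrapolating to t = 5 s is another story.
    print("polynomial predicts:", np.polyval(complex_fit, 5.0))
    print("simple law predicts:", 0.5 * g_est * 5.0**2)  # close to the true ~123 m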

As for the question of whether formal logic is the best mode of thinking about mental processes, I think the answer is both yes and no. Yes, because mental processes are informational processes and, as such, logic has a crucial role to play there. No, because there are other, more powerful mathematical formalisms that better fit the descriptions we are looking for. The grounds of validity of the laws of logic -or better, of mathematical physics- are to be found in the existence of isomorphic, shared structures between formalisms and reality.

JG: Are embodied and situated approaches more relevant than the use of formal tools for the modeling of biological phenomena, in particular mental processes?

RS: I don't really understand what "embodied" and "situated" mean. In a very precise sense, all real systems are embodied and situated. They all have a body and are placed somewhere. For some authors, "embodied" does not mean just "having a body" but having an operation driven by a "mind" that is scattered through all the body. I cannot but agree with this spread-mind idea, in the understanding that minds are informational-control processes distributed through all the body. The problem is then not the question of "embodiment" but the very possibility of the existence of "non-embodied" minds. There is no way of having a real system that is purely abstract. Abstractions are necessarily reified if they exist. Therefore, embodied mind vs formal mind is a false dichotomy. AI systems based on inference engines are as embodied as any robot, in a strong ontological sense. Formal tools are used to think about systems and are then mapped into embodied and situated realizations.

A different consideration places the distinctions embodied/non-embodied and situated/non-situated on the side of the thinker, scientist or engineer, and not in the target system itself. This means that there is a way of thinking about and building robots that may be labeled "embodied and situated robotics". The same can be said for the analysis of living systems or for theorizing. The problem here is very simple indeed. What can we say of a model of a system that puts the mind in the brain and not throughout the body? What we can say about this kind of model will depend on the system being modeled: if in this system the information and control processes happen in the brain, the model may be good; if there are information and control processes beyond the brain, the model is certainly bad.

So, there is no such thing as "embodied modeling"; there are just good models and bad models, and what the "embodied and situated" approach has discovered is a blatant aspect of systems: dynamical phenomena -especially those driven by information- can happen in all subsystems. This is nothing new, but common understanding in all science and engineering and a central topic of control theory: controllers -minds- must necessarily take into account the dynamics of the body in order to control it properly. Thinking that a controller can move a robot arm to perform any task in the absence of bodily considerations is not just "non-embodied robotics"; it is simply bad engineering. Something that lurks here is ignorance of a simple fact: given a concrete system, not all behaviors are dynamically possible, and hence a working mind cannot actually decide upon the proper actions in the absence of knowledge of the dynamics. This can be read as "minds cannot be separated from bodies", as embodiers do, or it can simply be read as "controllers must take dynamics into account". Nevertheless, this is not a new insight, but common currency in cybernetics and control engineering.
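
To put the control-engineering reading in concrete terms, here is a minimal, hypothetical single-link arm: a PD controller that ignores the body's gravity torque settles away from the target, while the same controller plus a model of the body settles on it. All parameters are invented for the illustration.

    import math

    # Invented single-link arm: inertia J, mass m, centre-of-mass distance l.
    J, m, l, g = 0.05, 1.0, 0.3, 9.81
    KP, KD = 8.0, 1.0
    TARGET = math.pi / 4                     # desired joint angle (rad)

    def settle(use_body_model: bool, steps: int = 20000, dt: float = 0.001) -> float:
        """Simulate the closed loop and return the final joint angle."""
        theta, omega = 0.0, 0.0
        for _ in range(steps):
            torque = KP * (TARGET - theta) - KD * omega
            if use_body_model:
                torque += m * g * l * math.sin(theta)   # gravity compensation
            # Body dynamics: J * domega/dt = torque - m*g*l*sin(theta)
            omega += dt * (torque - m * g * l * math.sin(theta)) / J
            theta += dt * omega
        return theta

    print("ignoring the body :", round(settle(False), 3))   # settles short of the target
    print("using a body model:", round(settle(True), 3))    # settles at ~0.785 rad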

So the question of embodiment is just whether or not bodily dynamics is taken into account when acting. And the question of situatedness is just whether or not environmental dynamics is taken into account when acting -the analysis being similar to that of embodiment.

This is so for both the artificial and the natural. The biological phenomenon is a particular case of this more general phenomenon of bodily/world dynamics being of relevance for bodily/world behavior. The "embodied and situated" thinking about biological phenomena or robot construction is hence just trying to avoid the naïve approaches of the illiterate.

JG: The "two cultures" conflict that C.P. Snow pointed out looks far from being resolved; rather, "the cultural panorama" seems to be more and more atomized within each of Snow's cultures. The practitioners of science and engineering in the cognitive sciences seem to diverge and question each other's methods and results rather than reaching any joint consensus.

If the Sciences seek to understand the physical world and Engineering seeks to build better systems, is it justifiable to build artificial systems designed to perform tasks that are already easily accomplished by human beings? (We are referring here to whether building a humanoid robot that pours a cup of tea properly or walks straight sheds light on the sensorimotor mechanisms that humans use to carry out such tasks.)

RS: The case of the two cultures is a case of misunderstanding what the cultures are. They are not separated for educational reasons, but because they are incommensurable: they have very different purposes. In a rough analysis, the people in both worlds -the scientists and the humanists- try to make their living in a particular niche. In a sharper analysis, the "scientific" culture has as its single objective making life easier (this is then decomposed into sub-objectives like understanding how the world works or moving water to our homes). The "humanities" culture has as its objective the personal promotion of each author in a cultural context (in a sense it is mostly show business).

In another reading, the humanities could be understood as the engineering of cultural assets of experiential value; however, the lack of solid theories has driven them to create self-perpetuating myths like the many arts, religions, or political regimes. The arguable attempt of the humanities to build an understanding of the human -a scientific endeavor indeed- is devastated by the lack of objective decision-making processes between theories. The proper way of understanding man is cybernetics (in McCulloch's words, "understanding human understanding"), but the humanities tend in general to neglect the role that scientific knowledge about humans has to play in their business.

This is a difficult gap to fill because the problem of scientific theorizing about human thoughts and phenomenologies is daunting. Scientists are not willing to risk or waste their careers on a task with so little hope of success, and humanists have the interest but lack the competences for the necessary work.

The question of whether it is justifiable to build artificial systems designed to perform tasks that are already easily accomplished by human beings can be answered in this Snowean-gap context. The question of whether it makes sense to build a machine to better understand humans has a simple answer: yes. Our mathematical incompetence to solve -formally- some human-systems problems makes construction and experimentation necessary to explore -physically- the enormous range of design alternatives.

The theories of “the human” that we may have are of three kinds:

  • Rigorous mathematical theories -as those of physics- that we cannot solve analytically except in their simplest forms, far from the complexities of a full-fledged human mind.
  • Literary theories from the humanities, lacking the necessary intersubjectivity and positive character.
  • Executable models, which are reifications -usually in simplified form- of mathematical theories, to be used in the performance of experiments.

Obviously, the best to have are the rigorous mathematical theories -for the purposes of science, not for the self-promotional purposes of the humanities- because they would be universally predictive. However, the possibility of analytically solving billions of simultaneous Izhikevich neuron equations is well beyond reach. On the other side, the executable models only make particular predictions -of no universal value- but at least they give us something of objective value.
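
For a sense of scale, one of the Izhikevich neuron equations mentioned above can be integrated in a few lines; the parameters are the standard regular-spiking values from Izhikevich's 2003 paper, and the input current is an arbitrary constant of my choosing. A brain-scale simulation multiplies this loop by tens of billions of coupled copies, which is the computational wall referred to here.

    # Izhikevich (2003) simple neuron model, regular-spiking parameters:
    #   dv/dt = 0.04 v^2 + 5 v + 140 - u + I,   du/dt = a (b v - u)
    #   spike: if v >= 30 mV then v <- c, u <- u + d
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = -65.0, 0.2 * -65.0
    dt, I = 0.25, 10.0                        # ms step, constant input current
    spikes = 0

    for _ in range(int(1000 / dt)):           # one second of simulated time
        if v >= 30.0:
            v, u = c, u + d
            spikes += 1
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)

    print("spikes in one simulated second:", spikes)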

But the value of the executable models can only be such if their results are fed back into the very theory that is the original backdrop of the executable model. Only in this way will the construction of humanoid robots prove of any scientific value. However, the lack of rigorous specifications of this theoretical backdrop and of the model-associated simplifying assumptions converts most of the work on humanoid robotics into just a media show trying to profit from human empathy for humans. These "researchers" are much worse than the humanists working on their self-promotion, because they pretend to be doing real science -placing them somewhere between Frankfurt's bullshitters and plain liars.

JG: The Hard Problem of consciousness -how we explain a state of consciousness in terms of its neurological basis- is a very controversial topic for which philosophers and scientists have different approaches and answers. Philosophers like Ned Block argue that the claim that a phenomenal property is a neural property seems just as mysterious -maybe even more mysterious- than the claim that a phenomenal property has a certain neural basis. Do you think the Hard Problem of consciousness is a problem, a philosophical dilemma, or a scientific challenge at all?

RS: The problem of consciousness is no more and no less than a scientific problem. There are some observed regularities and we still lack a positive universal law that captures all of them. This problem is said to be hard because of the apparent difference between first- and third-person experiences. But there is no such thing as a third-person experience. The experience of dropping a bottle of wine from the top of the tower of Pisa is as first-personal as the experience of drinking the wine. The only issue at stake in first/third-person science is the abstract repeatability of experiences in controlled settings. By abstract repeatability I mean the experience described at a level of abstraction that gets rid of unnecessary details. In the case of the drop, we can abstract from the concrete tint of the sunlight, the position of the earth in its orbit or the concrete number and nationalities of the other tourists in the tower. If we describe the experience at a certain abstraction level -e.g. the number of milliseconds a clock ticked- we can expect to obtain some laws (obviously, if the world behaves in such a way).
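
As a concrete illustration of describing the drop at the right level of abstraction: once the sunlight and the tourists are abstracted away, the repeatable content of the experience reduces to two magnitudes, height and gravity (the tower height below is approximate, and air drag on the bottle is ignored).

    from math import sqrt

    g = 9.81          # m/s^2
    h = 56.0          # m, approximate height of the tower of Pisa
    t_fall = sqrt(2.0 * h / g)      # abstracted model of the drop, no air drag
    print(round(t_fall, 2), "s")    # roughly 3.4 s, whoever else is on the tower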

The separation of what is relevant and what is not, to achieve the required level of abstraction, is hence the cornerstone of shareable experiences -i.e. the very nature of science and engineering. This is a problem for consciousness research because the human brain is very complex and not easily accessible. The time will come when a deep understanding of brain structure will be ready to be used in the systematic analysis of massive data coming from real-time brain observation. Then we will be able to separate the wheat from the chaff and to establish rigorous correlations -i.e. scientific laws- between stimuli and qualia. The laws of redness will come. As will laws of love and selfhood. There is no mystery here, just ignorance and limited experimental capability.

JG: Another problem usually referred to in the philosophy-of-mind literature is the Problem of Access Consciousness: how can we find out whether there is a conscious experience without the cognitive accessibility needed for reporting that conscious experience, since any evidence would have to derive from reports that themselves derive from that cognitive access? Do you think the Problem of Access Consciousness is a problem, a philosophical dilemma, or a scientific challenge at all?

RS: This argument is permeated by two fallacies: the homunculus fallacy and the first-person fallacy. The latter refers to the false argumentation about the intrinsic difference between first-person experience and third-person experience. In the same sense that we can observe and measure someone digesting, we will be able to observe and measure someone feeling. The former comes from thinking that there is a part of our brain/mind that is "me" and that the rest is something that I own and/or use. Obviously, language cannot tell us about all that is going on in the brain (or the body as a whole). When we use language to "talk with a person", what we are doing is indeed "talking with a fragment of the person", but identifying that fragment with the person itself is a mistake. This means that, in general, reports on consciousness are necessarily fragmentary because the information available to the reporting mechanism is partial.

With the development of a general scientific theory of consciousness and advances in experimental resources (see the previous question), we will be able to tell scientifically -in third-person lingo- when someone is having a particular experience. The path to follow will be similar to our present capability of saying when someone is suffering an epileptic seizure or a heart attack. Experimental signals will tell us whether some phenomenon is happening and to what degree. Verbal reports will then no longer be necessary.

JG: In the celebrated The Structure of Scientific Revolutions, T.S. Kuhn argued that science does not progress via a linear accumulation of new knowledge, but undergoes periodic revolutions or paradigm shifts. Three main stages can be distinguished in science. First comes prescience, which lacks a central paradigm. This is followed by normal science, when scientists attempt to validate observed facts within the paradigm. In this stage, the failure of a result to conform to the paradigm is seen not as refuting the paradigm, but as the mistake of the researcher. Thus, as anomalous results continue to be produced, science reaches a crisis, at which point revolutionary science leads to a new paradigm, which subsumes the good results of the old paradigm along with the anomalous results into one framework. Is the Kuhnian paradigm an inappropriate metaphor for the working of the human mind en soi même? Do you think that Kuhn's account of the development of scientific paradigms provides significant insights into the current state of the cognitive sciences? Which of the three phases are we in now?

RS: The search for a theory of mind is indeed the very elucidation of the nature of science: correlation between knowledge and reality.

The prescience phase could be equated to Pinker's blank slate, but I don't think our minds start from scratch. Our genetic material takes us directly into a "normal-science phase" of the mind, which corresponds to the normal operation of a brain when using established knowledge and finely tuning it to the concrete environment of the agent. Brain revolutions happen continuously in the mind/brain, driven by mismatches between the real and the expected. They are true revolutions in the catastrophic sense of Thom and take the form of non-linear attractors, as Freeman has shown us.

Concerning the application of Kuhn's model to cognitive science, I think that everyone agrees on the common paradigm given by theoretical neuroscience. In this sense, we are in the phase of normal science, but we have a big problem with the character of the anomalous results. For some researchers there are plenty of anomalous results, or even topics -e.g. the qualia issue- that the established paradigm does not address -cf. Chalmers' "hard problem".

For other researchers -including myself- we are still lacking some pieces in the global picture of the theory, but there are no such anomalous results. What we have is a critical incompetence in applying and deriving predictions from the theory when addressing problems of real scale. There are no anomalous results because there are no complete predictions concerning experiments with real systems. Projects like Blue Brain try to test this hypothesis by applying high-performance computation to the simulation of big portions of the brain.

JG: Undeniably, the Galilean distinction between primary and secondary properties supposed a great advance in science, because it permitted scientists to work out physical phenomena while avoiding scholastic disquisitions or being distracted by issues that were perceived by the church authorities as eminently human and therefore divine features. Do you think this Galilean distinction between quantitative properties and qualitative ones is still valid? How close are we to explaining the qualia in a quantitative manner?

RS: I don't think this distinction is any longer valid or useful. It is clear that in the past it helped focus on certain aspects of nature that were at the same time easier and more politically correct. However, this is not the case now that we focus on very complex properties -like consciousness- and there are no religious issues at stake.

All properties -whether primary or secondary- are measured in a process of interaction between an observed object and a measurement device. In a sense, the simpler the interaction process, the less effect can be attributed to the measurement device and hence the closer the measurement is to being a measurement of an intrinsic property of the object (with the necessary provisions for Heisenberg's uncertainty). In this very sense, qualia are abstractions -higher-level measurements- derived from interactions between the sensed object and the sensing agent -a grown-up device, indeed.

The explanation of human qualia in a quantitative manner is certainly coming; but it is necessary to perform two previous steps:

  1. The formulation of a theoretical model of qualia of universal character -i.e. not chauvinistically anthropomorphic or animalmorphic. This is under way in the theoretical consciousness community and will coalesce in some years.
  2. The development of detailed measurement devices for neural activity, of higher spatial and temporal resolution, able to observe concrete individual neuronal assemblies in vivo. This is a very difficult problem and may not be solved for many years. However, this second step may not be necessary if the theory of qualia is solid enough to give precise accounts of all extant phenomena and to be accepted as a satisfactory explanation by the scientific community. It may still be needed to deal with the irreducible believers in the special nature of consciousness -the mysterians- who may only be convinced after the prediction and confirmation of suitable ad-hoc experimental tests.

JG: Aristotle claimed that definitions suppose the existence of some primitive concepts that cannot themselves be defined, otherwise we would never finish defining things. Do you find this approach "usable" in the current scientific paradigm? How does this claim relate to our discussion here?

RS: The end of the apparent infinite regression of definitions may be a set of closed laws that bind magnitudes and have predictive power. This may be read as a self-sustaining network of definitions (as is the case in physics: f = m·a). My impression, however, is that the coming definitions in mind theory will be in terms of extant physics and information theory, in a totally reductionistic sense.

JG: A real epistemological understanding requires attention not only to the propositions known or believed, but also to knowing subjects and their interactions with the world and each other. All serious empirical inquirers -historians, literary scholars, journalists, artists, etc., as well as scientists- use something like the "hypothetico-deductive method". How does someone's seeing or hearing contribute to the warrant of a claim when key terms are learned by association with these observable circumstances?

RS: The theory of mind we are looking for will represent the convergence and resolution of ontology and epistemology into one single theory. The theory of mind will indeed explain and predict how a knowing subject interacts with the world in a meaningful sense. The key here will be the provision of a theory of mind that is indeed a theory of science: how it is possible for knowledge of something to be correlated with the reality of this very something.

The way in which this epistemological-ontological consilience is going to happen can be captured in a simple vision: Nature is organized and what actually happens rigorously follows laws. There are no surprises or miracles. The question of the exact nature of these laws -e.g. whether they are probabilistic or not- is irrelevant for our theoretical purpose. The only requirements on these laws are that they are predictive -to be used as anticipatory tools- and that they are knowable -i.e. that they can be captured in an information-control infrastructure. In this sense we can trust what someone has learnt -by building associations among observable circumstances- if we are able to discount from the learned laws the concrete, particularity-laden distortions coming from the individual processes of perception and action.

JG: How can works of imaginative literature or art convey truths they do not state? Could incorporating this non-formal, more abstract trajectory possibly be useful? How does the precision sought by a logician differ from that sought by a novelist or poet?

RS: In the end, the problem of conveying truths is a problem of conveying a particular abstract form or structure. The vehicle can be directly abstract and tightly correlated with the aspects and complexities of the truth at hand (cf. Wigner's comments on the effectiveness of mathematics in physics). But the vehicle can also be less abstract, more concrete and experiential, and still convey the form that constitutes the truth to be transmitted. The discovery of truth in the arts will hence try to get rid of the details of the medium and even of the concrete message -remember McLuhan's analysis- and focus on the abstractions reified in the message. This implies a voyage from the minute details of the physicality into the transcendental forms of the hierarchical abstraction. The main difference between the logician and the artist is not that they try to convey different truths, but that they have different strategies for packing them. Logicians strive for the truth as it is; artists want the truth too, but they enjoy more the process of unpacking it from the media.

Suggested Readings

Harry Frankfurt: On Bullshit
Eugene Izhikevich: Dynamical Systems in Neuroscience
Warren McCulloch: Embodiments of Mind
Marshall McLuhan: Understanding Media: The Extensions of Man
René Thom: Structural Stability and Morphogenesis
Walter J. Freeman: Neurodynamics
Eugene P. Wigner: The Unreasonable Effectiveness of Mathematics in the Natural Sciences
Steven Pinker: The Blank Slate