What do We Learn from Cognitive Neuroscience and the Science of Information Processing Structures? What do They Have in Common?

Figure 1: Information Processing Structures in the physical and digital worlds could be represented by named sets, knowledge structures and cognitive apparatuses to model, monitor and manage both the “self” and its interactions with its environment.

“Long before children learn how to read, they obviously possess a sophisticated visual system that allows them to recognize and name objects, animals, and people. They can recognize any image regardless of its size, position, or orientation in 3-D space, and they know how to associate a name to it.”

Stanislas Dehaene (2020) “How We Learn: Why Brains Learn Better than Any Machine…for Now” Viking, an imprint of Penguin Random House, LLC, New York. P 132.

“Moreover, when we assert that a named set (fundamental triad) is the most fundamental structure, it does not mean that it is the only fundamental structure in reality. There are other fundamental structures on different levels of reality. Fields in physics, molecular structures in chemistry, and the DNA structure in biology and genetics are fundamental (basic) in these fields. However, named sets (fundamental triads) form the physical block and constructing element for all those and many other structures. Consequently, named set (fundamental triad) is the most fundamental structure in the world of structures and thus in the whole world”

Burgin, M.S. (2011) “Theory of Named Sets” Nova Science Publishers, Inc. New York. P 599

“The cells in your head are reading these words. Think of how remarkable that is. Cells are simple. A single cell can’t read, or think, or do much of anything. Yet, if we put enough cells together to make a brain, they not only read books, they write them. They design buildings, invent technologies, and decipher the mysteries of the universe. How a brain made of simple cells creates intelligence is a profoundly interesting question, and it remains a mystery.”

Hawkins, Jeff. A Thousand Brains (p. 1). Basic Books. Kindle Edition.

Prologue

This post is aimed at a new generation of computer scientists and information technology professionals and introduces some new directions in which we process, communicate, and use information in real-time to make decisions that impact risk and reward outcomes in everyday life.

Our knowledge of information processing mechanisms stems from three important advances in:

  • Our understanding of the genome, neuroscience and cognitive behaviors of biological systems,
  • Our use of digital computing machines to unravel various mysteries about how our physical world works and to model, monitor and manage it, and
  • A new set of mathematical tools in the form of named sets, knowledge structures, cognizing oracles and structural machines which allow us to not only explain how information processing structures play a key role in the physical world but also to design and implement a new class of digital automata called autopoietic machines which advance our current state of information technologies by transcending the limitations of classical computer science as we practice it today.

This is not a tutorial or a scholarly discourse on these subjects. It is an attempt, as a novice, to understand the jargon, make sense of the concepts, and apply them. Learning is usually a circular process involving four steps. First, as novices, we discover the various terms considered jargon in a new domain. Second, as apprentices, we reflect on them and study them more deeply to connect the dots and understand the new concepts while relating them to our own knowledge. Third, we become experts as we start to apply the concepts in the real world and learn from mistakes. Fourth, our knowledge expands as we share it with others and discover new areas at the boundaries that have not yet been explored, and the process continues. Mastery of a particular domain comes from repeating this process as our knowledge expands.
This post is an attempt to chronicle my reflection process as I discover new vocabulary about information, its processing, communication, and use from many sources. I am sharing it in the hope that in the process I will discover some new boundaries to explore.

For those with a short attention span, here is a summary.

The theories of structural machines, triadic automata, autopoietic machines, and the “knowledge structure” schema and operations on them, so well articulated by Prof. Mark Burgin, provide the unified science of information processing structures (SIPS). SIPS allows the transition from data structures to knowledge structures, from Turing machines to triadic automata, and to computations that go far beyond the boundaries of the Church-Turing thesis when dealing with finite resources and their fluctuations. In addition, SIPS provides a cognitive framework that augments current non-transparent deep learning with model-based deep reasoning supported by deep knowledge, deep memory, and experience.

In essence, SIPS helps us in the following three areas:

  1. SIPS provides a theoretical framework to model and explain various findings from neuroscience touched upon in this post. It makes it possible to bring together various theories of how the genome, the genes, neural networks, and the brain provide autopoietic behavior using cortical columns, reference frames, and models of the “self” and its interactions with the physical world by means of the five senses.
  2. SIPS helps us design and implement a new class of autopoietic machines that go beyond the boundaries of classical computer science, paving the path to a new generation of information processing systems while utilizing current-generation systems, much as the mammalian brain utilized the functions the reptilian brain provided to build higher-level intelligence.
  3. SIPS allows us to design and implement an intelligent knowledge network, which integrates deep learning, deep memory, and knowledge from various domains, and provides a framework for deep reasoning to sense and act in real time, maintaining stability and managing the risk/reward-based behaviors of the system.

Autopoietic machines are built using the knowledge network, which consists of knowledge nodes and information-sharing links with other knowledge nodes. The knowledge nodes that are wired together fire together to manage the behavioral changes in the system. Each knowledge node contains hardware, software, and infware (a word introduced by Prof. Mark Burgin in his book on superrecursive algorithms [21]) managing the information processing and communication structures within the node. There are three types of knowledge nodes, depending on the nature of the infware:

  1. An autopoietic functional node (AFN) provides autopoietic component information processing services. Each node executes a set of specific functions based on the inputs and provides outputs that other knowledge nodes utilize.
  2. An autopoietic network node (ANN) provides operations on a set of knowledge nodes to configure, monitor and manage their behaviors based on the group-level objectives.
  3. A digital genome node (DGN) is a system-level node that configures a set of autopoietic sub-networks, monitors them and manages their behaviors based on the system-level objectives.

Each knowledge node is specialized by its infware, which defines the knowledge structures that model downstream entities/objects, their relationships, and their behaviors, executed using appropriate software and hardware. The infware contains the knowledge for obtaining resources and for configuring, executing, monitoring, and managing the downstream components based on the node-level objectives.
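To make the three node types concrete, here is a minimal Python sketch of how a knowledge network of AFNs, ANNs, and a DGN might be composed. All class names, method names, and interfaces are my illustrative assumptions; the post does not prescribe an implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Infware:
    """Knowledge structures: modeled entities, their relationships, and behaviors."""
    entities: Dict[str, dict] = field(default_factory=dict)
    relationships: List[Tuple[str, str, str]] = field(default_factory=list)
    behaviors: Dict[str, Callable] = field(default_factory=dict)

class AutopoieticFunctionalNode:
    """AFN: executes a specific function on inputs; outputs feed other knowledge nodes."""
    def __init__(self, name: str, function: Callable):
        self.name = name
        self.infware = Infware()
        self.function = function

    def execute(self, inputs):
        return self.function(inputs)

class AutopoieticNetworkNode:
    """ANN: configures, monitors, and manages a group of AFNs against group objectives."""
    def __init__(self, name: str, nodes: List[AutopoieticFunctionalNode]):
        self.name = name
        self.nodes = nodes

    def monitor(self, inputs):
        # Collect the behavior of every downstream AFN in the group
        return {n.name: n.execute(inputs) for n in self.nodes}

class DigitalGenomeNode:
    """DGN: system-level node configuring and managing autopoietic sub-networks."""
    def __init__(self, subnetworks: List[AutopoieticNetworkNode]):
        self.subnetworks = subnetworks

    def manage(self, inputs):
        # System-level objectives flow down; functional outputs flow back up
        return {s.name: s.monitor(inputs) for s in self.subnetworks}
```

Composing a DGN from ANNs that each manage AFNs mirrors the hierarchy of Figure 2: configuration flows downward and monitored behavior flows upward through the network.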

Figure 2: The Knowledge Network

Figure 2 depicts the structure of a knowledge network implemented in the form of a DGN which in turn is composed of two ANNs. Each ANN manages downstream AFNs. The AFN is designed to execute appropriate software and hardware to deliver the functional behaviors. The hardware and software resources are obtained from conventional computing structures (IaaS, PaaS and application workloads).

Introduction

As Stanislas Dehaene [1] points out “Every single thought we entertain, every calculation we perform, results from activation of specialized neuronal circuits implanted in our cerebral cortex. Our abstract mathematical constructions originate in the coherent activity of our cerebral circuits, and of the millions of other brains preceding us that helped shape and select our current mathematical tools.”
Individual thoughts, concepts, and the number sense arising from neural activity are composed into higher-level complex structures, which rise through our consciousness and are communicated through our cultures, propagating via a multitude of individual brains that use and even refine them. The resulting mathematical structures are now allowing us to decipher how brain structures function, aided by experimental observations from positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) of how the brain codes our thoughts.


“There is a story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. “How can you know that?” was his query. “And what is this symbol here?” “Oh,” said the statistician, “this is pi.” “What is that?” “The ratio of the circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the classmate, “surely the population has nothing to do with the circumference of the circle.””


This story is from Eugene Wigner’s talk [2] titled “The Unreasonable Effectiveness of Mathematics in The Natural Sciences.” He goes on to say “The first point is that mathematical concepts turn up in entirely unexpected connections. Moreover, they often permit an unexpectedly close and accurate description of the phenomena in these connections. Secondly, just because of this circumstance, and because we do not understand the reasons of their usefulness, we cannot know whether a theory formulated in terms of mathematical concepts is uniquely appropriate.”
Once again mathematics has shown up in an unexpected connection dealing with information processing structures. We describe here the new mathematics of named sets, knowledge structures, generalized theory of oracles and structural machines and how they allow us to advance digital information processing structures to become sentient, resilient and intelligent. Sentience comes from the Latin sentient-, “feeling,” and it describes things that are alive, able to feel and perceive, and show awareness or responsiveness. The degree of intelligence (the ability to acquire and apply knowledge and skills) and resilience (the capacity to recover quickly from non-deterministic difficulties without requiring a reboot) depend on the cognitive apparatuses the organism has developed.


While many scholarly books, articles, and research papers published in the last decade explain both the theory and a few novel implementations [3] that demonstrate the power of the new mathematics of information processing structures, they are not yet well understood by many. There is a tendency on the part of classical computer scientists and current-day information technology practitioners to ignore warnings about the limitations of classical computer science when dealing with information processing structures and the large fluctuations disturbing them.


Here is an open secret that is up for grabs for any young computer scientist or IT professional with the curiosity to make a major impact in shaping next-generation information processing systems which are “truly” self-managing and therefore sentient, resilient, and intelligent. What one needs is an open mind and a willingness to challenge the status quo touted by big companies with a lot of money and marketing prowess. In this post, I will try to articulate what I understood from reading about recent advances in the theory of how biological structures process information and the new science of information processing structures that allows us to design autopoietic systems that imitate them. The term autopoiesis refers to a system capable of reproducing and maintaining itself. As Maturana and Varela put it, “An autopoietic machine is a machine organized (defined as a unity) as a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in the space where they (the components) exist by specifying the topological domain of its realization as such a network.”

What do We Learn from Cognitive Neuroscience?

An excellent perspective on the latest contributions of cognitive psychology, neuropsychology, and brain imaging to our understanding of learning and consciousness is given by Dehaene and Naccache [4]. Recent understanding of how the brain functions reveals that:

  1. Number sense is the result of an innate brain activity: Numerical knowledge is embedded in a panoply of specialized neuronal circuits, or “modules.” More likely, a brain module specialized for identifying numbers is laid down through the spontaneous maturation of cerebral neuronal networks, under direct genetic control, and with minimal guidance from the environment [5].
  2. Reading is an evolutionary outcome of the adaptation of brain circuits using the plasticity of the brain: According to Stanislas Dehaene [6], “Reading, although a recent invention, lay dormant for millennia within the envelope of potential inscribed in our brains. Behind the diversity of human writing systems lies a core set of universal neuronal mechanisms that, like a watermark, reveal the constraints of human nature.”
  3. Knowledge about itself and its interactions with the environment is distributed in the brain with connections between thousands of complimentary models [7]: “Reference frames are not an optional component of intelligence; they are the structure in which all information is stored in the brain. Every fact you know is paired with a location in a reference frame. To become an expert in a field such as history requires assigning historical facts to locations in an appropriate reference frame. Organizing knowledge this way makes the facts actionable. Recall the analogy of a map. By placing facts about a town onto a grid-like reference frame, we can determine what actions are needed to achieve a goal, such as how to get to a particular restaurant. The uniform grid of the map makes the facts about the town actionable. This principle applies to all knowledge.”
  4. Consciousness is a brain-wide information sharing activity [8]: “In fact, consciousness supports a number of specific operations that cannot unfold unconsciously. Subliminal information is evanescent, but conscious information is stable—we can hang on to it for as long as we wish. Consciousness also compresses the incoming information, reducing an immense stream of sense data to a small set of carefully selected bite-size symbols. The sampled information can then be routed to another processing stage, allowing us to perform carefully controlled chains of operations, much like a serial computer. This broadcasting function of consciousness is essential. In humans, it is greatly enhanced by language, which lets us distribute our conscious thoughts across the social network.”

In this section, we will elaborate on some of these observations and identify the common abstractions required for modeling autopoietic structures that represent knowledge about themselves and their environment, along with their process evolution behaviors. In the next section we will discern the new mathematics of information processing structures, which allows us to represent models of information processing structures with generalized schemas for autopoietic machines and operations on them. These models then allow us to create a cognitive framework that explains how consciousness works as an autopoietic information processing structure. It should explain global information sharing among autonomous, concurrent, and distributed processes which are autopoietic. These components execute functions (as nodes in a network), form the structure (the nodes sharing information via communication links), and exhibit process behaviors specifying their evolution based on the interactions among themselves and with their environment. The cognitive framework, in the form of a network of networks, allows modeling and representing knowledge structures and managing their evolution in the face of rapid fluctuations in the interactions among the components and their environment.

Number Sense as an Information Processing Structure:

All living beings are born with a basic number sense. According to Stanislas Dehaene [1], babies’ numerical inferences seem to be completely determined by the spatiotemporal trajectory of objects. “The newborn’s brain apparently comes equipped with numerical detectors that are probably laid down before birth. The plan required to wire up these detectors probably belongs to our genetic endowment. Indeed, it is hard to see how children could draw from the environment sufficient information to learn the numbers one, two, and three at such an early age. Even supposing that learning is possible before birth, or in the first few hours of life—during which visual stimulation is often close to nil—the problem remains, because it seems impossible for an organism that ignores everything about numbers to learn to recognize them. It is as if one asked a black-and-white TV to learn about colors! More likely, a brain module specialized for identifying numbers is laid down through the spontaneous maturation of cerebral neuronal networks, under direct genetic control, and with minimal guidance from the environment. Since the human genetic code is inherited from millions of years of evolution, we probably share this innate protonumerical system with many other animal species.”

In addition, the infant brain seems to be coded to rely on three fundamental laws. First, an object cannot simultaneously occupy several separate locations. Second, two objects cannot occupy the same location. Finally, a physical object cannot disappear abruptly, nor can it suddenly surface at a previously empty location; its trajectory has to be continuous.

Starting from the basic mental representation of numerical quantities that we share with animals, numerical efficacy evolves with brain structures that support oral and written numeration. Obviously, cultural intervention in the evolution of the human brain helped shape brain structures to improve this efficacy. “Across centuries, ingenious notation devices have been invented and constantly refined, the better to fit the human mind and improve the usability of numbers.”

Information Processing Structures and the Learning Process

What is learning, and how do we learn? How do babies observe the world and learn to deal with themselves as an object and its relationship with all other objects outside of themselves? Before we start teaching machines how to learn, we should first understand how sentient beings learn. This is the subject of the very insightful book “How We Learn” [16]. I will summarize a few lessons I gleaned from this book that are relevant to my understanding of how information processing structures encoded in the genome play a role in learning, and how they are relevant to designing the digital genome which allows us to create autopoietic machines. This new class of digital autopoietic machines goes beyond the current state of the art in designing information processing machines using symbolic computing and neural networks based on classical computer science [1].

According to Stanislas Dehaene [16], “to learn is to progressively form, in silicon and neural circuits alike, an internal model of the outside world.” To accomplish this, the brain has a “structured yet plastic system with an unmatched ability to repair itself in the face of a brain injury and to recycle its brain circuits in order to acquire skills unanticipated by evolution.”

The brain uses a set of neural structures that sense, collect, classify, and store information in the form of composable knowledge structures (a network of neural circuits modeling the objects, their relationships and behaviors) and uses them to generate hypotheses and reasoning also conceived and stored in the form of knowledge structures. The reasoning structures allow synchronizing the models with external reality using the compositional nature of these knowledge structures and correcting the models based on error-feedback. The richness of these models and their use in real-time information acquisition, storing, processing and taking action, provide the foundation for the genome-based living organism’s sentient, resilient and intelligent behaviors.
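The error-feedback correction described above can be caricatured in a few lines of Python: an internal model makes a prediction, compares it with an observation, and corrects itself by a fraction of the error. This is only a toy sketch of the idea (the scalar model and the learning rate are my illustrative assumptions), not a claim about how neural circuits actually implement it.

```python
def update_model(prediction: float, observation: float, learning_rate: float = 0.3) -> float:
    """Correct an internal estimate by a fraction of the prediction error."""
    error = observation - prediction           # mismatch between the model and sensed reality
    return prediction + learning_rate * error  # move the model toward the observation

# An internal estimate gradually synchronizing with an external reality of 10.0
estimate = 0.0
for _ in range(20):
    estimate = update_model(estimate, 10.0)    # each step shrinks the remaining error by 30%
```

After repeated cycles the estimate converges on the observed value, which is the essence of keeping an internal model synchronized with external reality through error feedback.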

The genome provides a basic set of knowledge structures that have been created through the cellular evolution processes. For example, “at birth, babies’ brains are already organized and knowledgeable. They know, implicitly, that the world is made of things that move only when pushed, without ever interpenetrating each other (solid objects) – and also that it contains much stranger objects that speak and move by themselves (people).” The genome contains the internalized knowledge of preceding generations in the form of hardwired genes and neural networks. The genome encodes in its DNA several kinds of knowledge structures:

  1. The knowledge structures required to use physical and chemical resources and processes to create both physical and cognitive structures of the cellular being with autopoietic behavior.
  2. The knowledge structures that provide the sense and perception using various physical structures belonging to the “self.”
  3. The knowledge structures that model, monitor and manage the stability of the overall “body” structure (the life’s processes) and
  4. The knowledge structures that map the relationships and behaviors of the body and the environment.

In the next section we will discuss our learnings from studies of the brain and the neocortex using PET and fMRI, and their relationship to the knowledge structures.

Distributed Knowledge Networks as Information Processing Structures

The book “A Thousand Brains” [7] provides a detailed map of how brain structures sense, classify, model and manage information about the body and its interactions with the environment. I will try to summarize my learnings from reading this book.

  1. Our intelligence stems from the activities in our brain, which consists of two parts, an old brain and a new brain called the neocortex, which are connected to each other and communicate through nerve fibers.
  2. “The neocortex is the organ of intelligence. Almost all the capabilities we think of as intelligence—such as vision, language, music, math, science, and engineering—are created by the neocortex.”
  3. The neocortex provides the framework for modeling the body and the outside world with which it interacts using the older brain that is directly connected to various parts of the body and manages the inputs and outputs through the five senses. The neocortex acts as a “sixth sense” by modeling, monitoring and managing the body and the external world.
  4. Thoughts, ideas, and perceptions are the activity of the neurons that are connected to each other and everything we know is stored in the connections between neurons. These connections store the model of the world that we have learned through our experiences. Every day we experience new things and add new pieces of knowledge to the model by forming new synapses. The neurons that are active at any point in time represent our current thoughts and perceptions.
  5. “The word “model” implies that what we know is not just stored as a pile of facts but is organized in a way that reflects the structure of the world and everything it contains.” Objects, their internal structures, and their interactions are modeled as entities, relationships, and behaviors. A behavior is a series of activities that take place in the system in response to a particular situation or stimulus.
  6. “The old brain contains dozens of separate organs, each with a specific function. They are visually distinct, and their shapes, sizes, and connections reflect what they do. For example, there are several pea-size organs in the amygdala, an older part of the brain, that are responsible for different types of aggression, such as premeditated and impulsive aggression.” In essence, the old brain is endowed with its own structure of autonomic components which provide specific functions that are performed using the body. This is accomplished through embedded, embodied, enactive, and extended (4E) cognition models of their own using the cortical columns. There are about 150,000 of these columns, which work semi-autonomously in their world-modeling activities.
  7. The neocortex learns a predictive model of the world (including the “self”) and these predictions are the result of structural reconfiguration of the neural networks.
  8. The predictive model is created using the cortical column’s ability to represent knowledge in the form of “reference frames.” “A reference frame tells you where things are located relative to each other, and it can tell you how to achieve goals, such as how to get from one location to another. We realized that the brain’s model of the world is built using map-like reference frames. Not one reference frame, but hundreds of thousands of them. Indeed, we now understand that most of the cells in your neocortex are dedicated to creating and manipulating reference frames, which the brain uses to plan and think.” This observation is very relevant in designing and implementing autopoietic machines using digital computers.
  9. A collection of reference frames provides a means to model various entities and objects, their relationships, and the movements and other behaviors that change the state of the world from one instant to the next. The difference between an entity and an object is that an entity is an abstract concept with attributes (such as a computer with memory and a CPU), while an object is an instance of an entity with an identity and two components: state and behavior.

The important lesson I take away from these observations is that the neocortex provides an integration of models of the “self” and its interactions with the external world, developed across all the knowledge acquired through myriad semi-autonomous cortical columns. It provides a predictive framework in real time to sense and act based on changes in its perception of the current state of the global model.
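Hawkins’s map analogy — facts placed at locations in a grid-like reference frame, making them actionable — can be sketched as a tiny data structure. The grid coordinates, the sample facts, and the route logic below are my illustrative assumptions, not an implementation from the book:

```python
from typing import Dict, List, Optional, Tuple

class ReferenceFrame:
    """A map-like reference frame: every fact is paired with a location on a grid."""
    def __init__(self):
        self.facts: Dict[Tuple[int, int], str] = {}

    def place(self, location: Tuple[int, int], fact: str) -> None:
        self.facts[location] = fact

    def locate(self, fact: str) -> Optional[Tuple[int, int]]:
        return next((loc for loc, f in self.facts.items() if f == fact), None)

    def route(self, start_fact: str, goal_fact: str) -> List[str]:
        """Derive actions (grid moves) from relative locations: facts made actionable."""
        (x0, y0), (x1, y1) = self.locate(start_fact), self.locate(goal_fact)
        moves: List[str] = []
        moves += ["east" if x1 > x0 else "west"] * abs(x1 - x0)
        moves += ["north" if y1 > y0 else "south"] * abs(y1 - y0)
        return moves

# Facts about a hypothetical town placed onto the grid
town = ReferenceFrame()
town.place((0, 0), "home")
town.place((2, 1), "restaurant")
```

Because the facts sit at locations, a goal like “get to the restaurant” becomes a computable sequence of moves derived from the relative positions, which is what makes knowledge stored this way actionable.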

Consciousness as an information processing structure designed for global optimization:

While consciousness is a very complex and controversial subject, we can discern some common themes from the writings of Jeff Hawkins and Stanislas Dehaene [7, 8 and 16].

  1. The controversy about our understanding of consciousness stems from two schools of thought: one in which consciousness may involve science that goes beyond the mere result of neural activity, and another in which it is the consequence of physical phenomena like any other and will eventually be understood with a proper theory that is consistent with observations. Such theories are emerging [10, 16, 17] based on recent observations from fMRI and PET studies. A very interesting and illuminating video summarizes some of these efforts (https://youtu.be/efVBUDnD_no).
  2. The emerging model of the brain, consisting of the old reptilian brain and the new mammalian brain and their interactions with the “self” and the external world, is throwing light on the nature and role of consciousness. The old brain, with a multitude of semi-autonomous cortical columns, is designed to process information from a multitude of sources filtered through the five senses of the body. The efficiency of these structures is achieved through specialization, separation of concerns, and adaptation through 4E cognitive processes. These cognitive processes allow the cortical columns to create models of the complex objects they sense and their movement. The information received through the senses is transformed into knowledge in the form of a neural network consisting of several hundred neurons, where each neuron is associated with a specific function required to model observed features, locations, and movements. The cortical columns are designed to optimize their tasks in performing their local mission. An interesting feature of cortical columns is that they all use the same mechanism of modeling knowledge, independent of the sensory mechanism from which the information is received or the nature of the content. It is the structure and its configuration that matter in creating reference frames.
  3. The new brain is designed to process information and create a global model of the “self” and its interactions with the outside world through the models received from the old brain. In addition, the new brain has to resolve any disputes that arise among the old brain’s cognitive functions and provide global optimization of the system’s evolution, with predictive reasoning based on global knowledge and history stored in memory in the form of neural networks.
  4. According to Stanislas Dehaene [8], “conscious perception transforms incoming information into an internal code that allows it to be processed in unique ways.” It fulfills an operational role. “Consciousness implies a natural division of labor. In the basement, an army of unconscious workers does the exhausting work, sifting through piles of data. Meanwhile at the top, a select board of executives, examining only a brief summary of the situation, slowly makes conscious decisions.”

The cognitive overlay provides self-regulation to achieve global optimization, based on a consensus approach among all the participating components, dealing with contention for resources, prioritization of various tasks, synchronization of distributed autonomous processes where necessary, and so on. These tasks are accomplished using the abstractions of addressing the various components, alerting, mediation, and supervision. Self-regulation rules are derived from the knowledge structures representing history and genomics. Global awareness and shared knowledge allow the system to avoid the pitfalls of self-referential circularity unmoored from external reality, and pave the path for global optimization of system behavior in the face of non-deterministic fluctuations caused by external forces.
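As a thought experiment, the four abstractions named above — addressing, alerting, mediation, and supervision — might be sketched as follows. The component names, the resource model, and every interface here are my illustrative assumptions, not a design from the source:

```python
from typing import Dict, List

class Supervisor:
    """A global overlay sketching addressing, alerting, mediation, and supervision."""
    def __init__(self, capacity: int):
        self.capacity = capacity            # total resource units available system-wide
        self.registry: Dict[str, int] = {}  # addressing: component name -> granted units
        self.alerts: List[str] = []         # alerting: record of notable events

    def request(self, component: str, units: int) -> int:
        """Mediation: grant what remaining capacity allows; raise an alert on contention."""
        granted = min(units, self.capacity)
        if granted < units:
            self.alerts.append(f"contention: {component} asked {units}, got {granted}")
        self.capacity -= granted
        self.registry[component] = self.registry.get(component, 0) + granted
        return granted

    def supervise(self) -> Dict[str, int]:
        """Supervision: a global view of which component holds which resources."""
        return dict(self.registry)
```

In this sketch, mediation degrades a request gracefully under contention, alerting records the event for downstream reasoning, and supervision exposes the global view from which self-regulation rules could act.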

What do We Learn from the Science of Information Processing Structures?

Computing, communication, cognition, consciousness and culture are the essential ingredients of information processing structures and the process of evolution has generated myriad structures with varying degrees of sentience, resilience and intelligence. All forms of physical structures deal with functions, their composition from groups to semigroups, and from trajectories to processes through various interactions and their reaction to fluctuations which cause disturbances. Physical and chemical systems evolve through matter and energy transformations subject to laws of physics.  Information processing and communication are subject to laws of energy and entropy of the structure interacting with its external environment and forces. Biological systems, in addition, have developed cognitive capabilities through their cognitive apparatuses – the gene and the neuron.  The evolution of the genome leveraging the genes and the neuronal structures has given rise to autopoiesis, consciousness and culture. In this section, we will analyze the new mathematics of structural machines and apply it to understand the fundamental nature of information processing structures and their properties.

Function, Structure and Fluctuations

The physical universe, as we know it, is made up of structures that deal with matter and energy. As Mark Burgin [9] points out, energy and matter are different but intrinsically connected with one another. Matter cannot exist without energy (at least, zero energy), while energy is always contained in physical bodies. Taking matter as the name for all physical substances, as opposed to energy and the vacuum, we have the relation represented by the diagram in Figure 3.

The similarity of matter and knowledge means that they may be considered in a static form, while energy and information exist only in (actual or potential) dynamics. In addition, the similarity of energy and information signifies that both these entities cause change in systems: energy does this in physical systems, while information does this in structural systems such as knowledge and data. In other words, the diagram states that information is related to knowledge and data as energy is related to matter. More exactly, this relation holds for cognitive information that changes such infological systems as a thesaurus or a system of knowledge.

Figure 3: Matter-Energy and Information-Knowledge/Data Relationships

Information processing structures in the physical world are formed through the physical and chemical processes available in nature using matter, energy and their transformation rules. Atoms are composed into molecules, and molecules are composed into compounds. Component functions, composed structures and fluctuations in the interactions among the components themselves and with their external environment determine their macroscopic properties. For example, as kinetic energy increases (because of heat from an external source, for instance), the structure of a set of water molecules is rearranged, going from solid to liquid or from liquid to gas through physical processes. The same holds true for chemical structures when different physical structures interact with each other and form a composed structure using matter and energy transformations. The structure, the strength of the interactions and the nature of the fluctuations determine their evolution. Such structures can be represented by state vectors in phase space, and their dynamics are determined by well-defined mathematical structures that deal with matter, energy and their transformation rules as defined by the physical processes. Mathematical representations of these structures stem from the rotational and translational invariance properties in the complex space-time manifold.

A complex adaptive system (CAS) is a structure consisting of a network of individual entities interacting with each other and with their environment. Each entity exhibits a specific behavior (function) and may be composed of subnetworks of entities (structure) providing a composed behavior. It takes energy to process information, sustain the structure and exhibit the intended behavior. Various systems adopt different strategies to use matter and energy to sustain order in the face of fluctuations caused by internal or external forces. The second law of thermodynamics comes into play because matter and energy are involved; it states that "there is no natural process the only result of which is to cool a heat reservoir and do external work". In more understandable terms, this law observes that the usable energy in the universe is becoming less and less; ultimately there would be no available energy left. Stemming from this fact, we find that the most probable state for any natural system is one of disorder, and all natural systems degenerate when left to themselves. However, an adaptive system refuses to be "left to itself": it develops self-organizing patterns, seeking minimum-entropy states, to reconfigure its structure and compensate for deviations of behavior from stable equilibrium due to fluctuations. Thus functions, structures, interactions, fluctuations and reconfiguration processes play key roles in the evolution of a CAS.
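The compensating behavior described above can be illustrated with a toy feedback loop. This is only a sketch: the setpoint, gain and noise model are invented for the example and stand in for the far richer reconfiguration processes of a real CAS.

```python
import random

def adaptive_system(setpoint=10.0, steps=50, gain=0.5, seed=42):
    """A minimal sketch of CAS-style self-regulation: random fluctuations
    perturb the state, and a compensating reconfiguration step pulls it
    back toward stable equilibrium (the setpoint)."""
    random.seed(seed)
    state = setpoint
    for _ in range(steps):
        state += random.uniform(-1, 1)        # external fluctuation
        state += gain * (setpoint - state)    # compensating reconfiguration
    return state

final = adaptive_system()
print(abs(final - 10.0) < 2.0)    # the state stays near equilibrium despite noise
```

Because each correction halves the current deviation while the noise adds at most 1, the deviation stays bounded; without the correction term the state would simply drift, which is the system "left to itself."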

Living beings, on the other hand, exhibit sentience along with some form of intelligence and resilience. Their cognitive apparatuses are built using information processing structures that exploit physical, chemical and biological processes in the framework of matter and energy. These systems transform their physical and kinetic states to establish a dynamic equilibrium between themselves and their environment using the principle of entropy minimization. Biological systems have discovered, through evolutionary learning, a way to encode processes and execute them in the form of genes, neurons, the nervous system, the body and the brain. The genome, which is the complete set of genes or genetic material present in a cell or an organism, defines the blueprint that includes instructions on how to organize resources to create the functional components, organize the structure, and evolve the structure while interacting with the environment using the encoded cognitive processes. Placed in the right environment, the cell containing the genome executes the processes that manage and maintain the self-organizing and self-managing structure, adapting to fluctuations.

Any theory of biological processes must explain the autopoietic behavior and the structures that are designed, built, monitored and managed in real time to establish and maintain stability in the face of fluctuations in the interactions both within the system and with its environment.

In the next section, we will examine a new theory of autopoietic structures, how it fares in explaining autopoietic processes, and how it also helps design a new class of autopoietic automata going beyond classical computer science.

Named Sets as Elements of Information Processing Structures

Structural relationships exist between data, which are entities observed in the physical world or conceived in the mental world. These structures define our knowledge of them in terms of properties such as attributes, relationships and the dynamics of their interaction. Information processing structures organize the evolution of knowledge structures through an overlay of cognitive knowledge structures, which model, monitor and manage the evolution of the information processing system. The most fundamental structure is called a fundamental triad or a named set [10]. It has the visual representation shown in Figure 4:

Figure 4: Visual representation of a fundamental triad (named set)
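In code, a named set can be sketched as a triple: a support, a set of names, and a connection linking the two. The following Python fragment is a minimal illustration of the triad, not Burgin's formal construction; the clinical labels are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NamedSet:
    """A fundamental triad (X, f, I): a support X of entities, a set of
    names I, and a connection f linking elements of X to names in I."""
    support: frozenset     # X: the entities being named
    names: frozenset       # I: the names (labels)
    connection: tuple      # f: pairs (x, i) connecting support to names

    def names_of(self, x):
        """Return all names connected to the element x."""
        return {i for (xx, i) in self.connection if xx == x}

# A toy triad: observed readings connected to invented clinical labels.
triad = NamedSet(
    support=frozenset({95, 7.2}),
    names=frozenset({"glucose_mg_dl", "insulin_uU_ml"}),
    connection=((95, "glucose_mg_dl"), (7.2, "insulin_uU_ml")),
)
print(triad.names_of(95))   # -> {'glucose_mg_dl'}
```

The point of the sketch is that the connection, not the sets alone, carries the structure: changing f changes what the triad names even when X and I are unchanged.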

At the lowest level, data elements are generally represented by a key-value pair. They are domain dependent and represent some knowledge about the domain. For example, the glucose level in a person's body has a value; similarly, the insulin level of the person has a value. At the next level, some data elements have relationships to other elements, and some elements change depending on changes in other elements. For example, the risk of diabetes of a person depends on the levels of sugar and insulin of that person. This information provides a model that represents the knowledge structure, and changes in the knowledge structure provide new information.

Figure 5 shows the knowledge structure related to the risk of diabetes, and the levels of sugar and insulin.

Figure 5: Domain-specific micro knowledge structure schema with entities, their relationships and all possible behaviors when events change their values

The knowledge structure bears similarity to the cortical column [7], with observed features and their impact on objects, together with their locations, relationships and behaviors. Micro-knowledge structures as named sets, and their composition into macro-knowledge structures, provide a model to represent our knowledge about the world.
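A minimal, hypothetical sketch of such a micro-knowledge structure in Python: entities hold values, links wire dependent entities together, and a change event fires the dependent behavior. The class names and thresholds are invented for illustration, not clinical values.

```python
class KnowledgeNode:
    """An entity (name, value) plus wiring to dependent nodes."""
    def __init__(self, name, value=None):
        self.name, self.value = name, value
        self.dependents = []                 # nodes whose behavior a change triggers

    def set(self, value):
        self.value = value
        for node in self.dependents:         # wired nodes fire on the change event
            node.behavior()

class RiskNode(KnowledgeNode):
    """Derived entity: recomputes the diabetes-risk value from its inputs."""
    def __init__(self, sugar, insulin):
        super().__init__("diabetes_risk")
        self.sugar, self.insulin = sugar, insulin
        sugar.dependents.append(self)
        insulin.dependents.append(self)

    def behavior(self):
        if self.sugar.value is None or self.insulin.value is None:
            return                           # wait until both inputs are known
        # Illustrative thresholds only -- not clinical values.
        high = self.sugar.value > 125 and self.insulin.value < 3
        self.value = "elevated" if high else "normal"

sugar, insulin = KnowledgeNode("sugar"), KnowledgeNode("insulin")
risk = RiskNode(sugar, insulin)
insulin.set(2.0)                             # event: new insulin reading
sugar.set(140)                               # event propagates to the risk node
print(risk.value)                            # -> elevated
```

Each node is a named set (a value connected to a name), and the wiring realizes the schema of Figure 5: when an event changes a value, the behavior of the related entity evolves.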

The Role of Knowledge Structures in Information Processing

A knowledge structure [11 – 13] is composed of related fundamental triads (named sets), and any state change causes behavioral evolution based on the connections. The long and short of the theory of knowledge is that the attributes of objects in the form of data, together with the intrinsic and ascribed knowledge of these objects in the form of algorithms and processes, make up the foundational blocks for information processing. Information processing structures utilize knowledge in the form of algorithms and processes that transform one state of an object (determined by a set of data) into another with a specific intent. Information structures and their evolution using knowledge and data determine the flow of information. Living organisms have found a way not only to elaborate the knowledge of physical objects, but also to create information processing structures that assist them in executing state changes. The representation of knowledge structures and the operations on their schema are detailed in two papers [14, 15] describing their relationship to the design of autopoietic machines. Two kinds of knowledge structures enable autopoietic behaviors:

  • Micro-knowledge structures contain schema for modeling and executing the lowest-level functions of the components of the autopoietic system. In the case of the brain and the body, these are equivalent to the cortical columns that process myriad data from different sources (the senses) and create the reference frames. In the case of digital information processing systems, these are the micro-services that provide the computations required to fulfill the functional requirements of the system and are designed in the form of algorithms to be executed in a computing machine (a general-purpose computer with the required CPU and memory). In both biological and digital autopoietic systems, these micro-knowledge structures contain functions that discover the required resources and assemble the right structures necessary to execute the microservices defined by the blueprint.
  • Macro-knowledge structures contain schema for modeling the knowledge of the self and its interactions with the world. In the case of the brain, the schema contain the entities, relationships and behaviors depicting the body and its interactions with the world. In the case of digital information processing systems, the schema contain the entities, relationships and behaviors of both the functional-requirement execution knowledge and the non-functional-requirement knowledge.
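As a rough illustration of the distinction, the two kinds of schema might be sketched as follows. Every component name, resource figure and policy below is hypothetical, invented for the example rather than taken from the cited papers.

```python
# Hypothetical schema fragments for the two kinds of knowledge structures;
# all names, resource figures and policies below are invented for illustration.
micro_schema = {
    "kind": "micro",
    "component": "glucose_monitor",            # lowest-level functional unit
    "function": "read_sensor_and_update",      # the behavior it models and executes
    "resources": {"cpu": "0.1 core", "memory": "64Mi"},  # what it must discover and assemble
}

macro_schema = {
    "kind": "macro",                           # models the self and its interactions
    "entities": ["glucose_monitor", "insulin_monitor", "risk_assessor"],
    "relationships": [
        ("glucose_monitor", "feeds", "risk_assessor"),
        ("insulin_monitor", "feeds", "risk_assessor"),
    ],
    "non_functional": {"availability": "restart on failure",
                       "scaling": "replicate under load"},
}

# Each entity the macro schema names should resolve to a micro schema
# that knows how to assemble the resources for and execute its own function.
print(micro_schema["component"] in macro_schema["entities"])   # -> True
```

The separation mirrors the text: micro schemas answer "how does this component run," while the macro schema answers "what is the self, and what must hold across components."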

The Role of Cognizing Agents or Generalized “Oracles” in Information Processing

Alan Turing, in his thesis, introduced the "oracle" [18] as a device that supplies a Turing machine with the values of some function (on the natural numbers or on words in some alphabet) that is not recursively, i.e., Turing-machine, computable. Burgin and Mikkilineni [19] showed that the Turing oracle could be generalized to allow corrections that manage deviations of a computation from its intent, viewed from an external vantage point, and could be exploited to create a monitoring and control system that infuses cognition into computing. An implementation of the Turing oracle was utilized in the distributed intelligent managed element (DIME) network architecture to demonstrate self-managing distributed computing processes [20]. The video at https://youtu.be/tu7EpD_bbyk demonstrates the application of the oracle concept to create a multi-cloud orchestrator that statefully manages, migrates and scales workloads across multiple clouds.

While this approach provided migration of workloads from one cloud to another without disturbing transactions in progress, the Turing-oracle approach used was intrusive in the sense that the computation has to check whether there is any external oracle communication before it proceeds. In addition, the computation itself has no way to communicate with the oracle to influence globally distributed computations with its local knowledge. In essence, the computation has no visibility into the intent beyond executing the algorithms specified in its program. In the next section, we see that generalized oracles combined with knowledge structures and structural machines provide a more powerful information processing structure with autopoiesis, one that allows the specification of an intent and its life-cycle management in the face of fluctuations that cause deviations.
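The intrusive pattern can be sketched as follows: the computation must poll an external control channel before every step, and the oracle can override it, but the computation cannot talk back. This is an illustrative reconstruction of the pattern, not the DIME implementation itself.

```python
import queue

def managed_computation(steps, control):
    """Run a computation that must poll the external 'oracle' channel
    before every step -- the intrusive pattern described above."""
    state = 0
    for step in steps:
        try:
            command = control.get_nowait()   # check for an oracle message
        except queue.Empty:
            command = None
        if command == "pause":               # the oracle overrides local execution
            break
        state = step(state)                  # only now may the step proceed
    return state

control = queue.Queue()
result = managed_computation([lambda s: s + 1, lambda s: s * 2], control)
print(result)                                # -> 2, no oracle message arrived
```

Note the one-way coupling: the channel carries commands inward only, so the computation can be steered but cannot share its local knowledge with the oracle, which is exactly the limitation the generalized approach addresses.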

Structural Machines, Triadic Automata and Information Processing Structures

Triadic automata and autopoietic machines introduced by Burgin [14, 15] allow us to design a new class of distributed information processing structures that use infware containing hierarchical intelligence to model, manage and monitor distributed information processing hardware and software components as an active graph representing a network of networked processes. An autopoietic system implemented using triadic structural machines, i.e., structural machines working as triadic automata, is capable "of regenerating, reproducing and maintaining itself by production, transformation and destruction of its components and the networks of processes downstream contained in them." Autopoietic machines, which operate on schema containing knowledge structures, allow us to deploy and manage non-stop, highly reliable computing structures at scale, independent of whose hardware and software are used. Figure 6 shows the structural machine operating on a knowledge structure in the form of an active graph, in contrast to the data structure of a Turing machine implementation, which is a linear sequence of symbols.

Figure 6: The schema in a triadic automaton represents a knowledge structure containing various objects, inter-object and intra-object relationships and behaviors that result when an event occurs changing the objects or their relationships

It is important to emphasize the differences between data, data structures and knowledge structures. Data are mental or physical "observables" represented as symbols. A data structure defines the relationships between data items. Knowledge structures, on the other hand, include data structures abstracted into various systems, together with the inter-object and intra-object relationships and the behaviors that result when an event occurs changing the objects or their relationships. The inclusion of behaviors in knowledge structures, and the operations on the knowledge structure schema described by Prof. Burgin, provide composability and the ability of the wired networks to fire together to represent the state of knowledge and its evolution.
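To make the contrast with a linear tape concrete, here is a toy sketch of a rule applied to an active graph. The rule and node states are invented and far simpler than the operational schemas of [14, 15]; the point is only that the machine rewrites a graph of objects, relationships and behaviors rather than a sequence of symbols.

```python
# A knowledge structure as an active graph: nodes carry state, edges carry
# named relationships, and a rule rewrites the graph instead of a linear tape.
graph = {
    "nodes": {"A": {"level": 1}, "B": {"level": 1}, "C": {"level": 0}},
    "edges": [("A", "activates", "B"), ("B", "activates", "C")],
}

def apply_rule(graph):
    """One structural-machine-style step (a simplified sketch): every
    'activates' edge whose source node is active raises its target's level."""
    updates = {}
    for src, rel, dst in graph["edges"]:
        if rel == "activates" and graph["nodes"][src]["level"] > 0:
            updates[dst] = graph["nodes"][dst]["level"] + 1
    for node, level in updates.items():      # apply all changes at once
        graph["nodes"][node]["level"] = level

apply_rule(graph)
print(graph["nodes"])  # -> {'A': {'level': 1}, 'B': {'level': 2}, 'C': {'level': 1}}
```

A Turing machine would have to serialize this graph onto a tape and walk it cell by cell; here the rule acts on all matching relationships in one structural step.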

How Do We Use the Learnings from Neuroscience and the Mathematics of Information Processing Structures to Design a New Class of Digital Autopoietic Machines?

We learn from neuroscience that "neurons that fire together wire together." We propose the corollary – the nodes that are wired together fire together – which allows us to explain the knowledge structures. A knowledge structure is a network of networks composed of subnetworks, each with nodes processing information and communicating with other nodes through links. The nodes contain information processing structures that model the physical and mental models of information in the form of entities/objects, their relationships, and the behaviors that influence each other when changes occur.

Some nodes are similar to the cortical columns in the old brain, with specialized services that sense, process and manage information in the local domain to create knowledge about the self and its relationship to the external world. The subnetworks formed with links to other nodes communicate the changes that impact the wired subnetwork. Subnetworks thus provide a combined knowledge structure representing higher-level composed behaviors.

Other nodes, with regulatory knowledge structures, are like the modules in the neocortex that provide global sharing of knowledge, a deep memory of the system with its history, and deep reasoning modules offering predictive analytics and risk-management behaviors for global optimization of the system's evolution.
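The "wired together, fire together" propagation can be sketched as follows; the subnetwork topology and the regulator role are hypothetical illustrations of the idea, not a model of any particular brain circuit.

```python
class Node:
    """A node in the knowledge network: local state plus wired neighbours."""
    def __init__(self, name):
        self.name, self.links, self.fired = name, [], False

    def wire(self, other):
        self.links.append(other)

    def fire(self, seen=None):
        """Nodes that are wired together fire together: an event at one
        node propagates along its links through the whole subnetwork."""
        seen = seen if seen is not None else set()
        if self.name in seen:
            return seen                      # stop cycles in the wiring
        seen.add(self.name)
        self.fired = True
        for node in self.links:
            node.fire(seen)
        return seen

# Two sensing nodes joined by a regulatory node that shares knowledge globally.
sensors = [Node("s1"), Node("s2")]
regulator = Node("regulator")
for s in sensors:
    s.wire(regulator)
regulator.wire(sensors[1])                   # the regulator feeds knowledge back

fired = sensors[0].fire()
print(sorted(fired))                         # -> ['regulator', 's1', 's2']
```

A local change at one sensing node reaches the regulator and, through it, the rest of the wired subnetwork, which is the mechanism by which local knowledge becomes globally shared.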

Can we use this knowledge to:

  • Model and explain how cognitive processes in the living organisms work, and
  • Design and implement a new class of digital automata that include models of themselves and their environment and exhibit autopoietic behavior?

Epilogue

Figure 7: The Anatomy of a knowledge structure

On one hand, we are beginning to understand how the genome encodes the knowledge of how to use physical and chemical processes in the physical world to create structures that process information in real time to build a "self" with a unique identity, model both the "self" and its interactions with the physical world outside, and monitor and manage its evolution. The autopoietic behavior of biological systems arises from the triadic structures consisting of the knowledge depicted in Figure 7. The "hardware" [21] consists of the physical-world knowledge structures that depict the entities (and objects), their relationships and their behaviors in the physical world. The "software" consists of the control knowledge structures that depict the entities (and objects), their relationships and their behaviors to sense, model and manage the physical world through 4E cognitive processes. The "infware" consists of the control knowledge that depicts the entities (and objects), their relationships and the behaviors that constitute an autopoietic entity (the "self") that has an intent and the knowledge to achieve that intent by using the software to sense and manage the physical world.

Figure 8 shows how the knowledge network is organized. The nodes that are wired together fire together to execute behavioral changes that optimize risks and rewards based on both local and global constraints. The goal of the system is defined in the digital genome, and the predictive behaviors dictate methods for global optimization.

Figure 8: The knowledge network

On the other hand, we also have a new theory of information processing structures [15] which allows us not only to provide a theoretical understanding of how the autopoietic behavior of biological systems arises, but also to design and implement a new class of digital automata that are autopoietic. This post and the conference (www.tfpis.com) are aimed at starting a discussion that may help us move from classical computer science and its limitations to the new science of information processing structures, which allows us not only to understand how the brain learns and uses knowledge to maintain and manage the stability and survival of the "self", but also to design and implement a new class of machines that learn and use knowledge to provide an extension of 4E cognition with a collective consciousness.

References

[ 1] Dehaene, Stanislas. (2011). "The Number Sense: How the Mind Creates Mathematics" Revised and Updated. Oxford University Press. Kindle Edition. P. 15.

[ 2] Wigner E.: (1960) “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” in Communications in Pure and Applied Mathematics, vol. 13, No. I (February 1960). New York: John Wiley & Sons, Inc. wigner.pdf (ed.ac.uk)

[ 3] Burgin, M.; Mikkilineni, R. Cloud computing based on agent technology, super-recursive algorithms, and DNA. Int. J. Grid Util. Comput. 2018, 9, 193–204.

[ 4] Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: Basic evidence and a workspace framework. Cognition, 79, 1 – 37.

[ 5] Dehaene, Stanislas. The Number Sense (p. 93). Oxford University Press. Kindle Edition. p 93.

[ 6] Dehaene, Stanislas. (2010). “ Reading in the Brain: The New Science of How we Read” Revised and Updated. Penguin Books, New York. P. 10.

[ 7] Jeff Hawkins, (2021). “A Thousand Brains: A New Theory of Intelligence.” Basic Books, New York.

[ 8] Dehaene, Stanislas. (2014). “ Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts” Penguin Group, New York.

[ 9] M. Burgin, (2003) Information: Problems, Paradoxes, and Solutions. tripleC 1(1): 53-70. ISSN 1726-670X DOI: https://doi.org/10.31269/triplec.v1i1.5

[ 10] Burgin, M.S. (2011) "Theory of Named Sets" Nova Science Publishers, Inc. New York.

[ 11] Mikkilineni R, Burgin M. Structural Machines as Unconventional Knowledge Processors. Proceedings. 2020; 47(1):26. https://doi.org/10.3390/proceedings2020047026

[ 12] Burgin, M. (2010) Theory of Information. Fundamentality, Diversity and Unification. World Scientific Publishing, Singapore. https://www.worldscientific.com/doi/pdf/10.1142/7048

[ 13] Burgin, Mark. (2016) ” Theory of Knowledge.” World Scientific Publishing, Singapore. https://www.worldscientific.com/doi/pdf/10.1142/8893

[ 14] Burgin, M., Mikkilineni, R. and Phalke, V. Autopoietic Computing Systems and Triadic Automata: The Theory and Practice, Advances in Computer and Communications, v. 1, No. 1, 2020, pp. 16-35

[ 15] Burgin, M. and Mikkilineni, R. From Data Processing to Knowledge Processing: Working with Operational Schemas by Autopoietic Machines, Big Data Cogn. Comput. 2021, v. 5, 13 (https://doi.org/10.3390/bdcc5010013 )

[ 16] Dehaene, Stanislas. (2020). "How We Learn: Why Brains Learn Better Than Any Machine … for Now" Penguin Random House. ISBN 9780525559887

[ 17] Kleiner, J. Mathematical Models of Consciousness. Entropy 2020, 22, 609. https://doi.org/10.3390/e22060609

[ 18] Turing, A. M. Systems of logic defined by ordinals. Proc. Lond. Math. Soc., Ser. 2, 45, pp. 161-228, 1939.

[ 19] Burgin M. and Mikkilineni R. ‘Semantic Network Organization based on Distributed Intelligent Managed Elements’, In Proceeding of the 6th International Conference on Advances in Future Internet, Lisbon, Portugal, pp. 16-20, 2014.

[ 20] R. Mikkilineni, G. Morana, and M. Burgin. “Oracles in Software Networks: A New Scientific and Technological Approach to Designing Self-Managing Distributed Computing Processes,” In Proceedings of the 2015 European Conference on Software Architecture Workshops (ECSAW ’15). ACM, New York, NY, USA, Article 11, 8 pages, 2015.

[ 21] Burgin, M. Super-Recursive Algorithms; Springer: New York, NY, USA; Heidelberg/Berlin, Germany, 2005.

[1] Classical computer science, based on John von Neumann's stored-program implementation of the Turing machine, has given us the general-purpose computer along with both symbolic and neural-network-based information processing structures. A key limitation of the general-purpose computer is described in the book by Cockshott et al. (P. Cockshott, L. M. MacKenzie and G. Michaelson, "Computation and Its Limits," Oxford University Press, Oxford, 2012.): "The key property of general-purpose computer is that they are general purpose. We can use them to deterministically model any physical system, of which they are not themselves a part, to an arbitrary degree of accuracy. Their logical limits arise when we try to get them to model a part of the world that includes themselves."
