What is a Computer and Is the Brain a Computer?

Abstract

When someone asks the question "Is the brain a computer?", the answer depends on the knowledge the person or the system (for example, a Large Language Model (LLM)) possesses and whether that knowledge is adequate to answer the question. Whether the response is accepted or rejected also depends on the knowledge the receiver possesses. So, it is important to understand the nature of knowledge, how it is acquired (the learning process), and how it is used. The General Theory of Information provides a framework for understanding and modeling the representation and use of knowledge in both biological and artificial systems.

Introduction

When anyone answers any question, the answer depends on the knowledge that person or system has access to at the moment of answering. Knowledge refers to useful information gained through various means, including learning and experience, and belongs to the realm of mental structures that biological systems have developed through evolution and natural selection. "To know" involves a subject. The General Theory of Information (GTI) relates the material structures in the physical world to the mental structures that biological systems use, through their cognitive apparatuses, to model their observations and interact with their external environment. Information provides the bridge between a biological system's understanding of the material world, consisting of matter and energy, and its mental world, which utilizes information, converts it into knowledge, uses it to make sense of what is being observed, and acts while the observation is still in progress. Later, we will also discuss how information forms a bridge between the mental structures and the digital structures in computing machines representing knowledge.

The material world consists of structures that are formed and evolve through the laws of transformation of matter and energy, based on the various interactions among their components. The state of the system and its evolution contain information in the form of a phase space, which provides a comprehensive framework for representing the states of matter and energy. A phase-space trajectory represents the set of states reached from one particular initial condition, and information is the difference between the states of the system.

For example, a water molecule is formed through interactions of matter and energy involving hydrogen and oxygen atoms and can exist in various forms, including ice, liquid water, a snowflake, or steam. Information is the difference between these various states, and knowledge is the observer's mental representation of these states in the form of structures. GTI provides a way to represent and process the observer's information into knowledge in the form of structures. These structures form the basis for the interaction between the physical and mental worlds, tying together nature, observers such as humans, and a society of observers interacting with each other as their common existential and cognitive basis.

According to GTI, matter and energy are the physical entities that can represent and process information and knowledge. Information and knowledge, while not physical entities themselves, can be represented physically and can influence the state and behavior of matter and energy.

In essence, as the material world evolves, we receive information through our senses, convert it into knowledge, use it to draw conclusions, and share it by explaining and teaching it to someone else. Learning is the process by which we receive information and process it to either create new knowledge we do not already possess or update current knowledge, connecting the new information to what we already know or rejecting it on the basis of existing knowledge.

For biological systems, the use of knowledge starts at the moment of conception. The genome contains the knowledge of life processes: how to build a unique entity, starting from a single cell, that develops into a full person who continues to increase knowledge through learning and is able to answer questions such as "Is the brain a computer?"

As Yanai and Lercher describe in their book The Society of Genes (p. 11), the development of a single fertilized egg cell into a full human being is achieved without a construction manager or architect: "The responsibility for the necessary close coordination is shared among the cells as they come into being. It is as though each brick, wire, and pipe in a building knows the entire structure and consults with the neighboring bricks to decide where to place itself."

The single cell replicates into trillions of cells, each executing a process with a purpose, using metabolism and sharing information with other cells to execute a hierarchy of processes that manage and maintain life as defined in the genome. These processes execute autopoietic and cognitive behaviors. Autopoietic behaviors regenerate, reproduce, and maintain the system by itself through the production, transformation, and destruction of its components and of the networks of processes in those components. Cognitive behaviors sense, predict, and regulate the stability of the system in the face of both deterministic and non-deterministic fluctuations in the interactions among the internal components or in their interactions with the environment.

The long and short of the discussion is that the knowledge each individual possesses is unique, based on the knowledge inherited and the unique experiences accumulated throughout that individual's lifetime, and it is the basis for the answers the individual provides. Therefore, it is important to understand the nature of knowledge and the learning process that allows it to be updated based on information received through various means. We as humans update our knowledge continuously through both inherited and learned processes and use it when we answer a question. The answer depends on the state of our knowledge at that instant, and further interactions with the external world influence the future state of our knowledge. This is an important observation, because it tells us that when two persons interact and exchange information, the evolution of the interaction depends very much on how wide the knowledge gap between the participants is.

GTI provides a comprehensive framework for understanding and modeling the representation and use of knowledge in both biological and artificial systems. The ontological thesis states that the autopoietic and cognitive behavior of artificial systems must function on three levels of information processing systems and be based on triadic automata. The axiological thesis states that efficient autopoietic and cognitive behavior has to employ structural machines.

GTI is used to define a schema and associated operations that model how knowledge is represented using a scientific object called a structure. A genome, in the language of GTI, encapsulates "knowledge structures" coded in the form of DNA and executed using "structural machines" in the form of genes and neurons, which use physical and chemical processes (dealing with the conversion of matter and energy). The information accumulated through biological evolution is encoded into knowledge to create the genome, which contains the knowledge network defining the function, structure, and autopoietic and cognitive processes needed to build and evolve the system while managing both deterministic and non-deterministic fluctuations in the interactions among the internal components or in their interactions with the environment. The cells are process execution engines in this model and are orchestrated by the genome acting as a structural machine using autopoietic and cognizing oracles.

The same schema is used to define a digital genome specifying the operational knowledge of algorithms that execute the software life processes with specific purposes, using replication and metabolism. The result is a digital software system with a super-symbolic computing structure exhibiting the autopoietic and cognitive behaviors that biological systems also exhibit.

In summary, knowledge is represented as a network of autonomous agents executing a hierarchy of processes, and each process is endowed with autopoietic and cognitive properties. Each agent pursues well-defined goals and collaborates as an element in a society with shared knowledge to accomplish both local and systemic goals. Associative memory and an event-driven history of interactions are part of this knowledge network.
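To make this picture a little more tangible, here is a minimal, hypothetical sketch in Python of how such a network of agents with local goals and shared knowledge might be represented. The class and attribute names are my own and are not taken from GTI; this is an illustration, not an implementation of a structural machine.

```python
# Minimal, hypothetical sketch (not taken from GTI itself) of a knowledge
# network: autonomous agents with local goals that share knowledge with
# collaborating agents to pursue a systemic goal.

class Agent:
    def __init__(self, name, goal):
        self.name = name
        self.goal = goal            # local goal the agent pursues
        self.knowledge = {}         # the agent's own knowledge
        self.neighbors = []         # other agents it collaborates with

    def share(self, key, value):
        """Record a piece of knowledge and propagate it to collaborators."""
        self.knowledge[key] = value
        for peer in self.neighbors:
            peer.knowledge.setdefault(key, value)

# Two agents collaborating as elements of a larger system.
sensor = Agent("sensor", goal="detect deviation")
regulator = Agent("regulator", goal="restore stability")
sensor.neighbors.append(regulator)
sensor.share("observed_state", "deviation detected")
print(regulator.knowledge)   # shared knowledge reaches the collaborator
```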

Evolution of Knowledge: From the Unknown Unknown to the Known Known

Figure 1 shows the distribution of knowledge between individuals, human society, where shared knowledge exists, and the vast universe of the unknown. The material world exists whether it is observed or not. The knowledge about the universe that no one possesses is represented as the unknown unknown. On the other side, each individual is born with some knowledge about the self and its relationship with the external world (the known known). During the individual's lifetime, knowledge expands by discovering known unknowns and converting them into known knowns through the process of learning, which consists of discovery, reflection, application, and sharing of knowledge. In a society of individuals sharing knowledge, the pool of knowledge grows and exceeds the knowledge of any individual as the number of participants grows. This leads to some knowledge being available in the pool but not known to a particular individual (the unknown known). It inevitably leads to a knowledge gap between two individuals engaged in discussion. For example, when asked the question "What is a computer?", the answer varies depending on who answers it.

What is a Computer?

Before the advent of electronic computers, the term "computer" referred to a person who performed calculations or computations. The job was typically tedious and involved long hours of manual number crunching. Alan Turing's observation that we may "compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions" changed the way we view computers. The result of this observation is symbolic computing: John von Neumann's stored-program implementation of the Turing machine. It used a sequence of symbols (called a program) that operates on another sequence of symbols (called data structures) to mimic how humans computed numbers (a minimal sketch of this idea follows the list below). It is possible to divide the history of computing into three periods (Luck et al., 2005):
  • Computation as calculation, or operations undertaken on numbers.
  • Computation as information transformation, or operations on multimedia, such as text, audio or video data.
  • Interactive computation, or computation as interaction.
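To make the stored-program idea concrete, here is a minimal sketch of symbolic computing: one sequence of symbols (the program) operating on another sequence of symbols (the data). The instruction set and the example program are hypothetical, chosen only for illustration.

```python
# Minimal sketch of a stored-program (von Neumann style) machine:
# a program, itself a sequence of symbols, operates on a data store
# that is also a sequence of symbols. Hypothetical illustration only.

def run(program, data):
    """Interpret a list of (opcode, operand) pairs against a data list."""
    pc = 0  # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "INC":          # increment the cell at index arg
            data[arg] += 1
        elif op == "ADD":        # data[0] += data[arg]
            data[0] += data[arg]
        elif op == "JMPZ":       # jump to instruction arg if data[0] == 0
            if data[0] == 0:
                pc = arg
                continue
        elif op == "HALT":
            break
        pc += 1
    return data

# Program and data live in the same kind of store: sequences of symbols.
program = [("INC", 1), ("ADD", 1), ("HALT", None)]
print(run(program, [0, 41]))   # -> [42, 42]
```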

As we know, today almost everything in computer technology (basic elements, data structures, programming languages, etc.) changes very fast, but the von Neumann architecture still persists as the prevalent architecture for computers. For computer scientists, "Computer science is concerned with information in much the same sense that physics is concerned with energy… The computer scientist is interested in discovering the pragmatic means by which information can be transformed" (Denning et al.). "Computer science and engineering is the systematic study of algorithmic processes that describe and transform information: their theory, analysis, design, efficiency, implementation, and application. The fundamental question underlying all of computing is, 'What can be (efficiently) automated?'"

Computing took a major turn in 1943 with the observation of McCulloch and Pitts in their paper on how neurons might work, in which they modeled a simple neural network using electrical circuits. Their model, known as the McCulloch-Pitts neuron, is a fundamental building block of artificial neural networks. It accepts binary inputs and produces a binary output based on a certain threshold value. This model is mainly suited to simple classification problems.
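As an illustration, such a threshold unit can be sketched in a few lines. The function name and the threshold used below are my own choices, but the behavior (binary inputs, a fixed threshold, a binary output) follows the description above.

```python
# Minimal sketch of a McCulloch-Pitts neuron: binary inputs, a fixed
# threshold, and a binary output (illustrative only).

def mcculloch_pitts(inputs, threshold):
    """Fire (return 1) when the number of active binary inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With a threshold of 2, the unit behaves like a logical AND of two inputs.
print(mcculloch_pitts([1, 1], threshold=2))  # -> 1
print(mcculloch_pitts([1, 0], threshold=2))  # -> 0
```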

In 1957, Frank Rosenblatt introduced the perceptron, a type of artificial neuron, with the aim of developing a machine that could mimic the human brain's ability to recognize patterns and learn from experience. The perceptron takes inputs, aggregates them (a weighted sum), and returns 1 only if the aggregated sum exceeds some threshold, and 0 otherwise. It is a more general computational model than the McCulloch-Pitts neuron and can be used to implement linearly separable functions. Rosenblatt also proposed the "perceptron learning rule," a method for learning the weights of the inputs. This was a significant step towards the development of machine learning and artificial neural networks.
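The following is a minimal sketch of a perceptron together with the perceptron learning rule. The training data (logical OR, a linearly separable function), learning rate, and epoch count are assumptions made purely for illustration.

```python
# Minimal sketch of a perceptron and the perceptron learning rule
# (illustrative; the data and learning rate below are hypothetical).

def predict(weights, bias, x):
    """Return 1 if the weighted sum exceeds the threshold (here 0), else 0."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge weights toward misclassified examples."""
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)  # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# A linearly separable function (logical OR) that a perceptron can learn.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # -> [0, 1, 1, 1]
```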

Both approaches have contributed to the current state of the art.

Figure 2 depicts the current state of evolution of computing.

Figure 2: State of the Art Today: Symbolic and Sub-Symbolic Computing.

Symbolic computing is based on algorithms, which are well-defined tasks that execute well-defined processes. Machine learning algorithms use statistical methods such as regression, classification, and clustering to gain insights from data. Sub-symbolic computing differs in that its algorithms use deep learning, where training, testing, and validation are used to build neural networks that process text, audio, images, and video.
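As a small example of such a statistical method, here is a least-squares fit of a line to a handful of made-up observations; the data are hypothetical and serve only to show how a simple model summarizes a trend in data.

```python
# Minimal sketch of one statistical method: fitting a line y = a*x + b
# by least squares to "gain insight" from a small, made-up data set.

def fit_line(xs, ys):
    """Closed-form least-squares estimates for slope a and intercept b."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical observations: the fitted slope and intercept summarize the trend.
xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1]
print(fit_line(xs, ys))  # slope ~2.0, intercept ~0.0
```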

All three methods produce knowledge that can be used in several ways:

  1. Process automation,
  2. Intelligent decision making based on insights gained from data analytics, and
  3. The use of transformers to apply the knowledge from deep learning neural networks to mimic some of the cognitive tasks that the human brain performs, as shown in Figure 2 (a minimal sketch of the attention mechanism at the core of transformers follows this list).
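Here is a minimal, pure-Python sketch of the scaled dot-product attention operation at the heart of transformers. Real transformers add learned projection matrices, multiple attention heads, and many layers, so this is only an illustration of the basic operation, with made-up token vectors.

```python
# Minimal sketch of scaled dot-product attention (illustrative only).
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Each query attends over all keys; output is a weighted mix of values."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Three token vectors attending over themselves (self-attention).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(tokens, tokens, tokens))
```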

This advance has led some to speculate that deep learning algorithms and transformers can be trained to mimic all human cognitive functions and that machine intelligence will soon surpass human intelligence.

However, many proponents of this speculation either do not "know" (unknown knowns) the limits of computation based on the Turing machine model and the Church-Turing thesis, or ignore them (known knowns).

Others argue that the singularity comes from the emergent properties of complex adaptive systems (CAS) and that deep learning algorithms and evolutionary algorithms are complex adaptive systems. However, there is a world of difference between complex adaptive system behaviors and genome-based biological system behaviors. They cite examples such as birds, bees, ants, and groups of cars or people in a city or town. Emergence in CAS refers to the phenomenon where novel characteristics and behaviors arise from the interactions of individual components, or agents, within the system. For example, each ant in a colony follows simple rules, such as following a pheromone trail to food. However, the collective behavior of the colony (finding food, defending the nest, caring for larvae) can be quite complex and appears intelligent. The key to understanding this process lies in the concept of feedback. In a CAS, agents interact with each other and their environment, and these interactions produce feedback that influences future interactions. Over time, these feedback loops can lead to the development of complex patterns of behavior that are adaptive and resilient. The self-organization phenomena of CAS are understood in terms of function, structure, and fluctuations, their impact on the equilibrium states of the system, and transitions to different energy minima defining different stable states.
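A toy simulation can illustrate how such feedback produces emergent behavior. The path choices, reinforcement values, and evaporation rate below are hypothetical, but the positive-feedback loop is the mechanism described above.

```python
# Minimal sketch of emergence via feedback in a complex adaptive system:
# agents choose between two paths, reinforce the "pheromone" on the path
# they used, and later agents prefer the stronger trail. Parameters are hypothetical.
import random

random.seed(1)
pheromone = {"short": 1.0, "long": 1.0}   # initial trail strengths
reward = {"short": 1.0, "long": 0.5}      # shorter path reinforces more

for ant in range(200):
    total = pheromone["short"] + pheromone["long"]
    # Each ant follows a simple local rule: pick a path in proportion to trail strength.
    path = "short" if random.random() < pheromone["short"] / total else "long"
    pheromone[path] += reward[path]        # positive feedback on the chosen path
    for p in pheromone:                    # evaporation keeps the system adaptive
        pheromone[p] *= 0.99

# Typically ends with a much stronger trail on the short path,
# with no central controller telling any ant what to do.
print(pheromone)
```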

While CAS exhibit self-organization, the self-regulation exhibited by genomic systems using autopoietic and cognitive behaviors is quite a different matter.

This leads us to the question "Is the brain a computer?"

Is the Brain a Computer?

If we consider the current state-of-the-art computing described above, these computing systems lack autopoietic behavior, which requires the algorithms to have self-awareness and self-regulation knowledge in order to accomplish the system's goals when deviations from expected behaviors occur because of large fluctuations in component interactions. For example, if there is a large demand for resources or a large disruption in resource availability, the system halts unless external intervention occurs.

On the other hand, biological systems have built-in knowledge to maintain homeostasis using autopoietic behaviors. In addition, the brain creates associative memory and an event-driven interaction history of all the entities, their relationships, and their event-driven behaviors, and uses them to make sense of what it is observing and act appropriately while the observation is still in progress.
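As a rough illustration only (this is my own sketch, not a model of the brain), an associative memory coupled to an event-driven interaction history might be represented as a data structure like the following.

```python
# Minimal, hypothetical sketch of "associative memory plus an event-driven
# interaction history": entities are linked by relationships, and every
# observed event is appended to a history that can be queried while
# observation is still in progress.
from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        self.relations = defaultdict(set)   # entity -> set of (relation, entity)
        self.history = []                    # ordered event log

    def observe(self, subject, relation, obj):
        """Record an event and update the association network."""
        self.relations[subject].add((relation, obj))
        self.history.append((len(self.history), subject, relation, obj))

    def recall(self, entity):
        """Associative lookup: everything linked to an entity so far."""
        return self.relations[entity]

memory = AssociativeMemory()
memory.observe("cell", "signals", "neighbor")
memory.observe("neighbor", "responds_to", "cell")
print(memory.recall("cell"))   # associations available mid-observation
```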

Suffice it to say that current state-of-the-art computing machines fall short on both of these counts.

Can we improve them to include these behaviors?

I suggest reading these papers and the references cited in them, and making up your own mind.

https://www.preprints.org/manuscript/202404.1298/v1

https://www.mdpi.com/2409-9287/8/6/107

Figure 3 summarizes my view.

Figure 3: Structural Machine implementing the knowledge network using super-symbolic computing structures.
