
Figure 1: The observer and the observed according to the General Theory of Information articulated by Prof. Mark Burgin (www.tfpis.com)
Abstract
Making computing machines mimic living organisms has captured the imagination of many since the dawn of digital computers. According to Charles Darwin, the difference in mind between humans and the higher animals, great as it is, certainly is one of degree and not of kind. Human intelligence stems from the genome that is transmitted from the survivor to the successor. Machine intelligence stems from humans designing how knowledge can be represented as sequences of symbols (data structures) and how operations on those symbols (programs), themselves represented as sequences of symbols, can be used to model and interact with the world. The evolution of these data structures, operated on by programs using John von Neumann’s stored-program implementation of the Turing Machine, leads to process automation and to insights gained by programs that mimic the neural networks of the human brain. This blog explores the difference between the current state of the art of human and machine intelligence using the General Theory of Information.
Introduction
According to a dictionary definition from Oxford Languages, intelligence is the ability to acquire and apply knowledge and skills. Knowledge therefore plays a crucial role, and the mechanisms that acquire, process, and use knowledge to achieve specific goals are central to developing the required skills and exercising intelligence.
Human intelligence stems from the knowledge transferred by the survivors to their successors in the form of a genome. The genome contains all of the information needed for a human to develop, grow, and execute the life processes. It contains the operational knowledge to create, monitor, and manage 30+ trillion cells, each cell executing a process that uses replication to specialize and grow, and metabolism (the transformation of energy and matter) to derive the resources needed to execute the functions that constitute the life processes. The trillions of cells thus created behave like a community: individual cell roles are well-defined, relationships with other cells are established through shared knowledge, and the cells collaborate by exchanging messages governed by those relationships and behaviors.
“The development of the single fertilized egg cell into a full human being is achieved without a construction manager or architect. The responsibility for the necessary close coordination is shared among the cells as they come into being. It is as though each brick, wire, and pipe in a building knows the entire structure and consults with the neighboring bricks to decide where to place itself.”
The information is carried in the genome as operational knowledge, which brings up the question: “Knowledge and Information – What is the Difference?”
The general theory of information (GTI) gives a comprehensive answer to this question. See M. Burgin and R. Mikkilineni, “General Theory of Information Paves the Way to a Secure, Service-Oriented Internet Connecting People, Things, and Businesses,” 2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI), Kanazawa, Japan, 2022, pp. 144-149, doi: 10.1109/IIAIAAI55812.2022.00037, p. 146:
“While some researchers proclaim that information is a sort of data, and others maintain that information is a kind of knowledge, the scientific approach tells us that it is more adequate to treat information as an essence that has a dissimilar nature, because the other concepts represent various kinds of structures. Assuming that matter is the name for all substances and the vacuum, as opposed to energy, then relations between information and knowledge bring us to the Knowledge-Information-Matter-Energy.”
According to the General Theory of Information (GTI), “Information is related to knowledge as energy is related to matter. Energy has the potential to create, preserve or modify material structures, while information has the potential to create, preserve or modify knowledge structures. Energy and matter belong to the physical world, whereas information and knowledge belong to the world of ideal structures and are represented in the mental world.”
The genome bridges the material world to the mental world by providing the knowledge to build autopoietic and cognitive processes that deal with information acquisition, processing, and its conversion into knowledge, which provides the fuel for higher-level intelligence. Autopoiesis refers to the behavior of a system that replicates itself and maintains its identity and stability while facing fluctuations caused by external influences. Cognitive behaviors model the system’s state, sense internal and external changes, analyze and predict, and take action to mitigate any risk to the system’s functional fulfillment. A single cell in a biological system is both autopoietic and cognitive. Each cell is endowed with all the knowledge to use metabolism (the conversion of matter and energy) to build the material and mental structures required to execute various life processes. It provides the knowledge to replicate, to assume various roles with specialized functions, and to build composite structures that not only perform specialized functions but also orchestrate the system as a whole. This orchestration maintains the non-functional requirements of stability, safety, security, and survival while fulfilling the functional requirements of interacting with the environment and executing various life processes, including the creation and use of cognitive mental structures that model and interact with the world. Figure 1 summarizes the learnings from the General Theory of Information.
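To make the autopoietic and cognitive loop concrete, here is a minimal sketch in Python of the sense-analyze-predict-act cycle described above. The class, its attributes, and the thresholds are my own illustrative assumptions, not part of the GTI formalism or of any biological model.

```python
from dataclasses import dataclass, field

@dataclass
class AutopoieticCognitiveAgent:
    """Illustrative (hypothetical) agent that maintains its own stability
    (autopoiesis) while modeling and reacting to its environment (cognition)."""
    identity: str
    model: dict = field(default_factory=dict)  # internal model of self and world
    stability: float = 1.0                     # 1.0 = fully stable

    def sense(self, internal: dict, external: dict) -> dict:
        # Gather observations about internal state and external fluctuations.
        return {"internal": internal, "external": external}

    def analyze_and_predict(self, observations: dict) -> float:
        # Estimate the risk that observed fluctuations pose to functional fulfillment.
        return float(observations["external"].get("disturbance", 0.0))

    def act(self, risk: float) -> None:
        # Autopoietic step: restore stability; cognitive step: update the model.
        if risk > 0.5:
            self.stability = min(1.0, self.stability + 0.1)  # self-repair
        self.model["last_risk"] = risk                       # learn from experience

# One pass through the loop for a single hypothetical "cell".
agent = AutopoieticCognitiveAgent(identity="cell-1")
obs = agent.sense(internal={"energy": 0.8}, external={"disturbance": 0.7})
agent.act(agent.analyze_and_predict(obs))
```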
Machine intelligence stems from the stored-program implementation of the Turing Machine, derived from Alan Turing’s observation of how humans use numbers and operations on them. This 5-minute video summarizes my understanding of the evolution of machine intelligence and its relationship to human intelligence.
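As a toy illustration of the stored-program idea (not the actual von Neumann architecture), the sketch below keeps both the “program” and the data it operates on in the same memory and interprets the program with a simple fetch-decode-execute loop; the instruction set and names are assumptions made for illustration only.

```python
# Toy stored-program machine: program and data share one memory, and a
# fetch-decode-execute loop interprets the instructions.
memory = {
    "data": [3, 4, 0],                 # two operands and a result slot
    "program": [("ADD", 0, 1, 2),      # data[2] = data[0] + data[1]
                ("HALT",)],
}

pc = 0  # program counter
while True:
    instruction = memory["program"][pc]
    op = instruction[0]
    if op == "HALT":
        break
    if op == "ADD":
        _, a, b, dest = instruction
        memory["data"][dest] = memory["data"][a] + memory["data"][b]
    pc += 1

print(memory["data"][2])  # prints 7
```

Because the program is itself just data in memory, other programs can inspect or rewrite it, which is the property that process automation and, eventually, learning systems build on.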
Conclusion
I am neither a computer scientist nor a philosopher.
When I came to the United States as a graduate student to study physics, computer science was not yet an academic discipline, and most of the computers in use were operated by physicists, engineers, and mathematicians. I had the privilege of learning physics from some of the great physicists of that time. I did my Ph.D. thesis under the guidance of a well-known solid-state physicist, Walter Kohn, who received the Nobel Prize in Chemistry in 1998. My work involved using computers to solve many-body physics problems, and during my stays at the University of Paris, Orsay, and the Courant Institute in New York, I worked on force-biased Monte Carlo and molecular dynamics simulations, collaborating with eminent physicists and mathematicians including Loup Verlet, D. Levesque, Malvin Kalos, Joel Lebowitz, Geoffrey Chester, and Jerome Percus. Later, my collaboration with Bruce Berne at Columbia University resulted in interesting new approaches to both Monte Carlo and molecular dynamics simulations for studying hydrophobic interaction. Later still, I had the opportunity to join Bell Labs, where I participated in many innovative approaches to expert system development and object-oriented approaches to automating business and operations support systems. Currently, I teach machine learning and cloud computing to graduate students and practice what I teach as CTO at Opos.ai, a healthcare startup helping to bridge the knowledge gap between patients and healthcare professionals. It uses machine intelligence to augment human intelligence by increasing their knowledge and, by predicting the consequences of their possible actions, to reduce the human stupidity that results from self-referential circularity of logic not moored to external reality.
My accidental interest in computer science and the theory behind computation began when I was examining the complexity of deploying, operating, and managing applications, especially in a distributed environment where CAP theorem limitations were becoming significant. The CAP theorem states that a distributed system can deliver only two of three desired characteristics: consistency, availability, and partition tolerance. As demand grew for 24/7 availability and consistency across workloads widely distributed around the globe, the need to circumvent the CAP theorem limitation was becoming obvious. At the same time, I noticed that Prof. Peter Wegner and others were pointing to the limitations of the Turing computing model on which all general-purpose computers were based. The subject was controversial, and a vigorous debate was raging, just as it is now between AI enthusiasts and AI critics. I wrote to a few prominent computer scientists to ask how we could rethink the computing model to overcome the CAP theorem limitations. Unfortunately, they were either busy with their own work or did not have a good answer. It also became clear to me, after reading Penrose’s articulation of the Turing machine and Feynman’s detailed lectures on Turing machines, that all connected Turing machines are sequential and have problems supporting asynchronous and parallel computations, and that the Church-Turing thesis (which is discussed in the video) has limitations. I became an accidental student of computer science and started to study the evolution of computing, its progress, and the role of the general theory of information in relating computing based on sequences of symbols to information processing that goes beyond symbolic computing. Both Peter Wegner and Mark Burgin mentored and shaped my understanding of computer science and its relationship to the material, mental, and digital worlds we live in.
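To illustrate the CAP trade-off mentioned above, here is a minimal, hypothetical sketch: two replicas of a key-value store separated by a network partition must either refuse requests (staying consistent but unavailable) or accept them and diverge (staying available but inconsistent). The class and method names are assumptions for illustration, not any particular database's API.

```python
class Replica:
    """Hypothetical replica of a key-value store used to illustrate CAP."""
    def __init__(self, name: str):
        self.name = name
        self.store = {}

    def write(self, key, value, partitioned: bool, prefer_consistency: bool):
        if partitioned and prefer_consistency:
            # CP choice: reject the write rather than risk divergence.
            raise RuntimeError(f"{self.name}: unavailable during partition")
        # AP choice (or no partition): accept the write locally.
        self.store[key] = value

east, west = Replica("east"), Replica("west")
# During a partition, choosing availability lets both replicas accept writes...
east.write("x", 1, partitioned=True, prefer_consistency=False)
west.write("x", 2, partitioned=True, prefer_consistency=False)
print(east.store, west.store)  # {'x': 1} {'x': 2} -- the replicas now disagree
```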
I am sharing my understanding here so that it may give the next generation of computer scientists and information technology professionals a head start that I did not have. Hopefully, the curious among them will be able to prove or disprove my understanding and contribute to our understanding of the difference between human and machine intelligence.
Here are a few conclusions I came to as a student of the General Theory of Information and a practitioner of machine intelligence applications.
First, current digital symbolic computing and sub-symbolic computing structures are powerful in providing process automation and in creating large knowledge pools from which we can derive insights to make decisions. Figure 2 shows the current state of the art of human and machine intelligence.

Figure 2: Genome-derived human intelligence compared with symbolic-computing-derived machine intelligence. Humans use their wisdom to take advantage of process automation and of the insights from sub-symbolic computing.
Genome-derived human intelligence is a multilayered network of networks with self-regulation and orchestration of trillions of autonomous cellular processes, organized as local, clustered, and global structures that communicate and collaborate through shared knowledge. Each instance is unique, and the knowledge and the autopoietic and cognitive behaviors evolve based on the individual’s unique experiences and history. As a result, each individual, with a unique mental world, interacts with the outside world, including other genome-derived entities as well as the material world.
Self-identity defines the individual, and unique experience and history define the mental structures. Humans have also developed culture, in which groups form societies with social contracts that define the societal genome and lead to collective intelligence.
While individual and collective intelligence have contributed to improving skills and knowledge, leading to higher intelligence and a higher quality of life, both suffer from self-referential circularity and can lead to human stupidity if not moored to external reality using a higher-level logic. Major conflicts of human history have derived from self-referential circularity not moored to external reality, and in many cases, by accident or luck, we have survived annihilation. Human stupidity has nothing to do with the tools humans design and use; it has more to do with the self-referential circularity of their logic.
Machine intelligence, on the other hand, has no self-identity and no self-regulation. It is a collection of programs designed by humans that automate processes and provide insights for decision making by analyzing large pools of information and creating large knowledge pools. These knowledge pools by themselves are not intelligent; it takes other programs written by humans, or humans themselves, to use them as they see fit.
Therefore, the use of these process automation programs and knowledge pools by humans dictates the results, and the results depend on whether that use is biased by human self-referential circularity or is the outcome of a higher-level logic resolving the inconsistencies of lower-level logics. This higher-level self-regulation can only be done by humans at this point, because the machines lack a self-identity or group identity and a self-regulation mechanism.
While humans have the ability to create societal genome-based self-regulation, there seems to be a conflict between autocratic, oligarchic, and democratic mechanisms of self-regulation, which seem to compete to eliminate each other.
However, according to the General Theory of Information, it is possible to create a digital genome that addresses a specific goal by defining the functional requirements, the non-functional requirements, and the best practices from experience, and by executing both cognitive and autopoietic behaviors with real-time self-regulation and knowledge acquisition. Super-symbolic computing with structural machines operating on knowledge structures that are constantly updated by symbolic and sub-symbolic processes will perhaps offer a means to reduce the knowledge gap between the various actors making decisions in real time. Hopefully, transparency based on model-based reasoning will help reduce the knowledge gap and foster confidence.
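The structural machine and digital genome concepts are developed in Burgin's and Mikkilineni's work; the sketch below is only my illustration, with hypothetical names and requirements, of how a digital genome might bundle functional requirements, non-functional requirements, and best practices into a knowledge structure consulted by a real-time self-regulation loop.

```python
# Illustrative (hypothetical) digital genome: a knowledge structure that bundles
# functional requirements, non-functional requirements, and best practices,
# plus a regulation loop that checks them at run time.
digital_genome = {
    "goal": "serve patient queries",
    "functional": {"answer_query": lambda q: f"answer to {q}"},
    "non_functional": {"max_latency_ms": 200, "min_replicas": 2},
    "best_practices": ["encrypt data at rest", "log every decision"],
}

def regulate(system_state: dict, genome: dict) -> list:
    """Compare observed state against the genome's non-functional requirements
    and return corrective actions (the autopoietic/cognitive step)."""
    actions = []
    if system_state["latency_ms"] > genome["non_functional"]["max_latency_ms"]:
        actions.append("scale out")
    if system_state["replicas"] < genome["non_functional"]["min_replicas"]:
        actions.append("replicate")
    return actions

print(regulate({"latency_ms": 350, "replicas": 1}, digital_genome))
# ['scale out', 'replicate']
```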
Whatever path machine intelligence takes as it evolves, the General Theory of Information tells us that digital neural networks implemented using symbolic computing alone will not become super-intelligent by developing higher-level reasoning by induction, deduction, and abduction. Current symbolic and sub-symbolic computing structures are limited by the shortfalls discussed in the video, and they can easily be exploited by human greed and power-mongering. A recent attempt to pervert language (which is a carrier of information that has the potential to create or update the knowledge of the receiver) seems to be a popular weapon used by those wielding power with autocratic and oligarchic regulation. Unfortunately, there is no antidote for human stupidity. My only hope is that the next generation of computer scientists and information technology professionals will develop digital genome-based systems (using super-symbolic computing) that reduce the knowledge gap between the various actors involved in decision making, help expose the pitfalls of self-referential circularity, and suggest ways to move forward with higher levels of intelligence to combat human stupidity.