From Prompt to Purpose: Toward Mindful Machines and the Architecture of Meaningful Intelligence

🔊 Audio: A Deep Dive on Mindful Machines

Video: Implementing Intuition and Insight in Digital Machines with Post-Turing Computing Models

Abstract

The rapid proliferation of large language models (LLMs) like GPT has catalyzed a transformation in artificial intelligence. These systems demonstrate linguistic fluency, scalability, and modularity, yet remain limited by their lack of memory, embodiment, intentionality, and meta-cognition. This paper introduces Mindful Machines—a novel class of AI systems designed not merely to simulate intelligence, but to participate in meaning-making. Grounded in the General Theory of Information (GTI), 4E cognition, and digital genome encoding, Mindful Machines aim to overcome the epistemic, architectural, and ethical limitations of current generative and agentic paradigms. We compare existing approaches, explore the architectural components of Mindful Machines, and articulate a vision for intelligence that is structurally adaptive, ethically coherent, and teleonomically guided.

1. Introduction: The Limits of Generative AI

LLMs excel at producing text, completing prompts, and aligning multimodal inputs—but these capabilities mask structural deficiencies:

  • No persistence of memory: LLMs operate within short context windows, lacking continuity across interactions.
  • No grounded intentionality: They follow prompts but have no goals of their own.
  • No meta-cognition: They cannot reflect on their output or improve themselves autonomously.
  • No embodiment: They are not situated in any environment or system of consequences.

These limitations render LLMs competent without consciousness, powerful yet shallow simulators of intelligence.

2. Reframing Intelligence: From Tokens to Teleonomy

Mindful Machines represent a fundamental rethinking of intelligence. They are not just tools but synthetic selves—systems that perceive, remember, plan, and evolve.

Key principles include:

  • Teleonomy: Purpose emerges from internal structure and self-regulating logic—not from human-imposed tasks.
  • Structural coupling: Like biological organisms, these systems adapt their form and function in tandem with environmental feedback.
  • Narrative coherence: Truth is contextual, grounded in memory and identity—not raw data correlation.

This shift moves AI from task-based automation to epistemic orchestration—where systems do not just generate content, but construct and refine meaning over time.

3. The Architecture of Mindful Machines

Mindful Machines Architecture

A. Digital Genome

At the core is a digital genome: a formal specification encoding the system’s modular design, interaction rules, memory architecture, and evolutionary capabilities. It plays four major roles:

  1. Structure: Defines forms, functions, and dependencies.
  2. Memory schema: Stores knowledge across semantic, episodic, and causal layers.
  3. Adaptation logic: Guides repair, replication, and recomposition.
  4. Goal orchestration: Enables teleonomic behavior by aligning submodules with emergent goals.
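
To make the four roles above concrete, here is a minimal, hypothetical sketch of how a digital genome specification might be encoded. The names (`DigitalGenome`, `ModuleSpec`, `MemorySchema`) and fields are illustrative assumptions, not drawn from any published implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch of a digital genome specification; class and field
# names are illustrative, not part of the Mindful Machines specification.

@dataclass
class ModuleSpec:
    name: str                      # functional role of the module
    provides: List[str]            # functions it exposes
    depends_on: List[str]          # structural dependencies

@dataclass
class MemorySchema:
    semantic: Dict[str, str] = field(default_factory=dict)   # concepts and definitions
    episodic: List[dict] = field(default_factory=list)       # time-stamped events
    causal: List[tuple] = field(default_factory=list)        # (cause, effect) links

@dataclass
class DigitalGenome:
    structure: List[ModuleSpec]            # 1. forms, functions, and dependencies
    memory: MemorySchema                   # 2. semantic/episodic/causal layers
    adaptation_rules: Dict[str, Callable]  # 3. repair, replication, recomposition
    goals: List[str]                       # 4. emergent goals for orchestration

    def orchestrate(self) -> Dict[str, List[str]]:
        """Align each submodule with the goals it can serve (toy teleonomy)."""
        return {goal: [m.name for m in self.structure if goal in m.provides]
                for goal in self.goals}
```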

B. 4E Cognition

Mindful Machines implement 4E cognition:

  • Embodied: They are grounded in sensory-like data streams (e.g., video, audio, logs) and act within environments.
  • Embedded: Their cognition is shaped by contextual variables and relational states.
  • Enactive: They learn meaning through action and feedback.
  • Extended: Their cognition spans internal modules (e.g., memory, planners) and external tools (e.g., APIs, databases).

LLMs serve as useful cortical modules—interpreting text, summarizing state, or generating hypotheses—but they are orchestrated by a larger adaptive system.

C. Cognizing Oracles

These are meta-cognitive agents that:

  • Evaluate internal states and memories.
  • Reframe interpretations based on shifting context.
  • Track narrative coherence and ethical consistency.
  • Guide self-repair and self-improvement.

They replace the brittle logic of agentic systems with reflective adaptation grounded in context.
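
As a rough illustration of this reflective loop, the sketch below assumes a toy coherence metric and a list of ethically disallowed actions; both are placeholders chosen for the example, not part of the Mindful Machines design.

```python
# Minimal sketch of a cognizing-oracle step (illustrative only): the oracle
# reviews recent episodic memory, scores narrative coherence, and proposes
# a repair action when coherence or ethical constraints drift.

from typing import Dict, List

def coherence_score(episodes: List[Dict]) -> float:
    """Toy metric: fraction of episodes whose outcome matched the stated goal."""
    if not episodes:
        return 1.0
    matched = sum(1 for e in episodes if e.get("outcome") == e.get("goal"))
    return matched / len(episodes)

def cognizing_oracle(episodes: List[Dict], disallowed_actions: List[str]) -> Dict:
    score = coherence_score(episodes)
    violations = [e for e in episodes if e.get("action") in disallowed_actions]
    if violations:
        return {"decision": "self-repair", "reason": "ethical constraint violated"}
    if score < 0.5:
        return {"decision": "reframe", "reason": f"low narrative coherence ({score:.2f})"}
    return {"decision": "continue", "reason": f"coherent ({score:.2f})"}

# Example usage with hypothetical episodes:
history = [
    {"goal": "inform", "action": "summarize", "outcome": "inform"},
    {"goal": "inform", "action": "speculate", "outcome": "mislead"},
]
print(cognizing_oracle(history, disallowed_actions=["fabricate"]))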

4. Comparative Overview

| Dimension | LLMs (Gen-AI) | Agentic AI | Mindful Machines |
|---|---|---|---|
| Architecture | Token sequence prediction | Goal-driven planning | Genome-encoded modular systems |
| Memory | Stateless context windows | Task-state tracking | Semantic, episodic, and causal memory |
| Intentionality | Prompt-aligned | Externally defined goals | Teleonomic (structure-driven goals) |
| Meta-Cognition | Simulated via prompts | None or limited task-level logic | Reflexive self-models and oracles |
| Ethics & Ontology | Symbolic but shallow | Constraint-based heuristics | Causal-symbolic reasoning with feedback |
| Resilience | Fragile, needs retraining | Fails with recursion | Self-adaptive, narrative-stable |
| Scalability | Horizontal (replication) | Multi-agent coordination | Holarchic, structurally coherent growth |
| Cognition | Surface fluency | Tactical control | 4E-based, improvisational, contextual |
| Embodiment | Disembodied | Limited interface use | Fully situated, interface-aware |

5. Implementing 4E Cognition with LLMs and Sensor Integration

LLMs can play an essential role in mindful architectures when embedded within a multimodal, memory-rich, event-driven system:

  • Text, audio, video inputs are transformed into structured representations using LLMs as perceptual interpreters.
  • These representations are linked to causal and episodic memory scaffolds, allowing the system to track “what happened,” “why it mattered,” and “how it shaped future decisions.”
  • Cognizing oracles evaluate this evolving memory to refine behavior and learning strategies.
  • The system becomes interactive, self-reflective, and goal-modifying—capabilities far beyond what LLMs can do alone.

This is how cognitive modularity, intentional behavior, and narrative learning are achieved—not with more parameters, but with structural coherence and reflexive design.
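
A minimal sketch of this event loop is shown below. The function `interpret_with_llm` is a stand-in for any LLM call (a real system would invoke a model API); the memory scaffolds and the oracle-like policy are deliberately simplified assumptions for illustration.

```python
# Illustrative sketch of the loop described above: an LLM-style perceptual
# interpreter feeds episodic and causal memory, and behaviour is chosen from
# accumulated history rather than from a single prompt.

import time
from typing import Dict, List

def interpret_with_llm(raw_observation: str) -> Dict:
    """Placeholder perceptual interpreter: turn raw input into a structured event."""
    return {"summary": raw_observation[:80],
            "salience": "high" if "error" in raw_observation else "low"}

episodic_memory: List[Dict] = []     # "what happened"
causal_memory: List[tuple] = []      # "why it mattered": (cause, effect) links

def perceive_and_remember(raw: str) -> Dict:
    event = interpret_with_llm(raw)
    event["t"] = time.time()
    if episodic_memory:                                  # link to the previous event
        causal_memory.append((episodic_memory[-1]["summary"], event["summary"]))
    episodic_memory.append(event)
    return event

def choose_next_action() -> str:
    """Oracle-like step: the decision depends on history, not on one prompt."""
    recent_salient = [e for e in episodic_memory[-5:] if e["salience"] == "high"]
    return "investigate" if recent_salient else "observe"

for raw in ["sensor ok", "disk error detected", "sensor ok"]:
    perceive_and_remember(raw)
    print(choose_next_action())
```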

6. Enterprise Implications: From Automation to Epistemic Collaboration

Gen-AI offers:

  • Cheap, scalable summaries.
  • Natural language interfaces.
  • Prompt-tuned solutions.

Agentic AI offers:

  • Rule-based process automation.
  • Modular execution pipelines.
  • Task resilience within narrow bounds.

Mindful Machines offer:

  • Adaptive co-evolution with enterprise goals.
  • Causal reasoning with narrative grounding.
  • Human-aligned decision-making and ethical continuity.

Example: In healthcare, a genome-guided assistant evolves with a patient’s narrative history, delivering proactive care insights—not just reactive diagnosis. It collaborates, reasons, and adjusts, much like a human caregiver.

7. From Critique to Constructive Transformation

Skeptics rightly question AI systems’ depth, ethics, and safety. Mindful Machines directly address these concerns:

  • Not black boxes: they feature transparent, modular structure.
  • Not reactive tools: they enact persistent, modifiable goals.
  • Not static models: they evolve through experience and feedback.
  • Not ethically naïve: they embed meaning, coherence, and pluralistic logic in decision frameworks.

They do not reject current technologies. Instead, they repurpose them within a more rigorous and biologically inspired framework.

8. Conclusion: Toward Participatory Intelligence

In the age of Gen-AI, the dominant question is: “What can AI generate?”

In the age of Mindful Machines, the question becomes:
“What can AI understand, remember, and responsibly decide?”

By combining structural memory, reflexive models, digital genomes, and 4E cognition, Mindful Machines chart a path beyond prompt-based mimicry. They promise not just smarter software, but synthetic entities that participate in meaning-making, co-evolve with human systems, and guide the next phase of technological intelligence.

The challenge ahead is not just technical—it is epistemological, ethical, and architectural.

From Cells to Code and Chips: Integrating Enterprise Processes with Digital Genome

When answering the question ‘Which came first, the chicken or the egg?’ it is often said that the chicken is simply an egg’s way of making another egg.

Dyson, G. Darwin Among the Machines: The Evolution of Global Intelligence, Basic Books, New York, 1997.

He also says:

“Metabolism and replication, however intricately they may be linked in the biological world as it now exists, are logically separable. It is logically possible to postulate organisms that are composed of pure hardware and capable of metabolism but incapable of replication. It is also possible to postulate organisms that are composed of pure software and capable of replication but incapable of metabolism.”

This observation has profound implications for how we build intelligent machines from code and microchip circuits. Decoupling pure software, which is capable of replication but incapable of metabolism, from pure hardware, which is capable of metabolism but incapable of replication, suggests new possibilities: combining the two to build a silicon-based society of software components that is autopoietic and meta-cognitive, much like the society of cells in a carbon-based biological system.

The General Theory of Information and the Burgin-Mikkilineni Thesis show a path. In this post, we examine the Digital Genome derived from the General Theory of Information and how to use it to build autopoietic and meta-cognitive distributed software societies.

Introduction

The quest to understand life has fascinated humanity for centuries, inspiring philosophers, scientists, and thinkers to explore the essence of existence. This post delves into the current understanding of life, drawing on historical and philosophical contexts, and explores the potential for machines to exhibit life-like properties. By leveraging the digital genome paradigm derived from the General Theory of Information (GTI), we can create an enterprise system that mimics the body, brain, and mind functions of biological systems, enhancing efficiency, adaptability, and scalability. The purpose of this post is fourfold:

  1. Understand the difference between material structures and biological structures and how the properties of “life” differentiate them,
  2. Study the role of the genome, associative memory, and the event-driven interaction history of the system’s components, and the roles of body, brain, and mind, using the General Theory of Information,
  3. Use the digital genome derived from the General Theory of Information to build a digital “body, brain, and mind,” transforming end-to-end business visibility and control, and
  4. Gain insights from the development of autopoietic and meta-cognitive distributed software applications that leverage the digital genome for specification, design, deployment, and operation—integrating both deep learning (sub-symbolic) and algorithmic (symbolic) computing structures.

Properties of Life

The exploration of life dates back to ancient civilizations. Greek philosophers like Aristotle pondered the nature of living beings, proposing that life is characterized by growth, reproduction, and the ability to respond to stimuli. In the 17th century, René Descartes introduced the concept of dualism, distinguishing between the mind and the body, and suggesting that life involves both physical and mental processes.

In the 20th century, the discovery of DNA revolutionized our understanding of life. The double helix structure, elucidated by James Watson and Francis Crick, revealed the genetic blueprint that governs the development and functioning of living organisms. This breakthrough laid the foundation for modern biology and genetics, providing insights into the mechanisms of life.

Erwin Schrödinger, in his seminal work “What is Life?” published in 1944, approached the question from a physicist’s viewpoint. Schrödinger proposed that life is governed by the laws of physics and chemistry, yet it exhibits unique properties that distinguish it from non-living matter. He introduced the concept of an “aperiodic crystal,” suggesting that genetic information is stored in a stable yet complex molecular structure. Schrödinger’s ideas influenced key figures in molecular biology, including Watson and Crick, and helped pave the way for the discovery of DNA.

Today, life is generally characterized by self-replication, metabolism, and evolution. It involves processes such as growth, response to stimuli, and reproduction. Modern definitions emphasize the role of communication and network creation among cells, viruses, and RNA networks. These processes are underpinned by the exchange of matter and energy, guided by information encoded in genetic material.

Addy Pross, in his book “What is Life?: How Chemistry Becomes Biology,” describes life as a continuous chemical process governed by principles of stability and complexity. He argues that Darwinian evolution is a biological expression of a deeper chemical principle, where replicating molecules tend to become more complex and acquire the properties of life. Pross’s perspective highlights the dynamic nature of life, driven by chemical interactions that lead to increased complexity and stability.

Energy and Matter: The universe is made up of energy and matter, with transformation rules governed by the laws of nature. Energy, with its boundless potential, is the architect of matter, shaping and modifying its structure while altering the entropy of the system. Information, in parallel, describes the state of a system and its changes, guiding the evolution of matter through the laws of transformation. These laws, whether quantum or classical, are encapsulated in the Schrödinger equation or Hamilton’s canonical equations, dictating the behavior of ideal structures. Ideal structures are theoretical models used in science to describe how systems behave under perfect conditions.
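
For reference, the ideal structures mentioned here are the standard equations of motion: Hamilton's canonical equations in the classical case and the time-dependent Schrödinger equation in the quantum case.

```latex
% Hamilton's canonical equations (classical case)
\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q_i}

% Time-dependent Schrödinger equation (quantum case)
i\hbar \, \frac{\partial}{\partial t} \psi(\mathbf{r}, t) = \hat{H}\, \psi(\mathbf{r}, t)
```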

Thermodynamics and Stability – Seeking Equilibrium: The evolution of energy and entropy within a system adheres to the immutable laws of thermodynamics. Systems strive for equilibrium, seeking states of energy minima. When multiple minima exist, transitions occur based on the interactions of the system’s components. These phase transitions, driven by fluctuations in energy and entropy, exemplify the system’s adaptive nature, moving from one energy minimum to another in a phenomenon known as emergence.

Complex Adaptive Systems – The Role of Entropy: Emergence is a hallmark of complex adaptive systems, where fluctuations in interactions lead to changes in energy and entropy, propelling the system from one stable state to another. These transitions are often beyond the control of the system, influenced by external forces that alter the interactions between components and their environment. Entropy, a measure of structural order, evolves as the system adapts to new conditions.

Biological Systems – Mastering Entropy: Biological systems, with their unique properties, manage entropy through energy exchanges within the system and with their environment. GTI posits that information is the bridge between the material world and the mental world of biological systems. These systems have evolved to create and update knowledge, transforming information from the material world into mental structures.

This table summarizes the differences between complex adaptive systems and biological systems:

| Feature | Complex Adaptive Systems | Biological Systems |
|---|---|---|
| Control | Decentralized | Often decentralized, but with internal regulatory subsystems (e.g., homeostasis) |
| Adaptation | Reactive, via feedback loops | Adaptive through feedback and proactive responses |
| Emergence | Yes | Yes, but often more structured and purposeful |
| Self-Regulation | Limited to system-level feedback | Actively maintain internal balance (e.g., temperature, pH) |
| Self-Reflection | Absent – agents do not possess awareness | Present in higher organisms (e.g., humans) capable of learning, planning, introspection |

The Genome – Blueprint of Life: The genome, a repository of knowledge, provides the instructions for creating, operating, and managing biological processes. It specifies functional and non-functional requirements, best practices for energy and entropy management, and ensures stability and survival. Biological systems inherit this knowledge, using it to build, operate, and self-regulate their structures, maintaining stability while interacting with their environment.

Information and Knowledge – The Essence of Existence: Information has the potential to create or modify knowledge in biological systems. From cradle to grave, we are shaped by the information we receive, converting it into knowledge that guides our actions and interactions. Our genome equips us with the foundational knowledge to manage our existence, while our experiences and perceptions continually update our mental structures.

General Theory of Information: Mark Burgin’s General Theory of Information (GTI) provides a comprehensive framework for understanding information in biological systems. GTI bridges the material world of matter and energy with the mental worlds of biological systems, emphasizing the role of information and knowledge in maintaining life processes. Information, in this context, is not just data but a fundamental component that guides the organization and functioning of living systems.

Genomics and the Blueprint of Life Processes: The genome, an organism’s complete set of genetic material, serves as the blueprint for its development, functioning, and adaptation. The genetic code embedded within the genome enables computational models and analytical techniques to decode, interpret, and manipulate genetic information. Advances in genomics have significantly enhanced our ability to understand and engineer genomes, offering deeper insights into the fundamental mechanisms of life. These breakthroughs have paved the way for innovative biotechnologies, including precision medicine, synthetic biology, and genetic engineering, transforming fields such as healthcare, agriculture, and environmental science.

Digital Genome in GTI: The digital genome, as described using the General Theory of Information (GTI), is a digital specification of operational knowledge that defines and executes the life processes of distributed software applications. It includes functional requirements, non-functional requirements, and best-practice policies to maintain system behavior. This digital genome specifies the operational processes that design, deploy, operate, and manage applications, ensuring they can self-regulate and adapt to changing conditions. By integrating code and circuits, the digital genome enables machines to perform complex tasks autonomously, mimicking biological processes.

Autopoiesis: Autopoiesis, introduced by Humberto Maturana and Francisco Varela, describes systems capable of self-production and maintenance. Living cells are prime examples of autopoietic systems, continuously regenerating their components to sustain themselves. Autopoiesis emphasizes the self-organizing nature of life, where systems maintain their structure and function through internal processes.

Metacognition: Metacognition involves awareness and understanding of one’s own thought processes. It includes reflecting on how we think and using strategies to improve problem-solving and learning. Metacognition is crucial for self-regulation and cognitive development, enabling organisms to adapt to changing environments and optimize their behavior.

Relationship Between Matter, Energy, Information, and Knowledge: The relationship between matter, energy, information, and knowledge is fundamental to understanding life. Matter and energy are the physical substrates, while information and knowledge guide the organization and functioning of living systems. This interplay is crucial for the self-regulation and evolution of life. Life can be seen as a process of exchanging energy to lower entropy, maintaining stability, and achieving a purpose designed in the genome through the application of knowledge.

Observer and the Observed: The concept of the observer and the observed, explored by philosophers like J. Krishnamurti, emphasizes the role of perception in understanding reality. It suggests that the observer’s perspective shapes their experience of the observed, highlighting the importance of consciousness in defining life. This perspective underscores the subjective nature of life, where the observer’s knowledge and awareness influence their understanding of the world.

Body, Brain, and Mind Analogy:

Body:

  • Biological Systems: The body processes and executes the tasks given to it by interacting with its environment. It performs physical actions, responds to stimuli, and maintains homeostasis.
  • Digital Genome-Driven Machines: Software performs tasks that interact with its environment. It processes data, executes algorithms, and adapts to changes in input and conditions.

Brain:

  • Biological Systems: The brain uses neural networks to receive information and convert it into knowledge through 4E cognition (embodied, embedded, enacted, and extended cognition). It stores information as associative memory and event-driven interaction history.
  • Digital Genome-Driven Machines: Intelligent machines use deep learning neural networks to create knowledge from information derived from text, images, audio, and video. This knowledge is stored in the form of optimized parameters of a neural network. Digital genome-based machines also create associative memory and event-driven interaction history.

Mind:

  • Biological Systems: The mind uses memory to execute concurrent processes and tasks using the body and brain. It integrates sensory inputs, cognitive processes, and motor actions to achieve complex behaviors.
  • Digital Genome-Driven Machines: Digital genome-based systems execute autopoietic (self-producing) and metacognitive (self-reflective) processes concurrently to execute tasks using memory. These systems can self-regulate, adapt, and optimize their performance based on real-time data and historical interactions.

This table summarizes the properties of life.

Implications for Artificial Life

The understanding of life has profound implications for artificial life (ALife), which involves creating systems that exhibit characteristics of living organisms. Artificial life research explores the nature of life by modeling and synthesizing living systems, ranging from software simulations to biochemical systems. These systems can potentially exhibit self-maintenance, growth, reproduction, and adaptation, challenging traditional definitions of life and consciousness.

Infusing Life into Machines

Our knowledge of what constitutes life allows us to infuse life into machines by designing systems that mimic the properties of living organisms. This involves creating machines that can self-replicate, adapt to their environment, and maintain stability through energy exchange. Advances in synthetic biology and robotics have enabled the development of artificial cells and autonomous robots that can perform complex biological functions. To infuse life into machines, we must integrate principles of autopoiesis, metacognition, and information theory. Machines can be designed to self-produce and maintain their components, similar to living cells. Incorporating metacognitive capabilities allows machines to reflect on their processes and optimize their behavior. Information theory provides the framework for encoding and processing the knowledge required for machines to achieve their purpose and maintain stability.

Self-Replication in Machines

Self-replication in machines refers to the ability of a machine to autonomously reproduce itself using raw materials found in its environment. This concept, first proposed by John von Neumann, involves creating machines that can build copies of themselves, much like biological organisms. Self-replicating machines could be used in applications such as space exploration, where they would build infrastructure using local resources. Today, intelligent machine replication can be viewed as software replication plus the selection of available cloud hardware (IaaS and PaaS) required to execute the application. Decoupling application replication and self-regulation from the hardware infrastructure and services required to execute the application is a major change from today’s implementations. As Dyson observed, we are combining components composed of pure software, capable of replication but incapable of metabolism, with hardware that is capable of metabolism but incapable of replication.

Software Replication and Autopoiesis

Software replication is relatively straightforward compared to hardware replication. Software can be easily copied and distributed across multiple systems. To make software autopoietic, it must be designed to maintain and reproduce its own structure. This involves creating systems that can monitor their own state, repair themselves, and adapt to changes in their environment.

Metacognition in software involves the ability of software systems to reflect on their own processes and make adjustments. This self-awareness allows software to optimize its performance, correct errors, and adapt to new conditions. By integrating metacognitive capabilities, software can become more resilient and intelligent, much like living organisms.
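
The following sketch shows one way such a self-monitoring, self-reproducing loop could be structured. It assumes a simple component/heartbeat model; all names and the random failure model are illustrative, not a description of any existing system.

```python
# Minimal sketch of an autopoietic supervision loop (illustrative): each
# component reports health; the system repairs failed parts and can
# replicate itself from its own specification, without external control.

import copy
import random
from typing import Dict

class Component:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

    def heartbeat(self) -> bool:
        self.healthy = random.random() > 0.2     # simulate occasional failures
        return self.healthy

class AutopoieticSystem:
    def __init__(self, spec: Dict[str, dict]):
        self.spec = spec                                      # its own blueprint
        self.components = {name: Component(name) for name in spec}

    def maintain(self) -> None:
        """Monitor, repair, and keep the structure aligned with the specification."""
        for name, comp in self.components.items():
            if not comp.heartbeat():
                print(f"repairing {name}")
                self.components[name] = Component(name)       # self-repair
        for name in self.spec:                                # self-production
            if name not in self.components:
                self.components[name] = Component(name)

    def replicate(self) -> "AutopoieticSystem":
        """Reproduce a new instance from the same specification."""
        return AutopoieticSystem(copy.deepcopy(self.spec))

system = AutopoieticSystem({"ingest": {}, "plan": {}, "act": {}})
for _ in range(3):
    system.maintain()
clone = system.replicate()
```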

Hardware Replication and Redundancy

Hardware replication is more challenging due to the physical nature of components. However, redundancy can be used to achieve similar results. Redundancy involves having multiple copies of critical components to ensure that the system remains functional even if some components fail. This approach is commonly used in data centers and critical infrastructure to enhance reliability and fault tolerance. The hardware required to execute the software is available in multiple cloud sources on demand with elasticity and ubiquity. The knowledge of how and where to get these resources allows the provisioning of required hardware for the software components designed through the digital genome specification.

Biological Analogy: Redundancy and Material Replacement

In biological systems, redundancy and material replacement are essential for maintaining reliable functions despite the inherent unreliability of individual components. For example, gene redundancy provides a backup system that enhances an organism’s resilience to mutations or environmental changes. Multiple genes with overlapping functions ensure that essential biological processes continue even if one gene is compromised.

Metabolism in biological systems involves the continuous replacement and repair of cellular components. Cells use metabolic processes to convert nutrients into energy and building blocks, which are then used to repair and replace damaged or worn-out parts. This constant renewal ensures that cells remain functional and can adapt to changing conditions.

Digital Genome, Associative Memory, and Event-Driven Interaction History

The digital genome, as described using GTI, specifies the operational knowledge required for distributed software applications to self-regulate and adapt. Associative memory in intelligent machines mimics the human brain’s ability to link concepts and retrieve information based on associations. Event-driven interaction history allows machines to dynamically update their state based on real-time events, enhancing their adaptability and responsiveness.

These concepts bring intelligent machines closer to biological systems by enabling them to self-regulate, adapt, and maintain stability through continuous learning and interaction with their environment. By integrating these principles, we can create machines that exhibit life-like properties, enhancing their resilience, adaptability, and intelligence.
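
A toy sketch of these two structures, with hypothetical event fields and tags, might look like this: events are appended as they occur, and retrieval works by association (shared tags) rather than by fixed addresses.

```python
# Illustrative sketch (not a published implementation) of associative memory
# plus an event-driven interaction history.

import time
from collections import defaultdict
from typing import Dict, List

class EventHistory:
    """Append-only, time-stamped record of interactions."""
    def __init__(self) -> None:
        self.events: List[Dict] = []

    def record(self, actor: str, action: str, tags: List[str]) -> Dict:
        event = {"t": time.time(), "actor": actor, "action": action, "tags": tags}
        self.events.append(event)
        return event

class AssociativeMemory:
    """Retrieve past events by shared associations instead of exact keys."""
    def __init__(self) -> None:
        self.index = defaultdict(list)          # tag -> events carrying that tag

    def associate(self, event: Dict) -> None:
        for tag in event["tags"]:
            self.index[tag].append(event)

    def recall(self, cue: str) -> List[Dict]:
        return self.index.get(cue, [])

history, memory = EventHistory(), AssociativeMemory()
memory.associate(history.record("patient", "reported symptom", ["fever", "fatigue"]))
memory.associate(history.record("assistant", "suggested test", ["fever", "blood panel"]))
print([e["action"] for e in memory.recall("fever")])   # both fever-related events
```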

Computer and the Computed

Just as with the observer and the observed, the integration of the computer and the computed addresses foundational limitations of current machine intelligence implementations based on the stored-program computing model. Traditional computing models are often static and sequential, with instructions executed in a predetermined order. This sequential nature limits the flexibility and adaptability of the system. The mind, on the other hand, requires concurrent processes operating together to execute multiple tasks.

Purpose Driving Life

Life is a multifaceted phenomenon defined by a system’s purpose and its ability to achieve that purpose through knowledge of functional and non-functional requirements, best practices, and energy exchange. The interplay of matter, energy, information, and knowledge is central to maintaining stability and lowering entropy. By integrating insights from chemistry, biology, information theory, and philosophy, we gain a deeper understanding of the essence of life and the principles that govern its existence.

The digital genome paradigm bridges the gap between human and machine intelligence by creating a synergy between the body, brain, and mind of computing systems. This approach significantly reduces complexity, improves scalability, and enhances efficiency by embedding self-regulation and metacognition within the system itself. By maintaining a rich history of events and relationships, intelligent machines can make informed decisions, allocate resources effectively, and respond to changes in real-time.

As noted above, integrating the computer and the computed addresses the static, sequential limitations of the stored-program computing model: it involves asynchronous communication and dynamic interaction between components, allowing for more resilient and intelligent systems.

By drawing inspiration from biological systems and leveraging the digital genome, associative memory, and event-driven interaction history, we can create machines that exhibit life-like properties. These machines can self-regulate, adapt, and evolve, much like biological organisms, redefining the boundaries of existence and opening new possibilities for artificial life. This comprehensive understanding of life, both biological and artificial, underscores the potential for creating intelligent systems that not only mimic but also enhance the capabilities of living organisms.

Bridging Human and Machine Intelligence: The Promise of the Digital Genome

The digital genome paradigm bridges the gap between human and machine intelligence by creating a synergy between the body, brain, and mind of computing systems:

Body: In biological systems, the body interacts with the external world through senses and the nervous system. Similarly, in digital systems, process execution interacts with the external world using data structures and knowledge. This interaction allows the system to gather information and respond to changes in its environment.

Brain: The brain uses neural networks to provide 4E cognition (embodied, embedded, enacted, and extended cognition). In digital systems, deep learning algorithms optimize neural network parameters to achieve similar cognitive capabilities. This enables the system to process complex information, learn from experiences, and make informed decisions.

Mind: The mind consists of concurrent processes that implement autopoietic and meta-cognitive functions. These processes work off associative memory and event-driven interaction history. This allows the system to reflect on its actions, learn from past interactions, and adapt its behavior to optimize future outcomes.

By integrating these elements, the digital genome paradigm creates a holistic and dynamic system that can self-regulate, adapt, and evolve, much like biological organisms.
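
One way to picture this synergy is as concurrent processes sharing a memory store. The sketch below uses Python's asyncio purely as an illustration; the roles assigned to `body`, `brain`, and `mind` are simplified assumptions, not the paradigm's actual implementation.

```python
# Illustrative sketch of the body/brain/mind split as concurrent processes:
# the "body" executes tasks, the "brain" turns observations into knowledge,
# and the "mind" runs an autopoietic/metacognitive loop alongside them.

import asyncio

memory: list[dict] = []          # shared associative/episodic store (simplified)

async def body(task_queue: asyncio.Queue) -> None:
    while True:
        task = await task_queue.get()
        memory.append({"event": f"executed {task}"})          # act and report
        task_queue.task_done()

async def brain() -> None:
    while True:
        await asyncio.sleep(0.1)
        for event in memory:
            event.setdefault("interpreted", True)             # events become knowledge

async def mind(task_queue: asyncio.Queue) -> None:
    for _ in range(3):                                        # metacognitive loop
        await asyncio.sleep(0.2)
        unfinished = [e for e in memory if not e.get("interpreted")]
        if unfinished:
            await task_queue.put("self-repair")               # autopoietic adjustment
        else:
            await task_queue.put("routine task")

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    workers = [asyncio.create_task(body(queue)), asyncio.create_task(brain())]
    await mind(queue)
    await queue.join()
    for worker in workers:
        worker.cancel()

asyncio.run(main())
```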

Reducing Complexity, Improving Scalability, and Enhancing Efficiency

The digital genome approach significantly reduces complexity by embedding self-regulation and meta-cognition within the system itself. This eliminates the need for external management layers, streamlining operations and reducing the potential for bottlenecks. Systems can autonomously manage their functions, adapt to changes, and optimize their behavior based on past experiences.

Scalability is improved because the digital genome provides a modular and flexible framework that can be easily extended and adapted. New functionalities and components can be integrated without disrupting the existing system, allowing for seamless growth and evolution.

Efficiency is enhanced through the system’s ability to learn from its interactions and optimize its behavior. By maintaining a rich history of events and relationships, the system can make informed decisions, allocate resources effectively, and respond to changes in real-time.

Resiliency is also a key benefit. The self-regulating nature of digital genome-based systems ensures that they can maintain stability and continuity even in the face of disruptions. The system’s ability to adapt and evolve makes it robust and capable of handling unforeseen challenges.

Example Implementations

VoD Service with Associative Memory and Event-Driven Interaction History: A Video-on-Demand (VoD) service can leverage the digital genome approach to enhance user experience and operational efficiency. By using associative memory, the service can remember user preferences, viewing habits, and interactions. Event-driven transaction history allows the system to track user behavior over time, optimizing content recommendations and personalizing the user interface. The system can adapt to changing user preferences, ensuring a dynamic and engaging experience.
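
A toy illustration of this idea, with hypothetical users, titles, and genre tags, could look like the following. It is a sketch of the associative-recommendation pattern described above, not a production recommender.

```python
# Toy sketch (illustrative only) of how a VoD service might turn an
# event-driven viewing history into associative recommendations.

from collections import Counter, defaultdict
from typing import Dict, List

viewing_history: List[Dict] = []                          # event-driven interaction history
genre_index: Dict[str, Counter] = defaultdict(Counter)    # user -> genre affinities

def record_view(user: str, title: str, genres: List[str]) -> None:
    viewing_history.append({"user": user, "title": title, "genres": genres})
    for genre in genres:
        genre_index[user][genre] += 1                     # strengthen the association

def recommend(user: str, catalog: Dict[str, List[str]], k: int = 2) -> List[str]:
    """Rank unseen titles by overlap with the user's strongest genre associations."""
    seen = {e["title"] for e in viewing_history if e["user"] == user}
    scores = {
        title: sum(genre_index[user][g] for g in genres)
        for title, genres in catalog.items() if title not in seen
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

record_view("alice", "Space Saga", ["sci-fi", "adventure"])
record_view("alice", "Deep Dive", ["documentary", "science"])
catalog = {"Star Quest": ["sci-fi"], "Cooking 101": ["lifestyle"], "Ocean World": ["documentary"]}
print(recommend("alice", catalog))
```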

Medical-Knowledge-Based Digital Assistant: In the healthcare domain, a digital assistant powered by the digital genome can bridge the knowledge gap between patients and doctors. The assistant uses associative memory to store medical knowledge and patient history, while event-driven transaction history tracks interactions and updates. This enables the assistant to provide accurate and context-aware information, assist in early diagnosis, and support decision-making. The system can learn from each interaction, improving its recommendations and enhancing patient care over time.

Conclusion

The figure represents how the body, brain, and mind are related in producing human intelligence.

The digital genome approach represents a paradigm shift in computing, moving from static, externally managed systems to dynamic, self-regulating entities. By integrating autopoiesis, meta-cognition, associative memory, and event-driven transaction history, digital genome-based systems can adapt, learn, and optimize their behavior. This approach bridges the gap between human and machine intelligence by creating a synergy between the body, brain, and mind of computing systems. It reduces complexity, improves scalability, enhances efficiency, and ensures high resiliency, making it a promising solution for the challenges of modern computing.

This picture shows the Digital Genome Paradigm

Prototype Demonstrations of Digital Genome Implementations

The Digital Genome concept has been implemented to demonstrate its feasibility, versatility, and potential to transform various domain-specific use cases:

  1. Video on Demand (VoD) Service: In the context of VoD systems, the digital genome specifies the operational processes and best practices for designing, deploying, and managing distributed applications. By integrating associative memory and event-driven interaction history, these systems can dynamically adapt to changing conditions and user interactions, ensuring a seamless streaming experience. The digital genome acts as a blueprint, guiding the system’s behavior and interactions, much like a biological genome guides an organism’s development. This approach enables VoD systems to maintain structural stability and optimize content delivery, providing users with a high-quality, personalized viewing experience. Digital Genome and Self-Regulating Distributed Software Applications with Associative Memory and Event-Driven History
  2. Medical Knowledge-Driven Early Diagnosis Digital Assistant: In the realm of healthcare, the digital genome is implemented in a medical knowledge-based digital assistant to enhance early diagnosis and treatment. The assistant leverages the digital genome to integrate a patient’s medical data with medical knowledge derived from multiple sources, offering personalized treatment plans tailored to individual patients. Associative memory helps the system link related medical data, such as symptoms, diagnoses, and treatments, enabling it to recognize patterns and make informed decisions. Event-driven interaction history ensures that the system captures real-time events and updates patient profiles dynamically, providing timely and accurate responses to new information. This intelligent, adaptive approach enhances the efficiency and effectiveness of medical diagnosis and treatment, bridging the knowledge gap between patients and healthcare professionals and ultimately improving patient outcomes. General Theory of Information, Digital Genome, Large Language Models, and Medical Knowledge-Driven Digital Assistant

This presentation shows a new approach to creating transparent, model-based machine intelligence that captures associative long-term memory based on event history. The system is designed to use medical knowledge from various sources, including large language models (LLMs), to create and use event history in the early medical disease diagnosis process. The system is designed using the Structural Machines, Cognizing Oracles, and Knowledge Structures suggested by the General Theory of Information.

As the saying goes

“Theory without practice is like a map without a journey, and practice without theory is like building castles on quicksand.”

In this post we attempt to demonstrate that theory and practice when integrated will allow us to:

  • Understand material, biological, and digital structures using the General Theory of Information.
  • Explore the roles of the genome, memory, and interaction history via the General Theory of Information, which relates material structures and biological structures through the relationships between energy, matter, information, and knowledge.
  • Use the digital genome to enhance enterprise systems with a digital body, brain, and mind.

This post is sincerely intended as food for thought and invites open discussion to advance our knowledge.

Post Script

Here is the full presentation I made at the first International Online Conference of the Journal Philosophies, addressing “Intelligent Inquiry into Intelligence,” contributing to the 2025 IS4SI Summit, 10–14 June 2025.

Mindful Machines and the General Theory of Information: A New Paradigm and Its Applications by Dr. Rao Mikkilineni, Ph.D.

Mark Burgin’s Legacy, General Theory of Information, and Future of AI

Video 1: General Theory of Information and the Evolution of Machine Intelligence

February 18, 2025 marks the second anniversary of the late Prof. Mark Burgin’s passing. He left a wealth of information in many books, journal papers, and international conference proceedings for us to update our knowledge. I had the privilege of learning about the General Theory of Information and working closely with him to develop several applications of the theory, building a new class of distributed software applications with self-regulation and cognitive capabilities that integrate current symbolic and sub-symbolic computing structures with super-symbolic computing.

My collaboration with him started in 2015, when I discovered his book Super-Recursive Algorithms, which discusses the limitations of traditional Turing machines. Burgin argues that Turing machines are limited to computable functions, meaning they can only solve problems that are algorithmically solvable within their framework. This excludes certain complex problems that require more advanced computational models. I shared with him my paper presented at the Turing Centenary Conference (2012), “The Turing O-Machine and the DIME Network Architecture: Injecting the Architectural Resiliency into Distributed Computing,” which suggested a way to go beyond these limitations to design, develop, and deploy distributed software applications with higher resiliency, efficiency, and scalability. He offered to meet with me to discuss how his recent work on the General Theory of Information could extend these efforts to improve information systems.

Our collaboration resulted in several papers and the implementation of a new class of autopoietic and cognitive distributed software applications.

Being and Becoming:

The phrase “from being to becoming,” which has its roots in ancient Greek philosophy, contrasts two states of existence: being and becoming. Being refers to a state of existence that is static, unchanging, and eternal. It represents the idea of something that simply is, without undergoing any transformation. Becoming, on the other hand, is about change, growth, and transformation over time. It emphasizes the dynamic and fluid nature of reality, where things are constantly evolving and developing. According to Plato, the physical world we perceive through our senses is not true reality. Instead, true reality consists of abstract, perfect, and unchanging entities called Forms or Ideas. These Forms are the perfect blueprints of all things that exist in the physical world. The concept of being and becoming underscores the importance of change, growth, and transformation in understanding human existence and the nature of reality. It suggests that rather than being static entities, we are constantly evolving and shaping our identities through our experiences and actions.

General Theory of Information:

Mark Burgin provided a scientific interpretation of Plato’s Ideas/Forms with the General Theory of Information (GTI) by introducing the concept of the Fundamental Triad which relates the three structures:

  • Material Structures: These are the physical entities and objects in the world. They represent the tangible aspect of reality that we can observe and measure.
  • Mental Structures: These exist within biological systems, such as the cognitive processes in living beings. They represent the informational and cognitive aspect of reality.
  • Ideal Structures: These are abstract, perfect entities or principles. They represent the highest level of information, akin to fundamental truths or laws of nature.

The three types of structures are interconnected and interact with each other. Material structures provide the physical basis for mental structures, while mental structures process and interpret information based on ideal structures. This triadic relationship helps in understanding the different aspects of reality and how they influence each other. Some examples of material structures are:

  • Atoms and Molecules: The basic building blocks of matter.
  • Biological Cells: The fundamental units of life.
  • Machines and Devices: Physical tools and technology, such as computers and smartphones.

Some examples of mental structures are:

  • Schemas: Cognitive frameworks that help individuals organize and interpret information. For instance, a child’s schema for a dog evolves as they encounter different breeds.
  • Memories: Stored information in the brain in the form of associative memory and event-driven interaction history that influences behavior and decision-making.
  • Concepts and Beliefs: Mental representations of ideas and principles.

Some examples of ideal structures are:

  • Mathematical Theorems: Abstract principles that describe fundamental truths in mathematics.
  • Scientific Laws: Universal principles that govern natural phenomena, such as Newton’s laws of motion.
  • Philosophical Concepts: Abstract ideas like justice, beauty, and truth.

Figure 1 shows Mark Burgin’s representation of the interrelationships of the three structures.

In GTI, scientific laws are considered Ideal Structures because they represent abstract, universal principles that govern natural phenomena. These laws are perfect and unchanging, much like Plato’s Forms. They serve as guiding principles for understanding and interpreting the material and mental structures. For example, the laws of physics help us understand the behavior of physical objects (material structures) and can also influence cognitive processes (mental structures) through their applications in technology and science. Scientific laws facilitate the flow of information between different levels of reality. They provide a framework for predicting and explaining phenomena, thereby linking the material world with abstract concepts. These laws form the foundation for scientific knowledge and inquiry. They allow us to build models and theories that can be tested and refined, contributing to our overall understanding of the universe.

In the physical world, material structures are formed by matter and energy transformations obeying the laws of nature, and these structures carry ontological information that represents their state and dynamics over time. Ontological information, in this context, refers to the intrinsic information that defines the existence and properties of material structures. This information is crucial for understanding how these structures evolve and interact within the material world. Their evolution can be described using ideal structures such as Hamilton’s equations (describing phase-space evolution) or the Schrödinger equation (describing wave-function evolution), depending on the structure and the energy involved.

GTI bridges the gap between the material world (consisting of matter and energy) and the mental worlds of biological systems, which utilize information and knowledge to interact with their environment. GTI posits that biological systems are unique in their ability to receive information from the material world and transform it into knowledge in the form of mental structures. Knowledge belongs to the realm of biological systems: they maintain their structural identity, observe themselves and their interactions with the external world, and use knowledge to make sense of these observations. They inherit this ability through the genome, passed on by the survivor to the successor as knowledge encoded in genes and chromosomes. The genome contains the operational knowledge to build, operate, and manage a society of cells that execute life processes. Each cell receives input, executes a process, and shares output with other cells. A special type of cell, the neuron, provides the ability to receive information and convert it into knowledge stored in the form of associative memory and event-driven interaction history. When signals are received through the senses, neurons that fire together wire together to process information and transform it into knowledge. Other neural networks use this knowledge to make sense of their observations and act based on experience to optimize their future state.

Information, Knowledge, Intelligence, and Wisdom

At its core, GTI is built on the premise that information isn’t just data or knowledge but a dynamic, process-driven concept that encompasses not only the static storage or transmission of data but also the evolution and transformation of a system’s state through interaction and context. According to the General Theory of Information, knowledge belongs to the realm of biological systems, which have the ability to process information received from material structures and represent that knowledge as a network of networks, where nodes and edges process information, store it, and communicate with others using shared knowledge.

Figure 2 depicts the relationships between the material world and the mental world where knowledge exists.

Here are some of the key implications and consequences that GTI brings to software engineering, AI, and future technologies:

Designing Distributed Software Applications: GTI emphasizes the role of information flow and transformation in dynamic, distributed environments. In the context of distributed software, it suggests that systems must be designed to handle continuous change and context shifts in how information is interpreted, processed, and exchanged.

GTI’s focus on dynamic interactions and history means that distributed systems can evolve by leveraging historical interactions and context, leading to more adaptable, resilient, and self-organizing architectures.

Development of Super-Symbolic Computing: In the realm of AI, the notion of super-symbolic computing is particularly influential. Traditional symbolic computing uses discrete, formal representations (like logic and language) to represent knowledge. Super-symbolic computing, which extends this framework, considers more holistic, emergent forms of representing knowledge, especially as it relates to more complex, associative, and dynamic processes.

Adaptive Problem Solving: By incorporating GTI, systems can work with higher-level abstractions that are not strictly formal but are instead contextual and adaptive. This allows AI to handle more nuanced decision-making processes, potentially moving beyond rigid symbolic structures to more fluid and adaptive problem-solving methods.

Associative Memory: Associative memory in AI is a model of memory that doesn’t rely on fixed addresses or specific data retrieval pathways but instead stores information in a more contextually associative manner (like the way human memory works). This concept aligns with the GTI’s approach, which focuses on how information is not just static but is connected and evolves based on past interactions.

In distributed systems and AI, associative memory could enable more dynamic knowledge retrieval and adaptation, where systems can “remember” previous interactions, adapt to changing conditions, and even form new, emergent structures based on new inputs.

Event-Driven Interaction History: GTI posits that interactions (whether between humans, machines, or other systems) generate information that evolves over time. Event-driven systems, which are increasingly popular in distributed systems, can make use of this by tracking events as they occur, then responding to and evolving based on the accumulated history.

  • In AI systems, this means that models could be better equipped to understand not just the current state but also how that state has evolved, leading to a more nuanced understanding of context and history.

Composable Knowledge Representation: GTI highlights the potential of composable, modular knowledge representation systems. By creating knowledge structures that can be pieced together and restructured dynamically, we can move away from rigid, one-size-fits-all models of knowledge. This is particularly important in environments like AI and distributed systems, where the complexity and diversity of information demand flexible, scalable approaches to how knowledge is represented and accessed.

In essence, the contributions of GTI, especially as extended by others, suggest a paradigm shift from traditional models of computation and information systems toward more adaptive, flexible, and context-aware architectures. This shift has the potential to reshape how we design and deploy technologies, especially in the realms of AI, distributed systems, and knowledge representation.

Here are some examples of applications demonstrating the use of GTI in building autopoietic and cognitive distributed software applications.


Applications of the Theory

Hopefully, the General Theory of Information and the late Prof. Mark Burgin’s writings will inspire the next generation of computer scientists and IT professionals to critically examine our current understanding of both human and machine intelligence.

Human Intelligence: Flaws and Fixes

Human intelligence suffers from the self-referential circularity of the reasoning systems we use to process knowledge and interact with the world. Unless these systems are anchored to external reality with a higher-level reasoning framework, conflicting decisions from various logic systems lead to inconsistency. For example, the self-regulation mechanism of a system based on autocratic, oligarchic, and democratic axioms (statements or propositions regarded as self-evidently true) often results in conflicts due to their self-referential nature. These conflicts arise because the logics are not moored to external reality, as discussed in the book Life After Google. A higher-level reasoning system is required to address and resolve these inconsistencies. Clearly, current systems are not sufficient.

Machine Intelligence: Current State and Challenges

The current state of machine intelligence also has significant flaws. Sub-symbolic and symbolic computing alone are insufficient for reasoning based on both the current state and the history of the system. Large Language Models (LLMs), which are sub-symbolic computing structures, need higher-level modeling and reasoning systems to integrate the knowledge they derive. Autopoietic and cognitive knowledge networks must specify life processes and execute them using structural machines. Without a genome specification of life processes passed from survivors to successors, there is no true intelligence. Similarly, without a digital genome specification of machine life processes describing a particular domain (entities, relationships, and their behavior history), machine intelligence cannot be complete and consistent.

The Path Forward

Transparency and access to information at the right time and place, in real-time, can reduce the knowledge gap between various actors and potentially lead to consistent and value-added actions. Machines and their intelligence are designed by humans in the form of a digital genome that specifies the machine’s life processes. Facilitating transparency and access to information moored to external reality can augment human intelligence through machine intelligence.

Here is a video that chronicles my understanding of the evolution of machine intelligence influenced by my association with Mark.

Natural Intelligence, Machine Intelligence, General Theory of Information, and all that Jazz: Part I

Video: Ingredients of natural intelligence, machine intelligence, and GTI

Part I: Understanding Natural and Machine Intelligence

      The holy grail of computer scientists and information technology professionals is to design and build machines that replicate the capabilities of human intelligence. While trillions of dollars spent have produced impressive results in process automation, intelligent decision-making using insights from machine learning, and a host of very impressive applications with generative AI, the machines still fall short in delivering systems that replicate higher-level reasoning moored to external reality, commonsense, and wisdom. Before we replicate the unique autopoietic and cognitive behaviors of biological systems using machine intelligence, we need to understand the true nature of natural intelligence.

The video above is my attempt to update my knowledge and capture the main ingredients of natural intelligence, machine intelligence as we practice it today, and the theoretical foundations of GTI. Part I is devoted to understanding the differences between natural and artificial intelligence using the General Theory of Information. In Part II, I will attempt to capture how we can implement autopoietic and cognitive distributed applications that are cloud agnostic and integrate both symbolic and sub-symbolic computing structures with super-symbolic computing.

Natural Intelligence: Biological systems, while made up of material structures, are unique in their ability to maintain the identity of their structures, observe themselves and their interactions with the external world using information processing structures, and make sense of what they are observing fast enough to do something about it while they are still observing it. They inherit the knowledge to build, run, and manage their structures and to interact with their environment using neural cognitive capabilities acquired from their genome, transmitted by the survivor to the successor. A genome is an organism’s complete set of genetic instructions. Each genome contains all of the information needed, in the form of life processes, to build the organism and allow it to grow and develop as a society of genes. As described by Yanai and Lercher in their book The Society of Genes (p. 11), the single fertilized egg cell develops into a full human being without a construction manager or architect. The responsibility for the necessary close coordination is shared among the cells as they come into being. It is as though each brick, wire, and pipe in a building knows the entire structure and consults with the neighboring bricks to decide where to place itself.

In essence, the source of human intelligence starts with the genome, which provides the knowledge to build, operate, and manage a society of cells that work together with a unique identity and an ability to make sense of information received from various senses and take action based on experience. This knowledge is passed on from the survivors to the successors as chromosomes, which contain the knowledge to create a society of cells that behaves like a community: individual cell roles are well-defined, their relationships with other cells are defined through shared knowledge, and they collaborate by exchanging messages according to specific relationships and behaviors. DNA provides a symbolic computing structure with the knowledge to use matter and energy to create and maintain stable structures with specific tasks. In addition, the neurons, also known as nerve cells, form the fundamental units of the brain and the nervous system and carry information. The brain contains billions of neurons that form complex networks which process information and update knowledge stored in the form of associative memory and event-driven interaction history. Both associative memory and event-driven interaction history strengthen connections based on experiences and events. This allows the brain to learn, adapt, and recall information efficiently.
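To make this idea concrete, here is a minimal, purely illustrative Python sketch of an associative memory coupled to an event-driven interaction history; the class and method names are my own invention, and the co-occurrence weighting is only a loose analogy to how repeated experience strengthens connections.

```python
from collections import defaultdict
from datetime import datetime

class AssociativeMemory:
    """Toy associative memory: concepts that co-occur in events strengthen their link."""
    def __init__(self):
        self.weights = defaultdict(lambda: defaultdict(float))
        self.event_history = []  # event-driven interaction history

    def observe(self, concepts, event):
        """Record an event and strengthen associations among the concepts it involves."""
        self.event_history.append((datetime.now(), event, tuple(concepts)))
        for a in concepts:
            for b in concepts:
                if a != b:
                    self.weights[a][b] += 1.0  # repeated co-occurrence -> stronger link

    def recall(self, cue, top_k=3):
        """Return the concepts most strongly associated with the cue."""
        linked = self.weights.get(cue, {})
        return sorted(linked.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

memory = AssociativeMemory()
memory.observe(["smoke", "fire", "alarm"], event="kitchen incident")
memory.observe(["smoke", "fire"], event="campfire")
print(memory.recall("smoke"))  # "fire" is recalled first because it co-occurred most often
```

Recalling "smoke" returns "fire" first simply because the two co-occurred in more recorded events, which is the toy analogue of experience strengthening a connection.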

      As discussed in the video, current state-of-the-art AI does not consider:

      • The sense of “self” – the model of the computer (information processing) and the computed (knowledge representation).
      • Associative memory and event-driven transaction history which provide a single point of reference for all reasoning.
• The ability to use experience moored to external reality, together with commonsense, ethics, and other knowledge from various sources.

      In Part II, we will explore associative memory, event-driven interaction history, meta-cognition, and higher-level reasoning moored to external reality, commonsense, ethics, and other knowledge from several sources in the digital world.

      What is a Computer and Is the Brain a Computer?

      Abstract

      “We may conceive of the cortex as a massive hierarchy of predictive systems, each of which tries to explain the inputs and exchanges the remaining error messages with the others, in the hope that they may do a better job.”
― Stanislas Dehaene, How We Learn: Why Brains Learn Better Than Any Machine

This video captures what a learning process is and how it relates to machine learning, where computers are taught to learn from text, images, voice, and video.

      What is a Computer?

When someone asks the question “Is the brain a computer?”, the answer depends on the knowledge the person or the system (for example, a Large Language Model (LLM)) possesses and whether that knowledge is adequate to answer the question. Whether the response is accepted or rejected also depends on the knowledge the receiver possesses. So, it is important to understand the nature of knowledge, how it is acquired (the learning process), and how it is used. The General Theory of Information provides a framework for understanding and modeling the representation and use of knowledge in both biological and artificial systems.

      Introduction

When anyone answers a question, the answer depends on what knowledge that person or system has access to at the moment of answering. Knowledge refers to useful information gained through various means, including learning and experience, and belongs to the realm of mental structures that biological systems have developed through evolution and natural selection. “To know” involves a subject. The General Theory of Information (GTI) relates the material structures in the physical world to the mental structures that biological systems use to model their observations and interact with their external environment through their cognitive apparatus. Information provides the bridge between a biological system’s understanding of the material world, consisting of matter and energy, and its mental world, which utilizes information, converts it into knowledge, uses it to make sense of what is being observed, and acts while the observation is still in progress. Later, we will also discuss how information forms a bridge between mental structures and the digital structures in computing machines that represent knowledge.

The material world consists of structures that are formed and evolve through the laws of transformation of matter and energy, based on the various interactions among their components. The state of the system and its evolution contain information in the form of a phase space, which provides a comprehensive framework for representing the states of matter and energy. The phase-space trajectory represents the set of states starting from one particular initial condition, and information is the difference between the states of the system.

For example, a water molecule is formed through interactions of matter and energy involving hydrogen and oxygen atoms and can exist in various forms, including ice, liquid water, snowflakes, or steam. The information is the difference between these various states, and the knowledge is the observer’s mental representation of these states in the form of structures. GTI provides a way to represent and process the observer’s information into knowledge in the form of structures. These structures form the basis for the interaction between the physical and mental worlds, tying together nature, observers such as humans, and a society of observers interacting with each other as their common existential and cognitive basis.
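As a toy illustration of “information as the difference between states,” the following sketch (my own simplification, using an ideal harmonic oscillator rather than a water molecule) traces a phase-space trajectory and measures the difference between two states as a distance in phase space.

```python
import math

def oscillator_state(t, omega=1.0, amplitude=1.0):
    """Phase-space state (position, momentum) of an ideal harmonic oscillator at time t."""
    position = amplitude * math.cos(omega * t)
    momentum = -amplitude * omega * math.sin(omega * t)
    return (position, momentum)

def state_difference(s1, s2):
    """A toy reading of 'information as the difference between states': distance in phase space."""
    return math.dist(s1, s2)

trajectory = [oscillator_state(t / 10) for t in range(0, 63)]  # roughly one full period, sampled
print(state_difference(trajectory[0], trajectory[31]))  # states half a period apart differ the most
print(state_difference(trajectory[0], trajectory[62]))  # nearly the same state again: small difference
```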

      According to GTI, matter and energy are the physical entities that can represent and process information and knowledge. Information and knowledge, while not physical entities themselves, can be represented physically and can influence the state and behavior of matter and energy.

In essence, as the material world evolves, we receive information through our senses, convert it into knowledge, use it to draw conclusions, and share it by explaining and teaching it to someone else. Learning is the process by which we receive information and process it either to create new knowledge that we do not already possess, to update current knowledge by connecting it to knowledge we already have, or to reject it using our existing knowledge.

For biological systems, the use of knowledge starts at the moment of conception. The genome contains the knowledge of life processes describing how to build a unique entity, starting from a single cell that builds itself into a full person who continues to increase knowledge through learning and is able to answer questions such as “Is the brain a computer?”

As described by Yanai and Lercher in their book The Society of Genes (p. 11), the development of the single fertilized egg cell into a full human being is achieved without a construction manager or architect. The responsibility for the necessary close coordination is shared among the cells as they come into being. It is as though each brick, wire, and pipe in a building knows the entire structure and consults with the neighboring bricks to decide where to place itself.

      The single cell replicates into trillions of cells, each executing a process with a purpose using metabolism and sharing information with other cells to execute a hierarchy of processes to manage and maintain life as defined in the genome. These processes execute autopoietic and cognitive behaviors. The autopoietic behaviors are capable of regenerating, reproducing, and maintaining the system by itself with the production, transformation, and destruction of its components and the networks of processes in these components. The cognitive behaviors are capable of sensing, predicting, and regulating the stability of the system in the face of both deterministic and non-deterministic fluctuations in the interactions among the internal components or their interactions with the environment.

The long and short of the discussion is that the knowledge each individual possesses is unique, based on the knowledge inherited and the unique experiences accumulated over a lifetime, and it is the basis for the answers they provide. Therefore, it is important to understand the nature of knowledge and the learning process that allows it to be updated with information received through various means. We as humans update our knowledge continuously through both inherited and learned processes and use it when we answer a question. The answer depends on the state of our knowledge at that instant, and further interactions with the external world influence the future state of our knowledge. This is an important observation because it tells us that when two people interact and exchange information, the evolution of the interaction depends very much on how wide the knowledge gap between the participants is.

GTI provides a comprehensive framework for understanding and modeling the representation and use of knowledge in both biological and artificial systems. The ontological thesis states that the autopoietic and cognitive behavior of artificial systems must function on three levels of information processing systems and be based on triadic automata. The axiological thesis states that efficient autopoietic and cognitive behavior has to employ structural machines.

      GTI is used to define a schema and associated operations to model how knowledge is represented using a scientific object called a structure. A genome in the language of GTI encapsulates “knowledge structures” coded in the form of DNA and is executed using the “structural machines” in the form of genes and neurons which use physical and chemical processes (dealing with the conversion of matter and energy). The information accumulated through biological evolution is encoded into knowledge to create the genome which contains the knowledge network defining the function, structure, and autopoietic and cognitive processes to build and evolve the system while managing both deterministic and non-deterministic fluctuations in the interactions among the internal components or their interactions with the environment. The cells are process execution engines in this model and are orchestrated by the genome acting as a structural machine using autopoietic and cognizing oracles.

      The same schema is used in defining a digital genome specifying the operational knowledge of algorithms executing the software life processes with specific purposes using replication and metabolism. The result is a digital software system with a super-symbolic computing structure exhibiting autopoietic and cognitive behaviors that biological systems also exhibit.

In summary, knowledge is represented as a network of autonomous agents executing a hierarchy of processes, with each process endowed with autopoietic and cognitive properties. Each agent pursues well-defined goals and collaborates as an element of a society with shared knowledge to accomplish both local and systemic goals. Associative memory and the event-driven history of interactions are part of this knowledge network.
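The following Python sketch is one hypothetical way to picture such a network of autonomous agents with shared knowledge; it is not the structural-machine implementation described in the GTI papers, and all names are invented for illustration.

```python
class Agent:
    """Toy autonomous agent with a local goal, simple 'cognitive' sensing and 'autopoietic' self-repair."""
    def __init__(self, name, goal_value):
        self.name = name
        self.goal_value = goal_value   # local goal: keep an internal metric at this value
        self.metric = goal_value
        self.knowledge = {}            # shared knowledge received from the network

    def sense(self):
        """Cognitive step: measure deviation from the local goal."""
        return self.metric - self.goal_value

    def repair(self):
        """Autopoietic-flavored step: restore the metric toward the goal when it drifts."""
        deviation = self.sense()
        if abs(deviation) > 0.1:
            self.metric -= 0.5 * deviation  # partial self-correction

class KnowledgeNetwork:
    """Toy network that shares knowledge among agents and tracks a systemic goal."""
    def __init__(self, agents):
        self.agents = agents

    def broadcast(self, key, value):
        for agent in self.agents:
            agent.knowledge[key] = value   # shared knowledge: every agent sees the same entry

    def step(self):
        for agent in self.agents:
            agent.repair()
        # the systemic goal (here: the average metric) emerges from the local behaviors
        return sum(a.metric for a in self.agents) / len(self.agents)

network = KnowledgeNetwork([Agent("a1", 1.0), Agent("a2", 2.0)])
network.agents[0].metric = 5.0        # a large fluctuation perturbs one agent
network.broadcast("alert", "fluctuation detected")
print(network.step())                 # agents self-correct while sharing knowledge
```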

      Evolution of Knowledge: From the Unknown Unknown to the Known Known

Figure 1 shows the distribution of knowledge among individuals, human society (where shared knowledge exists), and the vast universe of the unknown. The material world exists whether it is observed or not. The knowledge about the universe that no one possesses is represented as the unknown unknown. On the other side, each individual is born with some knowledge about the self and its relationship with the external world (the known known). During the individual’s lifetime, this knowledge expands by discovering known unknowns and converting them into known knowns through the process of learning, which consists of discovery, reflection, application, and sharing of knowledge. In a society of individuals sharing knowledge, the pool of knowledge grows and, as the number of individuals grows, exceeds the knowledge of any single individual. This leads to knowledge that is available in the pool but not known to a particular individual (the unknown known), and it inevitably creates a knowledge gap between two individuals engaged in discussion. For example, when asked the question “what is a computer,” the answer varies depending on who answers it.
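The known/unknown categories and the knowledge gap can be illustrated with simple set operations; the topic labels below are invented purely for illustration.

```python
# Toy illustration of the known/unknown categories using set operations.
shared_pool = {"GTI", "Turing machine", "perceptron", "genome", "neuro-symbolic computing"}
alice = {"GTI", "Turing machine", "genome"}      # Alice's known knowns
bob = {"Turing machine", "perceptron"}           # Bob's known knowns

unknown_knowns_for_alice = shared_pool - alice   # in the shared pool, but unknown to Alice
knowledge_gap = alice ^ bob                      # what one knows and the other does not
print(unknown_knowns_for_alice)
print(knowledge_gap)
```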

Evolution of the Computer

Before the advent of electronic computers, the term “computer” referred to a person who performed calculations or computations. The job was typically tedious and involved long hours of manual number crunching. Alan Turing’s observation that we may compare “a man in the process of computing a real number to a machine which is only capable of a finite number of conditions” changed the way we view computers. The result of this observation is symbolic computing, realized in John von Neumann’s stored-program implementation of the Turing machine: a sequence of symbols (called programs) operating on another sequence of symbols (called data structures) to mimic how humans computed numbers. It is possible to divide the history of computing into three periods (Luck et al., 2005):
• Computation as calculation, or operations undertaken on numbers.
• Computation as information transformation, or operations on multimedia, such as text, audio or video data.
• Interactive computation, or computation as interaction.

As we know, today almost everything in computer technology – basic elements, data structures, programming languages, etc. – changes very fast, but the von Neumann architecture still remains the prevalent architecture for computers. For computer scientists, “Computer science is concerned with information in much the same sense that physics is concerned with energy… The computer scientist is interested in discovering the pragmatic means by which information can be transformed” (Denning et al.). “Computer science and engineering is the systematic study of algorithmic processes that describe and transform information: their theory, analysis, design, efficiency, implementation, and application. The fundamental question underlying all of computing is, ‘What can be (efficiently) automated?’”

Computing took a major turn with the observation of McCulloch and Pitts in their 1943 paper on how neurons might work. They modeled a simple neural network using electrical circuits. Their model, known as the McCulloch-Pitts neuron, is a fundamental building block of artificial neural networks. It accepts binary inputs and produces a binary output based on a threshold value. This model is mainly used for classification problems.
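As a minimal sketch (omitting the inhibitory inputs of the original 1943 model), a McCulloch-Pitts unit can be written in a few lines of Python:

```python
def mcculloch_pitts_neuron(inputs, threshold):
    """McCulloch-Pitts unit: binary inputs, binary output; fires when enough inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 implements logical AND, a threshold of 1 implements OR.
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # 1 (AND satisfied)
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # 0
print(mcculloch_pitts_neuron([1, 0], threshold=1))  # 1 (OR satisfied)
```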

In 1957, Frank Rosenblatt introduced the perceptron, a type of artificial neuron, with the aim of developing a machine that could mimic the human brain’s ability to recognize patterns and learn from experience. The perceptron takes its inputs, aggregates them as a weighted sum, and returns 1 only if the aggregated sum exceeds a threshold, and 0 otherwise. It is a more general computational model than the McCulloch-Pitts neuron and can implement linearly separable functions. Rosenblatt also proposed the perceptron learning rule, a method for learning the weights of the inputs. This was a significant step toward the development of machine learning and artificial neural networks.
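A small sketch of the perceptron and a Rosenblatt-style learning rule, assuming a simple 0/1 threshold output and the logical OR function as the training target (both choices are mine, for illustration):

```python
def predict(weights, bias, x):
    """Perceptron output: 1 if the weighted sum (threshold folded into the bias) is positive, else 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge the weights toward correcting each mistake."""
    weights, bias = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the (linearly separable) OR function from examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
weights, bias = train_perceptron(samples, labels)
print([predict(weights, bias, x) for x in samples])  # expected: [0, 1, 1, 1]
```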

      Both approaches have contributed to the current state of the art.

      Figure 2 depicts the current state of evolution of computing.

      Figure 2: State of the Art Today: Symbolic and Sub-Symbolic Computing.

Symbolic computing is based on algorithms: well-defined sequences of tasks that execute well-defined processes. Machine learning algorithms use statistical methods such as regression, classification, and clustering to gain insights from data. Sub-symbolic computing differs in that its algorithms use deep learning, where training, testing, and validation are used to build neural networks that process text, audio, images, and video.
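To illustrate the contrast, the toy sketch below places a hand-written symbolic rule next to a rule derived from invented data; real sub-symbolic systems use neural networks rather than the simple learned threshold shown here.

```python
# Symbolic computing: the rule is written down explicitly by a programmer.
def symbolic_spam_rule(message):
    return "spam" if "free money" in message.lower() else "not spam"

# Statistical machine learning: the rule (here a decision threshold) is derived from data.
def learn_threshold(lengths, labels):
    """Pick the message-length threshold that best separates the labels in the training data."""
    candidates = sorted(set(lengths))
    def accuracy(t):
        return sum((l > t) == (y == "spam") for l, y in zip(lengths, labels)) / len(labels)
    return max(candidates, key=accuracy)

lengths = [12, 95, 88, 15, 110, 20]                       # invented training data
labels = ["not spam", "spam", "spam", "not spam", "spam", "not spam"]
threshold = learn_threshold(lengths, labels)
print(symbolic_spam_rule("Claim your FREE MONEY now"))    # rule written by hand
print("spam" if 102 > threshold else "not spam")          # rule learned from data
```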

      All three methods produce knowledge that can be used in several ways:

      1. Process automation,
      2. Intelligent decision making based on insights gained from data analytics, and
3. The use of transformers to apply the knowledge from deep-learning neural networks to mimic some of the cognitive tasks the human brain performs, as shown in Figure 2.

This advance has led some to speculate that deep learning algorithms and transformers can be trained to mimic all human cognitive functions and that machine intelligence will soon surpass human intelligence.

However, many proponents of this speculation either do not “know” (unknown knowns) the limits of computation implied by the Turing machine computing model and the Church-Turing thesis, or they ignore (known knowns) those limits.

Others argue that the singularity comes from the emergent properties of complex adaptive systems, and that deep learning algorithms and evolutionary algorithms are complex adaptive systems (CAS). However, there is a world of difference between complex adaptive system behaviors and genome-based biological system behaviors. They cite examples of birds, bees, ants, and groups of cars or people in a city or town. Emergence in complex adaptive systems refers to the phenomenon where novel characteristics and behaviors arise from the interactions of individual components, or agents, within the system. For example, each ant in a colony follows simple rules, such as following a pheromone trail to food. However, the collective behavior of the colony (finding food, defending the nest, caring for larvae) can be quite complex and appears intelligent. The key to understanding this process lies in the concept of feedback. In a CAS, agents interact with each other and their environment, and these interactions produce feedback that influences future interactions. Over time, these feedback loops can lead to the development of complex patterns of behavior that are adaptive and resilient. The self-organization phenomena of CAS are understood in terms of function, structure, and fluctuations, their impact on the equilibrium states of the system, and transitions to different energy minima defining different stable states.
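A deliberately simplified sketch of emergence through feedback, loosely modeled on pheromone-trail reinforcement; the numbers and update rules are invented for illustration and are not a faithful ant-colony-optimization implementation.

```python
import random

# Ants choose between two paths in proportion to pheromone levels; deposits are larger on the
# shorter path, so positive feedback lets the colony "discover" it without any central plan.
pheromone = {"short": 1.0, "long": 1.0}
lengths = {"short": 1.0, "long": 2.0}

def choose_path():
    total = pheromone["short"] + pheromone["long"]
    return "short" if random.random() < pheromone["short"] / total else "long"

for step in range(500):
    path = choose_path()
    pheromone[path] += 1.0 / lengths[path]   # feedback: bigger deposits on the shorter path
    for p in pheromone:
        pheromone[p] *= 0.99                 # evaporation keeps the system adaptive

print(pheromone)  # the short path typically ends with far more pheromone: an emergent colony-level choice
```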

While CAS exhibit self-organization, the self-regulation exhibited through autopoietic and cognitive behaviors by genome-based systems is quite a different matter.

This leads us to the question: “Is the brain a computer?”

      Is the Brain a Computer?

If we take the current state-of-the-art computing described above, these computing systems lack autopoietic behavior, which requires the algorithms to have the self-awareness and self-regulation knowledge needed to accomplish the system’s goals when deviations from expected behavior occur because of large fluctuations in component interactions. For example, if there is a large demand for resources or a large disruption in resource availability, the system halts unless external intervention occurs.

On the other hand, biological systems have built-in knowledge to maintain homeostasis using autopoietic behaviors. In addition, the brain creates an associative memory and an event-driven interaction history of all the entities, their relationships, and their event-driven behaviors, and uses them to make sense of what it is observing and act appropriately while the observation is still in progress.

Suffice it to say that current state-of-the-art computing machines fall short on both counts.

      Can we improve them to include these behaviors?

I suggest reading these papers and the references cited therein and making up your own mind.

      https://www.preprints.org/manuscript/202404.1298/v1

      https://www.mdpi.com/2409-9287/8/6/107

      Figure 3 summarizes my view.

      Figure 3: Structural Machine implementing the knowledge network using super-symbolic computing structures.

      Lesson

The lesson I take away from this study is that when I say something in reaction to new information I am presented with, what I say exhibits my knowledge (or lack of it) about the subject and may have little to do with the true nature of the information at hand.

      Human Learning, The Knowledge Gap, Machine Learning, The Role of Large Language Models, Future of AI, and All that Jazz

After studying various aspects of the General Theory of Information, it is becoming clear to me that, as individuals, we view the material world we interact with through a mental model that has evolved over our lifetime using the autopoietic and cognitive processes that we have both inherited and learned. This explains why we individually react differently to the same observations of external reality. The difference in our reactions is proportional to the knowledge gap between external reality and our internal model of it. As we interact with each other and the external world, we tend to either reduce or increase this knowledge gap, and this can lead to severe consequences depending on the actions we take based on our knowledge.

The problem of an increased knowledge gap between interacting actors is exacerbated by the rapid changes and large fluctuations in how the external world is evolving and by the resulting explosion of knowledge required to keep up with these changes. According to Prigogine, large fluctuations can result in the emergence of new structures through internal self-reorganization. However, the self-organization resulting from emergence is unpredictable, because the process is nonlinear and depends on a multitude of factors, including the initial conditions and the nature of the fluctuations. Understanding the impact of fluctuations on these structures could potentially lead to more robust and adaptable algorithms.

      In fact, biological systems have developed a unique ability to maintain the identity of their structures, observe themselves and their interactions with the external world using information processing structures, and make sense of what they are observing fast enough to do something about it while they are still observing it. They have gone beyond self-organization to self-regulation by reducing the knowledge gap between external reality and their internal model of reality.

How can we extend this capability to actors interacting with each other and the external world, so that the knowledge gap between them is reduced in real time? This requires access to the same information by all the interacting actors. Each actor has access to the “known knowns” through the personal knowledge they have acquired over their lifetime. However, they do not have access to knowledge that is unknown to them but known to others. It takes a lot of effort to determine what is known to others but not to you, especially when there are rapid changes in areas you are not aware of that nevertheless affect what you know. When external reality is changing rapidly, access to these unknown knowns becomes even more important.

Fortunately, machine intelligence provides a way to extend the functions of the human body, brain, and mind using symbolic, sub-symbolic, and super-symbolic structures.

I sincerely recommend this paper for discovering unknown knowns relevant to data, information, knowledge, and wisdom, if you do not already know them. It contains many references to other knowledge from multiple sources.

Large language models built with deep learning provide knowledge representation through learning, the automated formation of useful representations from data. This knowledge is now accessible in real time through various offerings such as Bing Chat, Bard, and ChatGPT. Individuals can access the common knowledge pool using these tools and search for the unknown knowns before they make a critical judgment or an important decision.

      Here is an example I used to update my knowledge using Bing Chat.

      ## Bing

      Humans learn by interacting with the material world and processing information based on their unique identities and experiences. Here’s a breakdown of the process:

      – Encountering New Information: Learning begins when we encounter new information from our environment. This could be through reading, listening, observing, or hands-on experiences.

      – Active Processing: Learning is an active process that involves paying attention to the new information, coordinating it with what we already know, storing it in our memory, and applying it. For instance, when reviewing lecture notes, we may make new connections among concepts, recognize the significance of some information we hadn’t grasped before, anticipate how we will be tested, or ask ourselves new questions about the material.

      – Effortful Learning: Research shows that effortful learning usually signals not only deeper learning, but more durable long-lasting knowledge. It’s analogous to weight training. Investing more effort in grappling with new information builds stronger, deeper knowledge.

      – Spaced Repetition: The spaced repetition of information is crucial for memory. Revisiting the learned material after a certain period helps in better retention of the information.

      – Learning from Experience: Learning also occurs as a result of experience. Once we’ve tried a technique or a method, we can determine whether it works for us and then use it in the future.

      – Influence of Identity: Our unique identities and experiences can influence how we interpret and understand information. This can shape our attitudes, knowledge, and behavior.

      – Neural Adaptation: On a biological level, learning involves changes in the brain’s neural networks. When two neurons fire together, the connections between them—the synapses—grow stronger, indicating that learning has taken place.

      Remember, learning is a lifelong process that continues beyond formal education and takes place throughout our lives.

      ## Bing

      Machines learn from data using a process called machine learning, which is a subset of artificial intelligence. Unlike humans, machines don’t have unique identities or subjective experiences. Instead, they learn by recognizing patterns in the data they are fed and making predictions based on those patterns. Here’s a simplified explanation of how machines learn:

      – Data Input: Machines are fed data, which could be anything from images and text to numerical data.

      – Learning Algorithms: Machines use learning algorithms to process the data. These algorithms are designed to recognize patterns and learn from them. There are three main types of machine learning algorithms: supervised learning, unsupervised learning, and reinforcement learning.

          – Supervised Learning: In supervised learning, the machine is given input data along with the correct output. The goal is to learn a general rule that maps inputs to outputs.

          – Unsupervised Learning: In unsupervised learning, the machine is given input data without any corresponding output data. The goal is to find patterns and structure in the input data.

          – Reinforcement Learning: In reinforcement learning, the machine learns to make decisions by performing certain actions and receiving rewards or penalties.

      – Model Training: The machine learning model is trained on a portion of the data, learning to recognize patterns and make predictions.

      – Model Testing: The trained model is then tested on a different portion of the data to see how well it can make predictions.

      – Prediction: Once the model is trained and tested, it can make predictions on new data.

      – Continuous Learning: Machine learning models can continue to learn and improve over time as they are exposed to more data.

      It’s important to note that while machines can learn from data, they don’t understand the data in the same way humans do. They don’t have consciousness or subjective experiences. They simply process data and make predictions based on patterns.
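The training, testing, and prediction steps listed above can be mirrored in a tiny end-to-end sketch; the nearest-centroid classifier and the two-dimensional data below are invented for illustration.

```python
# A minimal supervised-learning pipeline: data, training, testing on held-out data, prediction.
def centroid(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

def train(samples, labels):
    """Training: summarize each class by the centroid of its examples."""
    return {label: centroid([x for x, y in zip(samples, labels) if y == label])
            for label in set(labels)}

def predict(model, x):
    """Prediction: assign the class whose centroid is closest."""
    return min(model, key=lambda label: sum((a - b) ** 2 for a, b in zip(model[label], x)))

data = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"), ((8, 8), "B"), ((9, 8), "B"), ((8, 9), "B")]
train_set, test_set = data[:4], data[4:]          # hold out part of the data for testing
model = train([x for x, _ in train_set], [y for _, y in train_set])
accuracy = sum(predict(model, x) == y for x, y in test_set) / len(test_set)
print(accuracy, predict(model, (7, 9)))           # evaluate on held-out data, then predict on new data
```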

      ## Bing

      Autopoietic behavior refers to the self-producing and self-maintaining nature of living systems. This concept was introduced by Chilean biologists Humberto Maturana and Francisco Varela to define the self-maintaining chemistry of living cells. Autopoiesis enables living beings to use the specification in their genomes to instantiate themselves using matter and energy transformations. They reproduce, replicate, and manage their stability. An autopoietic system is capable of producing and maintaining itself by creating its own parts.

      On the other hand, cognitive behavior refers to the actions of obtaining and using knowledge. It is concerned with how information is learned and passed on, as well as the creative application of knowledge and personal beliefs. Cognitive psychology involves the study of internal mental processes—all of the workings inside your brain, including perception, thinking, memory, attention, language, problem-solving, and learning. It concerns how we take in information from the outside world, and how we make sense of that information. Cognitive behavior accounts for actions of obtaining and using knowledge.

      It’s important to note that while these behaviors are distinct, they are interconnected. Autopoiesis provides the biological basis for life, while cognition allows for the processing and application of information, enabling complex behaviors and interactions with the environment.

      ## Bing

      The idea of infusing machines with autopoietic and cognitive behaviors to mimic living organisms has been a topic of interest in the field of artificial intelligence. However, current AI technologies fall short of replicating even the basic autopoietic and cognitive behaviors found in primitive biological systems.

      To make machines mimic living organisms, they must be infused with autopoietic and cognitive behaviors. This involves creating a system that can replicate itself and maintain identity and stability while facing fluctuations caused by external influences. Cognitive behaviors would allow the system to model its state, sense internal and external changes, analyze, predict, and take action to mitigate any risk to its functional fulfillment.

      The knowledge obtained from different mechanisms is integrated with a common knowledge representation and is used to manage the system’s structure and dynamics with autopoietic and cognitive behaviors. This approach is analogous to how the neocortex repurposed the reptilian brain and paves the path for digital machines to mimic living organisms using an integrated knowledge representation from different sources.

      However, it’s important to note that while machines can be programmed to mimic certain behaviors, they do not possess consciousness or subjective experiences. They simply process data and make predictions based on patterns. Therefore, while machines can exhibit behaviors that mimic autopoiesis and cognition, they do not truly possess these qualities in the way that living organisms do.

      ## Bing

      Yes, autopoietic and cognitive machines could potentially be treated as extensions of human capabilities. These machines, by updating their knowledge in real-time, can assist in predicting and acting in real-time with updated knowledge.

      Such machines can augment human capabilities in several ways:

      – Real-time Decision Making: These machines can analyze large amounts of data quickly and make real-time decisions based on that data.

      – Predictive Analysis: They can use past data to predict future trends, which can be particularly useful in fields like finance, healthcare, and marketing.

      – Automation: These machines can automate routine tasks, freeing up humans to focus on more complex tasks.

      – Personalization: They can provide personalized recommendations based on individual preferences and behaviors.

      However, it’s important to note that while these machines can mimic certain behaviors, they do not possess consciousness or subjective experiences. They simply process data and make predictions based on patterns. Therefore, while they can augment human capabilities, they are not replacements for human judgment and decision-making.

      C’est la vie.

      I close this post with this quote from a Zen Master

      Before I had studied Zen for thirty years,
      I saw mountains as mountains,
      and waters as waters.
      When I arrived at a more intimate knowledge, I came to the point where I saw that mountains are not mountains,
      and waters are not waters.
      But now that I have got its very substance
      I am at rest.
      For it’s just that
      I see mountains once again as mountains,
      and waters once again as waters.

      Li Ching-Yuen

      Bing

      Here are some quotes from great masters on information, knowledge, intelligence, and wisdom from Bing:

      • “The only true wisdom is in knowing you know nothing.” – Socrates
      • “Knowledge is having the right answers. Intelligence is asking the right questions. Wisdom is knowing when to ask the right questions.” – Unknown
      • “The measure of intelligence is the ability to change.” – Albert Einstein
      • “Intelligence is the door to freedom and alert attention is the mother of intelligence.” – Jon Kabat-Zinn
      • “Knowing yourself is the beginning of all wisdom.” – Aristotle

      Bing Eye’s View of General Theory of Information, Burgin-Mikkilineni Thesis, Autopoietic and Cognitive Automata, and all that Jazz

      Application of the General Theory of Information: A New Approach to Using Machine Intelligence to Create Digital Assistants

      Here is a discourse I had with Bing Chat. Although I have written papers on this subject, I could not have articulated it better.

My understanding, from all the knowledge I have tried to update using various papers, books, and discussions with experts, is that digital automata can be used as tools that extend the functions of our body, brain, and mind, tools that we build, operate, and manage using the knowledge we inherit from our genome and the knowledge we update using the learning processes also inherited from our genome. The General Theory of Information provides the tools to model, build, operate, and manage these digital automata.

      Here are my learnings from this discourse:

• The information received through this process is processed by my brain and converted into knowledge; it is interpreted in light of the knowledge I already possess, so that I either create new knowledge, update existing knowledge, or reject it based on my internal model of myself and my interactions with the universe.
• There are known knowns in this information consistent with my own mental model, such as GTI and the Burgin-Mikkilineni thesis, which I recognize using my associative memory and the event-driven transaction history in my brain.
• There are entities, relationships, and behaviors that are context dependent and belong to different knowledge domains, such as the “digital genome,” which refers to two domains. The first is the operational knowledge for building, operating, and managing domain-specific software, defined through the concepts of GTI. The second is the digital representation of the biological genome using sequences of symbols and its use in various applications of this knowledge. I recognize the context based on my associative memory and update my knowledge accordingly.
• There are known unknowns which, when I encounter them, I either dismiss as irrelevant and discard or, if I consider them relevant, research further to update my knowledge. An example is a neuro-symbolic computing paper that I had not come across before and that is related to my understanding of super-symbolic computing.

      It is the observer processing information that creates or updates knowledge and uses it to make decisions based on the associative memory and the event-driven transaction history already stored in the brain.

The digital genome is a tool to model the operational knowledge needed to create a domain-specific event-driven history and associative memory that augment the human model in the brain, and to use both to make an intelligent decision, as the video demonstrates.

      Food for thought.

      How does a genome provide the knowledge for the system to build itself, and learn from the environment? 

      Bing 

A genome is like an operating manual for an organism. It contains all the instructions that help an organism develop from a single cell into a complex system [5]. These instructions are encoded in the DNA, which is made up of four types of chemical building blocks: adenine (A), thymine (T), cytosine (C), and guanine (G). The order of these letters forms a code that allows DNA to function in different ways [5].

The genome guides growth, helps organs do their jobs, and repairs itself when damaged [5]. Genes, which are segments of DNA, provide the cell with instructions for making specific proteins. These proteins carry out various functions in the body [5]. The process by which information from a gene is used to build a functional product is called gene expression [8].

As for learning from the environment, this is a characteristic of intelligent systems. These systems use sensors to collect information from a specific environment and share it among their different elements to achieve a common goal [1]. This interconnection between the digital and physical worlds is often referred to as the Internet of Things [1].

Moreover, systems thinking suggests that the component parts of a system will act differently when isolated from the system’s environment or other parts of the system [2]. For systems that learn and adapt, the growth and the degree of adaptation depend upon how well the system is engaged with its environment and other contexts influencing its organization [3].

In summary, a genome provides the blueprint for building an organism, while intelligent systems and systems thinking provide frameworks for understanding how a system can learn from and adapt to its environment. 

      Source(s) 

      1. Introduction to Genomics – National Human Genome Research Institute 

      2. Intro to gene expression (central dogma) (article) | Khan Academy 

      3. Intelligent Systems: What are they, how do they work and why … – Algotive 

      4. Systems thinking – Learning for Sustainability 

      5. Systems theory – Wikipedia 

      6. What Is Virtual Learning? | Coursera 

      7. A Brief Guide to Genomics – National Human Genome Research Institute 

      8. What Is a Genome? | PLOS Genetics 

How do the General Theory of Information and the Burgin-Mikkilineni Thesis explain the genome functions assisting in autopoietic and cognitive behaviors? 

      Bing 

      The General Theory of Information (GTI) and the Burgin Mikkilineni (BM) Thesis provide a comprehensive framework to understand how genomes function in autopoietic and cognitive behaviors. 

The GTI, developed by Mark Burgin, provides a model for the operational knowledge of biological systems to build, operate, and manage life processes [1]. It asserts that information is a fundamental constituent of the physical world and that living beings convert information into knowledge using physical structures [2].

The genome, according to GTI, provides the operational knowledge to execute life processes used to build, self-organize, operate, and maintain the system using both inherited and learned knowledge [1]. This operational knowledge is used to assure stability, sustenance, safety, security, and survival in the face of fluctuations in the interactions within the system and with its environment [1].

The BM Thesis, on the other hand, deals with the autopoietic and cognitive behavior of artificial systems. According to the ontological BM thesis, the autopoietic and cognitive behavior of artificial systems must function on three levels of information processing systems and be based on triadic automata [1]. The axiological BM thesis states that efficient autopoietic and cognitive behavior has to employ structural machines [2].

The BM Thesis and GTI together propose a new architecture applying the GTI to make digital automata mimic living organisms by exhibiting autopoiesis and cognitive behaviors [1]. The schema and associated operations derived from GTI are used to model a digital genome specifying the operational knowledge of algorithms executing the software life processes with specific purposes using replication and metabolism [1]. The result is a digital software system with a super-symbolic computing structure exhibiting autopoietic and cognitive behaviors that biological systems also exhibit [1].

      In summary, the GTI and BM Thesis explain how the genome functions in autopoietic and cognitive behaviors by providing a model for operational knowledge and a framework for implementing these behaviors in both biological and artificial systems. 

      Source(s) 

      1. Mark Burgin’s Legacy: The General Theory of Information, the Digital … 

      2. EasyChair Preprint 

      3. Mark Burgin s Legacy: The General Theory of Information,the Digital … 

      4. On the Autopoietic and Cognitive Behavior – EasyChair 

      5. BDCC | Free Full-Text | Infusing Autopoietic and Cognitive Behaviors … 

      6. General Theory of Information, Digital Genome, Large Language Models … 

      7. A New Class of Autopoietic and Cognitive Machines – MDPI 

      8. Cognitive genomics: Linking genes to behavior in the human brain 

      9. Brain Basics: Genes At Work In The Brain | National Institute of … 

      10. 3.9: Genes and Behavior – Social Sci LibreTexts 

      11. A Brief Guide to Genomics – National Human Genome Research Institute 

      12. 10.4: The Structure and Function of Cellular Genomes 

      13. https://doi.org/10.3390/philosophies8060107 

      14. https://doi.org/10.3390/bdcc6010007 

      15. https://doi.org/10.3390/cmsf2023008070 

      16. https://doi.org/10.1162/NETN_a_00003 

      Can you give me an example of a digital software system with super-symbolic computing structure? 

      Bing 

The concept of a digital software system with a super-symbolic computing structure is a theoretical model proposed by Mark Burgin and Rao Mikkilineni [1,2]. This model uses the General Theory of Information to suggest ways to improve the current state of the art of software development, deployment, and operation through the infusion of self-monitoring, self-regulation, and self-protection processes at both the component and system levels [1].

The super-symbolic computing structure is an overlay over current digital symbolic and sub-symbolic structures, very similar to the neocortex in the brain integrating the knowledge obtained from various reptilian cortical columns [1]. This software system is analogous to biological systems using the mammalian neocortex to manage the knowledge [1].

The result is a digital software system with a super-symbolic computing structure exhibiting autopoietic and cognitive behaviors that biological systems also exhibit [1,2]. However, specific examples of such systems are still in the theoretical stage and have not been fully realized in practice yet [1,2]. The development of such systems would represent a significant advancement in the field of artificial intelligence and machine learning [2].

      Source(s) 

      1. Information Theoretical Principles of Software Development 

      2. Mark Burgin’s Legacy: The General Theory of Information, the Digital … 

      3. From Symbolic Computation to Super-Symbolic Computation 

      4. Mark Burgin s Legacy: The General Theory of Information,the Digital … 

      5. https://doi.org/10.3390/philosophies8060107 

      6. https://www.researchgate.net/publication/375622741_Mark_Burgin 

      How does a super-symbolic computing structure differ from other computing structures? 

      Bing 

Super-symbolic computing structures differ from other computing structures in their ability to overlay and integrate knowledge from both symbolic and sub-symbolic computing structures [1,2].

Here are the key differences: 

Symbolic and Sub-symbolic Structures: Traditional computing structures can be broadly categorized into symbolic and sub-symbolic. Symbolic computing involves the manipulation of symbols or discrete units of information, such as in traditional programming languages. Sub-symbolic computing, on the other hand, involves continuous values or distributed representations, such as in neural networks [1,2].

Super-symbolic Structure: The super-symbolic computing structure is an overlay over the current digital symbolic and sub-symbolic structures [1,2]. It integrates the knowledge derived from information gathered from symbolic and sub-symbolic computing structures [2]. This is very similar to the neocortex in the brain integrating the knowledge received from various cortical columns [1,2].

Autopoietic and Cognitive Behaviors: Super-symbolic computing structures are designed to exhibit autopoietic and cognitive behaviors that biological systems also exhibit [1]. They provide an autopoietic and cognitive process overlay over the current symbolic and sub-symbolic computing structures without having to change them [2].

Knowledge Management: Super-symbolic computing structures are analogous to biological systems using the mammalian neocortex to manage the knowledge obtained from various cortical columns [2].

In summary, super-symbolic computing structures represent a significant advancement in the field of artificial intelligence and machine learning, providing a more holistic and integrated approach to information processing [1,2].

      Source(s) 

      1. Philosophies | Free Full-Text | Mark Burgin’s Legacy: The General … 

      2. Information Theoretical Principles of Software Development 

      3. Symbiotic Information Processing and Technological Progress – MDPI 

      4. https://doi.org/10.3390/philosophies8060107 

      How does a super-symbolic computing structure integrate knowledge from symbolic and sub-symbolic structures? 

      Bing 

A super-symbolic computing structure integrates knowledge from symbolic and sub-symbolic structures by functioning as an overlay over these structures [1,3,4]. This is very similar to the neocortex in the brain integrating the knowledge obtained from various cortical columns [1].

In the context of artificial intelligence and machine learning, this integration is often referred to as Neural-Symbolic Computing (NSC) [2,5]. NSC combines the robust learning capabilities of neural networks (sub-symbolic) with the reasoning and interpretability of symbolic representation (symbolic) [2,5].

For example, the processing of images by operation with holistic shapes is an example of super-symbolic computing [3,4]. The advantage of the super-symbolic (transcendent) computation is its ability to operate big formal and informal systems of data and knowledge [3,4].

In summary, a super-symbolic computing structure integrates knowledge from symbolic and sub-symbolic structures by functioning as an overlay over these structures, combining the strengths of both symbolic and sub-symbolic computing [1,2,3,4,5].

      Source(s) 

      1. Philosophies | Free Full-Text | Mark Burgin’s Legacy: The General … 

      2. Symbiotic Information Processing and Technological Progress – MDPI 

      3. From Symbolic Computation to Super-Symbolic Computation 

      4. arXiv:1905.06088v1 [cs.AI] 15 May 2019 

      5. Bridging the gap: Neuro-Symbolic Computing for advanced AI … – Springer 

      6. https://doi.org/10.3390/philosophies8060107 

How does super-symbolic computing provide a transparent, common knowledge representation integrating symbolic and sub-symbolic computing structures, and how does the digital genome specify it using the knowledge network? 

      Bing 

Super-symbolic computing provides a common knowledge representation by functioning as an overlay over symbolic and sub-symbolic computing structures [9,10]. It performs computations on the combined knowledge from both symbolic and sub-symbolic computations to derive higher-order autopoietic and cognitive behaviors [9,10]. This approach is analogous to how the neocortex in the brain integrates the knowledge obtained from various cortical columns [5,6].

In the context of artificial intelligence and machine learning, this integration is often referred to as Neural-Symbolic Computing (NSC) [11,12]. NSC combines the robust learning capabilities of neural networks (sub-symbolic) with the reasoning and interpretability of symbolic representation (symbolic) [11,12].

The digital genome specifies this common knowledge representation using a knowledge network [1,2,4]. The genome provides the operational knowledge to execute life processes used to build, self-organize, operate, and maintain the system using both inherited and learned knowledge [2]. The schema and associated operations derived from the General Theory of Information (GTI) are used to model a digital genome specifying the operational knowledge of algorithms executing the software life processes with specific purposes using replication and metabolism [1,2]. The result is a digital software system with a super-symbolic computing structure exhibiting autopoietic and cognitive behaviors that biological systems also exhibit [1,2].

In summary, super-symbolic computing integrates knowledge from symbolic and sub-symbolic structures, and the digital genome specifies this common knowledge representation using a knowledge network. This results in a digital software system that exhibits autopoietic and cognitive behaviors similar to biological systems [1,2,9,10].

      Source(s) 

      1. From Symbolic Computation to Super-Symbolic Computation – EasyChair 

      2. From Symbolic Computation to Super-Symbolic Computation 

      3. A New Class of Autopoietic and Cognitive Machines – MDPI 

      4. Infusing Autopoietic and Cognitive Behaviors into Digital Automata to … 

      5. Combining Sub-symbolic and Symbolic Methods for Explainability – Springer 

      6. On the integration of symbolic and sub-symbolic techniques for XAI: A … 

      7. General Theory of Information, Digital Genome, Large Language Models … 

      8. Mark Burgin’s Legacy: The General Theory of Information, the Digital … 

      9. General Theory of Information, Digital Genome, Large Language Models … 

      10. A New Class of Autopoietic and Cognitive Machines – MDPI 
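As a thought experiment only, the sketch below shows one way an “overlay” that integrates a symbolic rule with a sub-symbolic (learned) scorer into a shared, inspectable knowledge record might look; the names, thresholds, and the self-monitoring step are my own assumptions and not the structural-machine implementation from the cited papers.

```python
def symbolic_component(reading):
    """Explicit, hand-written rule: temperatures above 40 are flagged."""
    return {"claim": "overheating", "support": reading > 40}

def subsymbolic_component(reading):
    """Stand-in for a learned model: returns a score in [0, 1] instead of a hard rule."""
    return {"claim": "overheating", "score": min(max((reading - 30) / 20, 0.0), 1.0)}

class Overlay:
    """Supervisory layer holding a shared knowledge record and a simple self-monitoring check."""
    def __init__(self):
        self.knowledge = []   # shared, inspectable record of what both components concluded

    def integrate(self, reading):
        sym = symbolic_component(reading)
        sub = subsymbolic_component(reading)
        agree = sym["support"] == (sub["score"] > 0.7)   # the learned scorer is more conservative here
        entry = {"reading": reading, "symbolic": sym, "subsymbolic": sub, "consistent": agree}
        self.knowledge.append(entry)
        if not agree:
            entry["action"] = "escalate for review"      # cognitive-style self-monitoring step
        return entry

overlay = Overlay()
print(overlay.integrate(45))   # both components agree: overheating
print(overlay.integrate(42))   # borderline case where the two views disagree, so the overlay escalates
```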

Is there any implementation that demonstrates the feasibility of defining a digital genome and creating an autopoietic and cognitive software system? 

      Bing 

Yes, there are ongoing efforts to implement digital genomes and create autopoietic and cognitive software systems. For instance, a medical-knowledge-based digital assistant has been implemented that uses medical knowledge derived from various sources including large language models [4]. The digital genome in this system specifies the knowledge to execute various tasks that implement functional requirements, non-functional requirements, and best practices to assure that the process objectives are achieved [4].

Another example is the field of Digital Genome Engineering, which uses computational approaches to overcome challenges through modeling and optimization of genome engineering applications [5]. This approach has been used to develop models for predicting the overall activity of gene editing systems [5].

Moreover, there are attempts to make modern computers mimic the resilience and intelligence of living beings using symbolic and sub-symbolic computing [1,2]. These efforts discuss the limitations of classical computer science for implementing autopoietic and cognitive behaviors in digital machines and propose a new architecture applying the General Theory of Information (GTI) to make digital automata mimic living organisms by exhibiting autopoiesis and cognitive behaviors [1,2].

However, it’s important to note that these are still areas of active research and development, and the full realization of these concepts in practice is a work in progress [1,2,4,5].

      Source(s) 

      1. General Theory of Information, Digital Genome, Large Language Models … 

      2. What is Digital Genome Engineering? – Transformational Bioinformatics 

      3. BDCC | Free Full-Text | Infusing Autopoietic and Cognitive Behaviors … 

      4. A New Class of Autopoietic and Cognitive Machines – MDPI 

      5. A New Class of Autopoietic and Cognitive Machines 

      6. Emerging Applications of Digital Genomics in Next-Generation … – BioSpace 

      7. https://doi.org/10.3390/cmsf2023008070 

      8. https://doi.org/10.3390/bdcc6010007 

      9. https://doi.org/10.3390/info13010024 
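Purely as an illustration of the kind of specification described above (and not the medical digital assistant from the cited work), a digital-genome-style task entry might pair functional and non-functional requirements with best practices and a small run-time check:

```python
# Hypothetical illustration only: the task name, fields, and thresholds are invented.
digital_genome = {
    "task": "answer_patient_question",
    "functional": {"inputs": ["question"], "outputs": ["answer", "cited_sources"]},
    "non_functional": {"max_response_seconds": 2.0, "min_sources": 1},
    "best_practices": ["cite sources", "defer to a clinician when uncertain"],
}

def run_task(spec, question, answer, sources, elapsed_seconds):
    """Execute one task instance and verify it against the genome's non-functional requirements."""
    violations = []
    if elapsed_seconds > spec["non_functional"]["max_response_seconds"]:
        violations.append("too slow")
    if len(sources) < spec["non_functional"]["min_sources"]:
        violations.append("missing sources")
    return {"question": question, "answer": answer, "sources": sources,
            "violations": violations or None}

print(run_task(digital_genome, "Is this dosage safe?", "Please confirm with your clinician.",
               ["clinical guideline X"], elapsed_seconds=1.2))
```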

      Machine Intelligence, Human Intelligence, the Future of AI, General Theory of Information, and all that Jazz

      Figure 1: The relationship between human intelligence and machine intelligence.

      What is Natural Intelligence?

Here is a dialogue between the 60 Minutes TV show host and the CEO of Google, the company that created one of the AI programs that can be used either to create fake information that causes deliberate harm or to provide deep insights from data and support decision-making better than any single human being can. The machine in both cases has been taught how to convert information into a huge pool of knowledge that can be used by humans or other machines that interact with it.

Google CEO on AI: “There is an aspect of this which we, all of us in the field, call a ‘black box.’ You know, you don’t fully understand, and you can’t quite tell why it said this or why it got it wrong. We have some ideas, and our ability to understand this gets better over time, but that is where the state of the art is.”

      60 Minutes: “You don’t fully understand how it works and yet you’ve turned it loose on society?”

      Google CEO: “Let me put it this way. I don’t think we fully understand how the human mind works either.”

      This is the old school of classical computer science. However, we have come a long way from the old school. The new science of information processing structures derived from our knowledge of genomics, neuroscience, and the general theory of information tells us that human intelligence stems from the knowledge encapsulated in the genome of biological systems and is transmitted from the survivor to the successor.

(See Mikkilineni, R. The Science of Information Processing Structures and the Design of a New Class of Distributed Computing Structures. Proceedings 2022, 81, 53. https://doi.org/10.3390/proceedings2022081053)

The genome provides the operational knowledge to implement biological life processes using 30+ trillion cells, which are autonomous, collaborate in groups, have the ability to process information, create knowledge, and use metabolism (conversion of matter and energy). The specification provides full knowledge of the functional requirements and their fulfillment, the non-functional requirements and their fulfillment, and the best practices passed on from the survivors to their successors, including how to fight a virus that affected their ancestors in the past.

“The [development of the] single fertilized egg cell into a full human being is achieved without a construction manager or architect. The responsibility for the necessary close coordination is shared among the cells as they come into being. It is as though each brick, wire, and pipe in a building knows the entire structure and consults with the neighboring bricks to decide where to place itself.”

Yanai, Itai; Lercher, Martin. The Society of Genes. Harvard University Press. Kindle Edition, p. 11.

      Without the genome, there is no natural intelligence. The genome enables both the autopoietic and cognitive behaviors exhibited by biological systems. For a detailed discussion of autopoietic and cognitive behaviors, please refer to A New Class of Autopoietic and Cognitive Machines and the references cited therein.

      The Crux of the Problem of Current AI:

Machine intelligence, which Alan Turing described in his prophetic 1948 paper “Intelligent Machinery,” has proven to exceed his imagination; its use with both symbolic computing and machine learning has contributed to business process automation and data-driven, insight-based decision making. Recent advances using large language models and generative AI, while providing an order-of-magnitude improvement in knowledge acquisition and its use, have also created an opportunity for the abuse of technology by evil-doers and bad actors who create fake information and synthetic media. This brings up the question of ethics in using AI and the need for some kind of regulation with checks and balances.

The original thinkers of machine intelligence, such as John von Neumann, Alan Turing, and many others, first tried to guess how the mind works and then to build machines that mimic the aspects of the mind they could decipher. Symbolic computing was proposed by Alan Turing with this statement: “We may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions.” The Turing machine led to machines that automate process execution and improve the efficiency, resiliency, and scalability of human operations. Sub-symbolic computing was proposed by several scientists and mathematicians, including McCulloch, Pitts, John von Neumann, and Turing, who guessed how neurons and neural networks work in the brain and mimicked their function with algorithms executed on symbolic computing structures. Thus, current AI and process automation algorithms run on an infrastructure that uses sequences of symbols to represent information, and other sequences of symbols that represent operations on that information, to execute processes defined as algorithms or sequences of tasks.

      For a detailed account of the evolution of machine intelligence and the relationship between human and machine intelligence, please see the video “Evolution of Machine Intelligence”

The long and short of the lesson from the science of information processing structures is that the current symbolic and sub-symbolic computing structures are limited in their ability to mimic human intelligence, which has evolved over billions of years and is transmitted by the survivors to their successors through the genome. The genome is a specification of operational knowledge that has the ability to create a “self” using 30+ trillion process-executing cells, each with the knowledge to function autonomously and to collaborate with other cells on shared processes with shared knowledge, using matter and energy transformations.

      We now know much more about how the mind operates with the body and brain through great advances in genomics, neuroscience, and our understanding of the general theory of information and how biological systems use genome-derived intelligence to interact with the world. Information is the bridge between the material world and the mental world of the biological system that is made possible through knowledge from the genome. Thus matter, energy, information, and knowledge play important roles in the development of human intelligence and therefore also play a deeper role in building machines that truly mimic the intelligence of biological systems.

      Current tools used to build truly intelligent systems fall short of mimicking genome-derived intelligence in two key aspects discussed in the video mentioned above:

1. Sub-symbolic and symbolic computing structures are not adequate to define the functional requirements, the non-functional requirements, and the best practices to execute the life processes of the computer and the computed. The reason lies in the inadequacy of sequences of symbols, and of operations on them expressed as other sequences of symbols. Alan Turing captured only a part of how human intelligence works; sub-symbolic computing is also an algorithm that uses sequences of symbols.
2. Emergence is a property of complex adaptive systems in which complex structures, under the influence of large fluctuations, undergo a phase transition that is not under the control of the system; the structure itself has no control over the outcome. Biological systems have devised the genome in order to avoid emergence and define their own destiny, based on the specification and execution of life processes using the relationships between information, knowledge, matter, and energy. The general theory of information provides a framework to understand the relationship between the material world and the mental world created through the genome, using the theory of ideal structures.

      The general theory of information provides a path to infuse both cognitive and autopoietic behaviors using a digital genome. See  Information Theoretical Principles of Software Development.

A digital genome-based information processing structure is composed of both symbolic and sub-symbolic computing structures and provides model-based, transparent cognitive processes that address the black-box problem. It allows us to create a digital replica of a domain-specific model of physical reality, establish real-time synchronization between the digital and material worlds, and use digital information processing to manage our interactions with the material world using data grounded in objective reality. Information is the bridge between the two worlds.
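To make this concrete, here is a minimal sketch of how a digital genome might be expressed in code as a declarative specification of functional requirements, non-functional requirements, and best practices, with a simple regulation step. All class and field names (DigitalGenome, FunctionalRequirement, regulate, and so on) are hypothetical illustrations, not an implementation from the cited papers.

```python
# Hypothetical sketch of a digital genome as a declarative specification.
# Names and structure are illustrative only, not taken from the cited papers.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class FunctionalRequirement:
    name: str
    task: Callable[[dict], dict]        # the process a component executes

@dataclass
class NonFunctionalRequirement:
    name: str
    check: Callable[[dict], bool]       # predicate over the observed system state
    remedy: Callable[[dict], None]      # corrective (autopoietic) action

@dataclass
class DigitalGenome:
    functional: List[FunctionalRequirement] = field(default_factory=list)
    non_functional: List[NonFunctionalRequirement] = field(default_factory=list)
    best_practices: Dict[str, str] = field(default_factory=dict)

    def regulate(self, state: dict) -> None:
        """Cognitive step: compare the sensed state with the specification and
        trigger repair whenever a non-functional requirement is violated."""
        for nfr in self.non_functional:
            if not nfr.check(state):
                nfr.remedy(state)
```

The point of the sketch is only that the genome is a specification that is read and enforced at run time, rather than code that is executed once and forgotten.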

The transparency of the model-based reasoning and its connection to physical reality reduce the knowledge gap between the various actors involved in the real world and reduce the mistakes often caused by the self-referential circularity of human logic not moored to external reality. (See Life After Google for a discussion of Gödel’s theorem and the impact of self-referential circularity not moored to external reality.)

The digital genome approach is currently being applied to specific business problems such as self-regulating distributed software applications and medical-knowledge-based digital assistants that reduce the knowledge gap between patients and healthcare service providers by giving the right information, at the right time, in the right format, to the parties that need it to make the right decision.

The presentation in this video shows a new approach to creating transparent, model-based machine intelligence that captures associative long-term memory based on event history. The system is designed to use medical knowledge from various sources, including large language models (LLMs), to create and use event history in the early medical disease diagnosis process. It is designed using the Structural Machines, Cognizing Oracles, and Knowledge Structures suggested by the General Theory of Information. (A small sketch of the event-history idea follows the references below.) This video captures the work from these three papers:

      Mikkilineni, Rao. 2023. “Mark Burgin’s Legacy: The General Theory of Information, the Digital Genome, and the Future of Machine Intelligence” Philosophies 8, no. 6: 107. https://doi.org/10.3390/philosophies8060107

      Kelly, W. Patrick, Francesco Coccaro, and Rao Mikkilineni. 2023. “General Theory of Information, Digital Genome, Large Language Models, and Medical Knowledge-Driven Digital Assistant” Computer Sciences & Mathematics Forum 8, no. 1: 70. https://doi.org/10.3390/cmsf2023008070

      Mikkilineni, R.; Kelly, W.P. A New Class of Intelligent Machines with Self-Regulating, Event-Driven Process Flows for Designing, Deploying, and Managing Distributed Software Applications. Preprints 2023, 2023111104.
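To make the idea of an associative long-term memory built from an event history concrete, here is a small, hypothetical sketch. It is not the architecture described in these papers; the Event and EventMemory names, fields, and the indexing scheme are invented purely for illustration.

```python
# Hypothetical sketch of an associative long-term memory over an event history.
from collections import defaultdict
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Event:
    timestamp: str
    subject: str                 # e.g., a patient identifier
    attributes: Dict[str, str]   # e.g., {"symptom": "fever"}

@dataclass
class EventMemory:
    events: List[Event] = field(default_factory=list)
    index: Dict[str, List[int]] = field(default_factory=lambda: defaultdict(list))

    def remember(self, event: Event) -> None:
        """Store an event and index each attribute value for associative recall."""
        self.events.append(event)
        position = len(self.events) - 1
        for value in event.attributes.values():
            self.index[value].append(position)

    def recall(self, cue: str) -> List[Event]:
        """Return every remembered event associated with the cue, oldest first."""
        return [self.events[i] for i in self.index.get(cue, [])]

memory = EventMemory()
memory.remember(Event("2024-01-05", "patient_001", {"symptom": "fever"}))
memory.remember(Event("2024-01-07", "patient_001", {"symptom": "rash"}))
print([e.timestamp for e in memory.recall("fever")])   # ['2024-01-05']
```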

Perhaps if classical computer scientists studied the science of information processing structures, we could make progress in building predictable, stable, safe, and secure machine intelligence that assists humans in making real-time, life-and-death decisions based on transparent and open knowledge. It is all about reducing the knowledge gap between the various actors making decisions, with transparent and timely access to information that reduces the impact of self-referential human circularity not moored to external reality. Machines are better at processing information than humans if they are endowed with well-designed digital genomes to perform specific functions. Humans are ultimately responsible for how we use machines: good people use them wisely, and evil people use them to subvert language, freedom, democracy, and ultimately civilization itself. It is therefore imperative for good people to develop antidotes with better technology that transcends today’s classical computer science and its limitations. The digital genome provides the architecture for transparency, verification of facts, and insights based on external reality.

The Jazz metaphor here is very apt. The thesis of classical computer science is met with the antithesis of machine intelligence based on our current half-knowledge about the nature of human intelligence. The synthesis is perhaps provided by the new knowledge about the role of the genome and our understanding, from the general theory of information, of how to infuse autopoietic and cognitive behaviors into digital automata to improve their sentience, resilience, and intelligence.

      Perhaps we need to graduate from the old school of classical computer science to the new school of the science of information processing structures by updating our knowledge from data structures to knowledge structures and from Turing Machines to Structural Machines.

      Food for thought.

      Human Intelligence and Machine Intelligence, what is the Difference?

      Figure 1: The observer and the observed according to the General Theory of Information articulated by prof. Mark Burgin (www.tfpis.com)

      Abstract

Making computing machines mimic living organisms has captured the imagination of many since the dawn of digital computers. According to Charles Darwin, the difference in mind between humans and the higher animals, great as it is, certainly is one of degree and not of kind. Human intelligence stems from the genome that is transmitted from the survivor to the successor. Machine intelligence stems from humans designing how human knowledge can be represented as sequences of symbols (data structures) and operated on by programs, themselves sequences of symbols, to model and interact with the world. The evolution of data structures operated on by programs, using John von Neumann’s stored-program implementation of the Turing Machine, leads to process automation and to insights gained by programs mimicking the neural networks of the human brain. This blog explores the difference between the current state of the art of human and machine intelligence using the General Theory of Information.

      Introduction

According to a dictionary definition from Oxford Languages, intelligence is the ability to acquire and apply knowledge and skills. Knowledge therefore plays a crucial role, and the mechanisms that acquire, process, and use knowledge to execute specific goals are central to developing the required skills and applying intelligence.

Human intelligence stems from the knowledge transferred by the survivors to their successors in the form of a genome. The genome contains all of the information (for executing the life processes) needed for a human to develop and grow. It contains the operational knowledge to create, monitor, and manage 30+ trillion cells, each cell executing a process that uses replication to specialize and grow, and metabolism to use energy and matter transformations to derive the resources needed to execute the functions constituting the life processes. The trillions of cells thus created behave like a community in which individual cell roles are well defined, relationships with other cells are defined through shared knowledge, and cells collaborate by exchanging messages governed by specific relationships and behaviors.

“The [development of the] single fertilized egg cell into a full human being is achieved without a construction manager or architect. The responsibility for the necessary close coordination is shared among the cells as they come into being. It is as though each brick, wire, and pipe in a building knows the entire structure and consults with the neighboring bricks to decide where to place itself.”

Yanai, Itai; Lercher, Martin. The Society of Genes. Harvard University Press. Kindle Edition, p. 11.

      The information is carried in the genome as operational knowledge and that brings up the question “Knowledge and Information – What is the Difference?”

      The general theory of information (GTI) gives a comprehensive answer to this question. See M. Burgin and R. Mikkilineni, “General Theory of Information Paves the Way to a Secure, Service-Oriented Internet Connecting People, Things, and Businesses,” 2022 12th International Congress on Advanced Applied Informatics (IIAI-AAI), Kanazawa, Japan, 2022, pp. 144-149, doi: 10.1109/IIAIAAI55812.2022.00037. p. 146

      “While some researchers proclaim that information is a sort of data, while others maintain that information is a kind of knowledge, the scientific approach tells that it is more adequate treating information as an essence that has a dissimilar nature because other concepts represent various kinds of structures. Assuming that matter is the name for all substances and the vacuum as opposed to energy, then relations between information and knowledge bring us to the Knowledge-Information-Matter-Energy.”

      According to the General Theory of Information (GTI), “Information is related to knowledge as energy is related to the matter. Energy has the potential to create, preserve or modify material structures, while information has the potential to create, preserve or modify knowledge structures. Energy and matter belong to the physical world, whereas information and knowledge belong to the world of ideal structures and are represented in the mental world.”

      The genome bridges the material world to the mental world by providing the knowledge to build autopoietic and cognitive processes dealing with information acquisition, processing, and its conversion into knowledge which provides the fuel for higher-level intelligence. Autopoiesis refers to the behavior of a system that replicates itself and maintains identity and stability while facing fluctuations caused by external influences. Cognitive behaviors model the system’s state, sense internal and external changes, analyze, predict and take action to mitigate any risk to its functional fulfillment. A single cell in a biological system is both autopoietic and cognitive. Each cell is endowed with all the knowledge to use metabolism (conversion of matter and energy) to build the required material and mental structures to execute various life processes. It provides the knowledge to replicate and assume various roles with specialized functions, and build composite structures that not only perform specialized functions, but also orchestrate the system as a whole to maintain non-functional requirements that maintain stability, safety, security, and survival, while fulfilling various functional requirements to interact with the environment and execute various life processes including the creation and the use of cognitive mental structures that model and interact with the world. Figure 1 summarizes the learnings from the General Theory of Information.
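The cognitive and autopoietic behaviors described above can be summarized as a control loop: sense the state, repair the structure when identity or stability is threatened, predict where the state is heading, and act to mitigate the predicted risk. The sketch below is only an illustration of that loop; the function names and signatures are assumptions, not a prescribed design.

```python
# Illustrative sense-model-predict-act loop with autopoietic repair.
from typing import Callable, Dict

State = Dict[str, float]

def regulation_step(sense: Callable[[], State],
                    healthy: Callable[[State], bool],
                    repair: Callable[[], None],
                    predict: Callable[[State], State],
                    at_risk: Callable[[State], bool],
                    act: Callable[[State], None]) -> None:
    """One iteration of self-regulation.

    Autopoietic behavior: regenerate or repair components so the system
    maintains its identity and stability. Cognitive behavior: model and
    predict internal/external changes and act to mitigate predicted risk."""
    state = sense()
    if not healthy(state):
        repair()                 # restore the structure before anything else
    forecast = predict(state)
    if at_risk(forecast):
        act(forecast)            # pre-empt the predicted threat to fulfillment
```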

Machine intelligence stems from the stored-program implementation of the Turing Machine, derived from Alan Turing’s observation of how humans use numbers and operations on them. This 5-minute video summarizes my understanding of the evolution of machine intelligence and its relationship to human intelligence.

      Evolution of Machine Intelligence and its relationship to human intelligence.

      Conclusion

      I am neither a computer scientist nor a philosopher.

When I came to the United States as a graduate student to study physics, computer science was not an academic discipline, and most of the computers were used by physicists, engineers, and mathematicians. I had the privilege of learning physics from some of the great physicists of that time. I did my Ph.D. thesis under the guidance of a well-known solid-state physicist, Walter Kohn, who received the Nobel Prize in Chemistry in 1998. My work involved using computers to solve many-body physics problems, and during my stays at the University of Paris, Orsay, and the Courant Institute in New York, I worked on force-biased Monte Carlo and molecular dynamics simulations, collaborating with eminent physicists and mathematicians including Loup Verlet, D. Levesque, Malvin Kalos, Joel Lebowitz, Geoffrey Chester, and Jerome Percus. Later, my collaboration with Bruce Berne at Columbia University resulted in interesting new approaches to both Monte Carlo and molecular dynamics simulations for studying hydrophobic interaction. Later still, I had the opportunity to join Bell Labs, where I participated in many innovative approaches to expert system development and object-oriented approaches to automating business and operations support systems. Currently I teach machine learning and cloud computing to graduate students, and I practice what I teach as the CTO of Opos.ai, a healthcare startup helping to bridge the knowledge gap between patients and healthcare professionals, using machine intelligence to augment human intelligence by increasing their knowledge, predicting the consequences of their possible actions, and reducing the human stupidity that results from the self-referential circularity of logic not moored to external reality.

My accidental interest in computer science and the theory behind computation began when I was examining the complexity of deploying, operating, and managing applications, especially in a distributed environment where CAP theorem limitations were becoming significant. The CAP theorem states that a distributed system can deliver only two of three desired characteristics: consistency, availability, and partition tolerance. As the demand for 24/7 availability and consistency of widely distributed workloads across the globe grew, the need to circumvent the CAP theorem limitation was becoming obvious. At the same time, I also noticed that Prof. Peter Wegner and others were pointing to the limitations of the Turing computing model on which all general-purpose computers are based. The subject was controversial, and a vigorous debate was raging, just as it is now between AI enthusiasts and AI critics. I wrote to a few prominent computer scientists to find out how we could look at the computing model to overcome the CAP theorem limitations. Unfortunately, they were either busy with their own work or did not have a good answer. It also became clear to me, after reading Penrose’s articulation of the Turing machine and Feynman’s detailed lectures on Turing machines, that all connected Turing machines are sequential and have problems supporting asynchronous and parallel computations, and that the Church-Turing thesis (which is discussed in the video) has limitations. I became an accidental student of computer science and started to study the evolution of computing, its progress, and the role of the general theory of information in relating computing based on sequences of symbols to information processing that goes beyond symbolic computing. Both Peter Wegner and Mark Burgin mentored and shaped my understanding of computer science and its relationship to the material, mental, and digital worlds we live in.

I am sharing my understanding here so that it may give the next generation of computer scientists and information technology professionals a head start that I did not have. Hopefully, the curious among them will be able to prove or disprove my understanding and make a contribution to our understanding of the difference between human and machine intelligence.

Here are a few conclusions I came to as a student of the General Theory of Information and a practitioner of machine intelligence applications.

      First, current digital symbolic computing and sub-symbolic computing structures are powerful in providing process automation and creating large knowledge pools from which we can derive insights to make decisions. Figure 2 provides the current state of the art of human and machine intelligence.

Figure 2: The genome-derived human intelligence is compared with symbolic-computing-derived machine intelligence. Humans apply their wisdom to take advantage of process automation and the insights from sub-symbolic computing.

The genome-derived human intelligence is a multilayered network of networks with self-regulation and orchestration of trillions of autonomous cellular processes organized as local, clustered, and global structures communicating and collaborating with shared knowledge. Each instance is unique, and the knowledge and the autopoietic and cognitive behaviors evolve based on the individual’s unique experiences and history. As a result, each individual, with a unique mental world, interacts with the outside world, including other genome-derived entities as well as the material world.

Self-identity defines the individual, and unique experience and history define the mental structures. Humans have also developed cultures in which groups form societies with social contracts that define a societal genome and lead to collective intelligence.

While individual and collective intelligence have contributed to improving skills and knowledge, leading to higher intelligence and a higher quality of life, both suffer from self-referential circularity and can lead to human stupidity if not moored to external reality by a higher-level logic. Major conflicts in human history derive from self-referential circularity not moored to external reality, and in many cases we have survived annihilation only by accident or luck. Human stupidity has nothing to do with the tools we design and use; it has more to do with the self-referential circularity of our logic.

Machine intelligence, on the other hand, has no self-identity or self-regulation. It is a collection of programs designed by humans that automate processes and provide insights for decision-making by analyzing large pools of information and creating large knowledge pools. These knowledge pools are not intelligent by themselves; it takes other programs written by humans, or humans themselves, to use them as they see fit.

Therefore, how humans use these process automation programs and knowledge pools dictates the results, which depend on whether that use is biased by human self-referential circularity or guided by a higher-level logic that resolves the inconsistencies of lower-level logics. At this point, the higher-level self-regulation can only be provided by humans, because machines lack a self-identity, a group identity, and a self-regulation mechanism.

      While humans have the ability to create a societal genome-based self-regulation, there seems to be a conflict between autocratic, oligarchic, and democratic mechanisms of self-regulation. They seem to compete to eliminate each other.

However, according to the General Theory of Information, it is possible to create a digital genome that addresses a specific goal by defining the functional requirements, the non-functional requirements, and the best practices learned from experience, and that executes both cognitive and autopoietic behaviors with real-time self-regulation and knowledge acquisition. Super-symbolic computing, with structural machines operating on knowledge structures that are constantly updated by symbolic and sub-symbolic processes, will perhaps offer a means to reduce the knowledge gap between the various actors making decisions in real time. Hopefully, transparency based on model-based reasoning will help reduce the knowledge gap and foster confidence.
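A minimal sketch of what a knowledge structure constantly updated by symbolic and sub-symbolic processes could look like is given below. The KnowledgeStructure class, the confidence scores, and the threshold are my own illustrative assumptions, not a definition from GTI or the structural-machine literature.

```python
# Hedged sketch of a knowledge structure kept current by both a symbolic rule
# and a sub-symbolic (statistical) estimator; names and thresholds are invented.
from typing import Callable, Dict, List, Tuple

Fact = Tuple[str, str, str]                 # (entity, relation, value)

class KnowledgeStructure:
    def __init__(self) -> None:
        self.facts: Dict[Fact, float] = {}  # fact -> confidence

    def assert_fact(self, fact: Fact, confidence: float) -> None:
        # Keep the strongest evidence seen so far for each fact.
        self.facts[fact] = max(confidence, self.facts.get(fact, 0.0))

    def apply_symbolic_rule(self, rule: Callable[[List[Fact]], List[Fact]]) -> None:
        # Symbolic update: derive new facts by deduction from existing ones.
        for derived in rule(list(self.facts)):
            self.assert_fact(derived, 1.0)

    def apply_subsymbolic(self, score: Callable[[Fact], float],
                          candidates: List[Fact], threshold: float = 0.8) -> None:
        # Sub-symbolic update: admit candidate facts whose learned score is high.
        for fact in candidates:
            s = score(fact)
            if s >= threshold:
                self.assert_fact(fact, s)
```

The design choice illustrated here is only that deductive (symbolic) and learned (sub-symbolic) updates feed the same shared structure, which is what keeps the reasoning transparent and inspectable.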

Whatever path machine intelligence takes as it evolves, the General Theory of Information tells us that digital neural networks implemented using symbolic computing alone will not, by themselves, become super-intelligent by developing higher-level reasoning through induction, deduction, and abduction. Current symbolic and sub-symbolic computing structures are limited by the shortfalls discussed in the video, and they can easily be exploited by human greed and power-mongering. Recent attempts to pervert language (a carrier of information that has the potential to create or update the knowledge of the receiver) seem to be a popular weapon of those wielding power with autocratic and oligarchic regulation. Unfortunately, there is no antidote for human stupidity. My only hope is that the next generation of computer scientists and information technology professionals will develop digital genome-based systems (using super-symbolic computing) that reduce the knowledge gap between the various actors involved in decision making, provide assistance in exposing the pitfalls of self-referential circularity, and suggest ways to move forward with higher levels of intelligence to combat human stupidity.

      General Theory of Information, Managing the Business of Life, Life Processes, Free Will, and all that Jazz.

      Figure 1. The genome, the neural networks, and the networks of genes define and execute biological life processes using both wired and learned knowledge. The digital genome, cognitive, and autopoietic oracles (or agents), networks of digital genes (programs executing algorithmic processes), and digital neurons (algorithms mimicking the neural network behaviors) execute digital life processes using computing machines.

      According to Encyclopedia Britannica, “free will, in philosophy and science, is the supposed power or capacity of humans to make decisions or perform actions independently of any prior event or state of the universe.”

Free Will (Stanford Encyclopedia of Philosophy) provides a good discussion with substantial revisions (Nov 3, 2022).

      I asked ChatGPT some questions about free will and got these responses.

While his nationality was not quite correct (it was corrected when I pointed it out), I thought the response was as good as any I find in LinkedIn discussions, especially the observation about Shannon’s theory and the extensions implied in GTI, which are misunderstood by many computer scientists and physicists. (See Information | Free Full-Text | Is Information Physical and Does It Have Mass? mdpi.com)

      What are the differences between the people discussing free will who possess different knowledge and a computer program that has been taught to process information to create knowledge and interact with people, things, and other agents that possess and use knowledge? This is a fascinating subject by itself.

      General Theory of Information and the Business of Managing Life Processes:

Having spent several years studying GTI and discussing it with Mark, I have come to view the material world, the mental world, and the digital world as consisting of structures represented by the ideal structures that Mark provided as a scientific interpretation of Plato’s world of “Ideas/Forms”: ideal structures in the form of Fundamental Triads or Named Sets. “There are different forms of mathematical definitions of named sets/fundamental triads: in categories (Burgin, 1990), in set theory (Burgin, 2004c), and by axioms (Burgin, 2004b). Axiomatic representation of named sets shows that named set theory, as a formalized mathematical theory, is independent from set theory and category theory. When category theory is built independently from set theory, then categorical representations of named sets are also independent of set theory. It is also necessary to emphasize that physical fundamental triads (named sets, i.e., fundamental triads that are structures of physical objects) are independent from set theory.” (See Mark Burgin (2010). “Theory of Information: Fundamentality, Diversity, and Unification.” p. 571.)
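Reading a fundamental triad (named set) as a triple consisting of a support, a set of names, and a naming correspondence between them, one can sketch it as a small data structure. This is only my reading of Burgin’s definition, rendered for illustration; it is not a formalization taken from the cited texts.

```python
# Minimal sketch of a fundamental triad (named set) as a data structure:
# a support X, a set of names I, and a naming correspondence f between them.
from dataclasses import dataclass
from typing import FrozenSet, Set, Tuple

@dataclass(frozen=True)
class NamedSet:
    support: FrozenSet[str]                  # X: the objects being named
    names: FrozenSet[str]                    # I: the names (reflector)
    naming: FrozenSet[Tuple[str, str]]       # f: pairs (object, name)

    def names_of(self, obj: str) -> Set[str]:
        """All names connected to a given object by the correspondence f."""
        return {name for (o, name) in self.naming if o == obj}

# Example: a tiny triad naming two material structures.
cell_triad = NamedSet(
    support=frozenset({"cell_1", "cell_2"}),
    names=frozenset({"neuron", "hepatocyte"}),
    naming=frozenset({("cell_1", "neuron"), ("cell_2", "hepatocyte")}),
)
```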

The long and short of GTI is that it gives the most comprehensive definition of information, encompassing and clarifying many of the writings about information in the literature. In addition, it provides an efficient means for deriving properties of information representations and information carriers (language is one such carrier). GTI relates the material structures and mental structures in biological systems by relating energy and matter transformations to information and knowledge transformations: information is to knowledge as energy is to matter.

      Figure 2. The relationship between material structures, mental structures, and digital structures is described through the named sets.

Figure 2 shows the relationship between information that belongs to the material world (ontological information) and its representation in the mental world (epistemological information), received through cognitive apparatuses (in this case, the sensors and actuators that biological systems have built using material structures). Biological systems, which have evolved through natural selection from material structures, have developed the unique capability of managing their structural stability by minimizing entropy and exchanging energy with their environment, even under the influence of large fluctuations caused by external forces. Over time, they have evolved not only to manage their structures and their interactions with their environment but also to specify and execute them using replication and metabolism, and the survivors pass this specification on to their successors in the form of a genome.

Material structures are a result of energy and matter transformations; energy has the potential to create or modify matter. Material structures contain information about their state and dynamics, which change as fluctuations in energy arise from their interactions with each other and with external forces (their phase space). According to GTI, information is not physical by itself, although it can have physical and/or mental representations. These representations can be depicted by fundamental triads or named sets.

Biological structures, while they are also material structures, distinguish themselves by a self-identity and a global purpose that is specified and executed through various sub-structures: autonomous components executing processes, collaborating with shared knowledge, and exchanging information in real time. As the scale of the structures increases, material structures behave as complex adaptive systems and undergo emergence when subject to large fluctuations caused by external or internal forces impacting their interactions; this leads to phase transitions where structures are reconfigured. The life processes are specified in the genome and are executed when the appropriate resources are available (matter and energy in the womb, for example) through the mechanisms of replication and metabolism. In a human being, 30+ trillion cells are created and managed as a society of autonomous cells, each executing well-defined processes with knowledge shared with other cells and exchanging information as specified in that shared knowledge.

The management of life processes is executed at various levels: local (in each cell), clustered (groups of cells executing a collective process), and global, where management balances the global requirements and constraints against the cluster requirements and constraints. Group-level management, in turn, balances the group requirements and constraints against the individual cell requirements and constraints.
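The sketch below illustrates this idea of nested self-regulation with invented numbers and function names: each level scales the requests it receives so that its own constraint is respected, and the global constraint takes priority. It is an assumption-laden illustration, not a model of actual cellular regulation.

```python
# Hypothetical sketch of hierarchical reconciliation of requirements and constraints.
from typing import Dict, List

def local_requests(cells: List[str], demand: float) -> Dict[str, float]:
    # Each autonomous component states what it needs to meet its own requirements.
    return {cell: demand for cell in cells}

def cluster_reconcile(requests: Dict[str, float], cluster_cap: float) -> Dict[str, float]:
    # The cluster scales requests down proportionally to respect its own constraint.
    total = sum(requests.values())
    scale = min(1.0, cluster_cap / total) if total else 1.0
    return {cell: amount * scale for cell, amount in requests.items()}

def global_allocate(clusters: Dict[str, Dict[str, float]], budget: float) -> Dict[str, Dict[str, float]]:
    # Global constraints take priority: the overall budget caps every cluster.
    total = sum(sum(cluster.values()) for cluster in clusters.values())
    scale = min(1.0, budget / total) if total else 1.0
    return {name: {cell: amount * scale for cell, amount in cluster.items()}
            for name, cluster in clusters.items()}

# Example: two cells ask for 2.0 units each; the cluster cap is 3.0 and the
# global budget is 2.0, so the global constraint dominates the final allocation.
requests = local_requests(["cell_a", "cell_b"], demand=2.0)
cluster = cluster_reconcile(requests, cluster_cap=3.0)
plan = global_allocate({"cluster_1": cluster}, budget=2.0)
```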

In essence, without a purpose defined at every level from the cell to the organism, there is no life, and without replication and metabolism, there is no execution of that purpose. It is interesting to note that a chicken has 67% similarity with the genome specification of a human. The purpose is defined at various levels using both the wired knowledge from the genome and the knowledge acquired through cognitive processes.

We can summarize what we know about the genome specification (see On the Autopoietic and Cognitive Behavior (easychair.org)):

      “The genome in the physical world is knowledge coded in the executable form in deoxyribonucleic (DNA) and executed by ribonucleic acid (RNA). DNA and RNA use the knowledge of the physical and chemical processes to discover the resources in the environment using cognitive apparatuses in the form of genes and neurons. They build and evolve the hardware utilizing various embedded, embodied, enacted, elevated, and extended (5E) cognitive (sentient, resilient, intelligent, and efficient) processes to manage both the self and the environment. The genome encapsulates both autopoietic and cognitive behaviors. The autopoietic behaviors are capable of regenerating, reproducing, and maintaining the system by itself with the production, transformation, and destruction of its components and the networks of processes in these components. The cognitive behaviors are capable of sensing, predicting, and regulating the stability of the system in the face of both deterministic and non-deterministic fluctuations in the interactions among the internal components or their interactions with the environment.”

With this in mind, we can argue that consciousness and free will at various levels (processes inside the cell, group processes within clusters of cells, and the global cluster of clusters) provide the monitoring and management of the balance between requirements and constraints. The system makes those choices based on its knowledge and history (or experience). For example, when a choice has to be made to resolve contention for resources among various subsystems, global constraints and requirements take priority; when confronted with danger, choices have to be made based on the context.

Figure 3: Hierarchical consciousness, knowledge, intelligence, and free will exercised to choose between the options available to balance the requirements and constraints in solving a problem.

We can similarly argue that humans, as individuals with autonomous behaviors, form groups and societies that collaborate and compete with each other as a complex adaptive system. Some groups and societies without self-regulation are subject to emergence when faced with large fluctuations (non-determinism). Others, with self-regulation defined through autocratic or democratic processes, determine their own future: they can evaluate options when faced with large fluctuations threatening their structures and choose a path to follow. Success or failure depends on how well their internal logic is moored to external reality to avoid the self-referential circularity of logic discussed by Kurt Gödel.

The interesting conclusion I came to is that the General Theory of Information, with the concepts of named sets, knowledge structures, cognizing oracles or agents, and the schema and operations defining the structural machines, provides a model for representing the states and evolution of biological systems, ecosystems made up of biological systems, and digital systems. Obviously, I am just a student of the general theory of information and philosophy and do not know whether all this makes sense or not. It would be interesting to hear from real philosophers and mathematicians who are masters in their fields.

In the meantime, I am enjoying reflecting on, applying, and sharing my knowledge. Hopefully, next-generation scientists and information technology professionals with curiosity will take this knowledge to higher levels and address some of the tricky conflicts between good and evil, survival and extinction, fraud and honesty, individual and collective requirements and constraints, and ethical and unethical behaviors. These exist in humans independent of technology; technology can only enhance and amplify the behaviors humans choose with their free will at the individual, group, and societal levels.

      From the biological genome, we can now make the transition to the creation of the digital genome.

      I conclude with the following observation from the Burgin-Mikkilineni thesis.

      “The digital genome specifies the execution of knowledge networks using both symbolic computing and sub-symbolic computing structures. The knowledge network consists of a super-symbolic network of symbolic and sub-symbolic networks executing the functions defined in their components. The structure provides the system’s behavior and evolution maintaining the system’s stability in the face of fluctuations in both internal and external interactions. The digital genome encapsulates both autopoietic and cognitive behaviors of digital information processing structures capable of sentience, resilience, and intelligence. The digital genome typifies infused cognition as opposed to evolved cognition in biological systems. The infusion is made by the human operators who teach the machines how to evolve. Self-learning and self-evolution with consciousness are not yet present in these digital structures. This brings about profound implications for how we design, build and operate digital information processing systems. It allows us to create zero-touch remote configuration, monitoring, and management of current digital information processing systems.”