Intelligence Evolution: Framing a Big Picture View

In my previous post on AI-related sciences, I addressed the multidisciplinarity of AI as a source of complexity in forming a big-picture view of AI. In that post, the pathways between the diverse fields of research were illustrated in a Venn diagram of overlapping circles. Beyond these pathways, a framework capturing the fundamental aspects underlying AI and all its related fields of research would serve even better as a tool for grasping a big-picture view of AI. It could also facilitate the analysis of long-term progress in AI R&D.

In this post, we share a first version of such a framework, which we have formulated as a cube in order to capture and frame a big-picture view of AI and to establish a “world view” for further work on the Opentech AI architecture. By sharing it, we hope to encourage feedback and comments, especially on any fundamental aspects of intelligence evolution we may have missed. The goal of the framework is to serve as an analysis framework for AI systems and their evolution – potentially also facilitating the research, development and operation of future AI systems in various application domains and across industry-specific tasks and leaderboards.

Framing and visualizing a big-picture view of AI in a single picture, without leaving out something potentially relevant, is not an easy task. However, Figure 1 below is an attempt to do so, in the form of the “AI Cube” framework. The goal of the framework is to frame the complex and multidimensional context of AI in a way that can accommodate diverse approaches to AI, as well as capture the relevant aspects of the research, development, implementation, operation and use of AI systems across industries.


Figure 1. The AI Cube framework for analysis of AI systems and their evolution.

Let’s walk through the AI Cube framework illustrated in Figure 1 and explain some of the reasoning behind it. Consider time. Time matters in so many ways: to individual entities (phenotypes), to individual species (genotypes), to the universe and all species (evolutionary time), and to ideas and progress in individual disciplines (a community of academics). Starting with time, the framework in Figure 1 therefore highlights two different dimensions of time: an evolutional dimension (shown receding into the page) and a generational dimension (shown running left to right across the page). This two-dimensionality of time reveals the evolutional gap between natural and artificial intelligence. Intelligence as a natural phenomenon has at least 3.7 (the Last Universal Ancestor; all species based on DNA) to 4.5 (the history of Earth) billion years of evolutionary history, in an environment dating back 13.7 billion years to the Big Bang. In comparison, the history of artificial intelligence goes back only about 60 years, to the Dartmouth Workshop, while sharing the same environment. Accordingly, the evolutional time gap between natural and artificial intelligence is on the scale of billions (10⁹) of years.

Whereas time and timing are certainly relevant for both natural and artificial intelligence, equally important are the environment and context where perceptions, actions and sequences of those – interactions – can take place. In Figure 1, context – the things in the environment of evolving AI systems – is shown running from bottom to top of the page. For example, all the data of Wikipedia was needed to build IBM’s Watson for Jeopardy! As computing capabilities, accessible data sets, code, algorithms and models increase in diversity, the environment and context of each AI system becomes increasingly complex. The rest of the post walks through the main dimensions, concepts and aspects of interest in the AI Cube framework. For related work that explores the evaluation of natural and artificial intelligence, see Hernández-Orallo (2016).

Hernández-Orallo J (2016) The measure of all minds: evaluating natural and artificial intelligence. Cambridge University Press.
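To make the three axes of the cube a bit more concrete, here is a minimal sketch in Python of what a single point in the AI Cube might represent; all names and example values are illustrative assumptions rather than part of the framework itself.

```python
# Minimal sketch (hypothetical names and values): a point in the AI Cube, i.e. an
# AI system located along evolutional time, generational time and a slice of
# context drawn from the shared environment.

from dataclasses import dataclass
from typing import FrozenSet


@dataclass(frozen=True)
class CubePoint:
    evolutional_time: float   # depth axis, e.g. years of AI evolution since Dartmouth (1956)
    generational_time: float  # left-to-right axis, e.g. calendar year of the system generation
    context: FrozenSet[str]   # bottom-to-top axis: the subset of the environment that matters


# Illustrative example: a question-answering system built around 2011 whose
# relevant context includes Wikipedia data and the compute available at the time.
watson_like = CubePoint(
    evolutional_time=55.0,
    generational_time=2011.0,
    context=frozenset({"wikipedia_dump", "cluster_compute", "quiz_show_rules"}),
)
```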

Main Dimensions and Concepts

Due to the difference of scale between evolutional and generational time related to intelligence, the framework treats these two scales of time as two separate main dimensions. The evolutional dimension allows addressing the evolution of artificial intelligence in comparison to the evolution of natural intelligence, where different species (types of AI systems) and their populations (instances of AI systems) may exist over evolutional time in the environment. The generational dimension allows addressing the co-existence of AI systems in a shared environment with natural intelligence (including living humans, human culture and existing human artifacts) at present or at any given time.

The third main dimension in the framework is context, which allows addressing the co-existence and interaction of an AI system, or a family of AI systems (~species), with the relevant processes, systems and entities in the environment. This enables, for example, value, requirements and impact analysis of AI systems in different phases of the system life-cycle (planning, design, development or deployment). The relevant context of an AI system or family of AI systems is always a subset of the environment. The environment is considered to contain all life, materials, entities and artifacts, as well as the biological, physical, social and artificial processes and systems they may compose at a given time. Accordingly, the environment is in constant change as a result of all the processes and systems interacting within it.

  • For example, the creation or emergence of an AI system at time t results in a new artificial system and entity being present in the environment at time t+1, as in the sketch below.
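As a minimal sketch of this bullet (the names are assumptions, not part of the framework), the environment can be modeled as a set of entities that advances one time step whenever something new is created within it:

```python
# Minimal sketch: the environment as a set of entities; an AI system created at
# step t is present in the environment at step t+1.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Entity:
    name: str
    kind: str  # e.g. "biological", "physical", "social", "artificial"


@dataclass
class Environment:
    t: int = 0
    entities: frozenset = field(default_factory=frozenset)

    def step(self, created=()):
        """Advance one time step; anything created at t is part of the environment at t+1."""
        return Environment(t=self.t + 1, entities=self.entities | frozenset(created))


env_t = Environment(t=0, entities=frozenset({Entity("humans", "social")}))
new_ai = Entity("ai_system_1", "artificial")
env_t1 = env_t.step(created=[new_ai])
assert new_ai in env_t1.entities  # the new AI system now belongs to the environment
```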

To address the evolutionary view of AI through time in the environment, the framework introduces a central concept, the AI Genome (analogous to the biological genome enabling natural intelligence). The AI Genome is seen as the set of knowledge, tools, assets and capabilities transferred over generations of AI systems and humans for building and operating AI systems in their shared environment. Accordingly, the AI Genome codes for the co-evolution of human and artificial intelligence over generations of both. The concept also narrows the co-evolution down to taking place between human and artificial intelligence – within human culture – as humans are the only known naturally intelligent species trying to develop and apply artificial intelligence. For related work that explores an AI/machine genome in the context of practopoiesis, see Nikolić (2015, 2017).

Nikolić D (2015) Practopoiesis: Or how life fosters a mind. Journal of Theoretical Biology. 373:40-61.

Nikolić D (2017) Why deep neural nets cannot ever match biological intelligence and what to do about it? International Journal of Automation and Computing. 14(5):532-541.
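As a rough illustration of the AI Genome concept described above, the following sketch models it as the knowledge, tools, assets and capabilities that one generation of humans and AI systems passes to the next; the field and method names are assumptions made for this example.

```python
# Minimal sketch (hypothetical names): the AI Genome as what is transferred
# between generations for building and operating AI systems.

from dataclasses import dataclass, field
from typing import Set


@dataclass
class AIGenome:
    knowledge: Set[str] = field(default_factory=set)     # e.g. theories, papers, know-how
    tools: Set[str] = field(default_factory=set)         # e.g. languages, frameworks
    assets: Set[str] = field(default_factory=set)        # e.g. data sets, code, models
    capabilities: Set[str] = field(default_factory=set)  # e.g. skills of people and organizations

    def next_generation(self, learned: "AIGenome") -> "AIGenome":
        """Everything inherited plus whatever was learned by building and
        operating AI systems (~phenotypes) in the shared environment."""
        return AIGenome(
            knowledge=self.knowledge | learned.knowledge,
            tools=self.tools | learned.tools,
            assets=self.assets | learned.assets,
            capabilities=self.capabilities | learned.capabilities,
        )


gen0 = AIGenome(knowledge={"symbolic AI"}, tools={"Lisp"})
gen1 = gen0.next_generation(AIGenome(knowledge={"deep learning"}, assets={"ImageNet"}))
```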

The Aspects of Interest

Intelligence evolution, both natural and artificial, is happening all around us today.  How can the AI Cube help us better understand this evolution?  Let’s summarize the four key aspects of interest: (1) the environment, (2) R&D processes, (3) systems and (4) craft.

Of the four main aspects of interest around the co-evolution of human and artificial intelligence (the AI Genome) illustrated in Figure 1, the first and most obvious is the environment. The environment enables, tests and limits the co-evolutionary process of human and artificial intelligence (the evolution of the AI Genome) via interaction with all life, materials, entities and artifacts, as well as the biological, physical, social and artificial processes and systems within it. Through its dynamics, uncertainties and adaptation over time, the environment introduces multidimensional and continuously increasing complexity. The overall complexity of the environment can also be seen as a source of motivation for building AI systems: to increase the stability and predictability of the environment for humans.

  • The natural environment (biological, physical and social processes and systems) has always introduced natural complexity and challenges for the evolution of natural intelligence. In addition, the artificial environment (artifacts and artificial systems) created through human activity introduces ever more complexity, which could be described as artificial complexity.

The challenge for the co-evolution of artificial and human intelligence (the AI Genome) within the framework can be described as survival, welfare/well-being and ethical co-existence in an environment of continuously increasing complexity, uncertainty and adaptation.

The three other main aspects of interest around the co-evolution of human and artificial intelligence are:

  • AI Research and Development as the process that creates new knowledge, tools and assets for developing AI systems.
  • AI Systems that are created and used, potentially providing value in their environment while at the same time serving as experiments (~phenotype) for the evolution of the AI Genome (~genotype).
  • AI Craft/Production as the set of capabilities, held by individuals or human organizations, to build and operate AI systems.

These three aspects are easy to comprehend when compared to the current state of the art and practice of building AI systems: a multidisciplinary research and development community exists for AI (AI Research and Development), skills such as programming, IT, project management and marketing exist for the production of AI systems in many organizations, and some individuals even hold the skills to create AI systems on their own (AI Craft/Production). However, the framework does not exclude more futuristic interpretations of the same aspects, such as automated self-evolution (AI Research and Development), automated self-generation (AI Craft/Production) and fully autonomic AI systems (AI Systems), if that serves the need of applying the framework for the analysis of future evolution, progress and impact of AI. A minimal sketch of these interpretations follows the caveat below.

Caveat: The ancient Greek, Aristotelian and creation-related terms in parentheses under each of the three aforementioned aspects of interest are there to reflect the different roles of the main aspects from the viewpoint of creating artificial intelligence. They are not to be taken literally in their philosophical meaning or historical context, but may provide useful analogies for understanding the organization of the framework.
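To tie the above together, here is a minimal sketch of the aspects of interest and their current versus more futuristic interpretations as discussed in this section; the identifiers are assumptions introduced only for illustration.

```python
# Minimal sketch: the aspects of interest and their interpretations.

from enum import Enum


class Aspect(Enum):
    ENVIRONMENT = "environment"
    RESEARCH_AND_DEVELOPMENT = "ai_research_and_development"
    SYSTEMS = "ai_systems"
    CRAFT_PRODUCTION = "ai_craft_production"


INTERPRETATIONS = {
    Aspect.RESEARCH_AND_DEVELOPMENT: {
        "current": "multidisciplinary human R&D community",
        "futuristic": "automated self-evolution",
    },
    Aspect.SYSTEMS: {
        "current": "narrow, task-specific systems built and operated by humans",
        "futuristic": "fully autonomic AI systems",
    },
    Aspect.CRAFT_PRODUCTION: {
        "current": "skills of organizations and individuals (programming, IT, project management, ...)",
        "futuristic": "automated self-generation",
    },
}
```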

Towards Opentech AI Architecture

Building better AI systems is of both economic significance and academic interest. So, in sum, a small reminder about the purpose and goals of the AI Cube framework. The first goal is that it serve as a tool for grasping the big picture of AI, and that it therefore be able to accommodate many different types of AI systems in different application domains. The goal is not to create a framework for the creation or analysis of Artificial General Intelligence (AGI) or “superintelligence”, but a framework for widening the capabilities of AI systems in comparison to current narrow and highly task-specific “one-off” systems.

Another goal is that the framework serve as an analysis framework for AI systems and their evolutionary progress. Finally, it could potentially also facilitate the research, development and operation of future AI systems. As is, the framework may be useful for analyzing existing AI systems and progress in the field at a very high level of abstraction. However, it needs to be extended with a systematic and structured architecture framework and methodology to be useful for more detailed analysis, as well as for the development and operation of new AI systems. This is the direction we plan to take next, by looking more closely at the central concept of the AI Genome introduced in the AI Cube framework. The practical direction we aim to encourage is open-source AI systems on GitHub that can improve their performance on AI leaderboards, including industry-specific ones.


2 thoughts on “Intelligence Evolution: Framing a Big Picture View”

  1. I do not perceive any means of assuring that your version of AI will Do No Harm. Did I miss it or do you not accept this ‘requirement?’ Else?

  2. Thank you for your question and a very good point. AI (like many other technologies) can be used to do great good in the right hands or great harm in the wrong hands – ultimately it comes down to the people (and their ethics) who decide how to apply any technology, AI included. The dilemma has been with us since the invention of throwing stones at other people. By analogy, the only thing that changes as technology evolves is the destructive power of the stones. Co-developing AI as open technology shares its benefits widely (everyone has the possibility to participate and to exploit it), but at the same time it gives easier access to the technology to ALL – including those with potentially harmful intentions.

    If we could be sure that AI is developed and applied only by parties with good intentions, keeping the development secret could protect society from harmful outcomes. However, if it turns out that AI is also developed and applied by those with questionable motives, widespread knowledge of the technology may give society better tools for minimizing the harm caused by its possible misuse.

    Can you refer us to any good work or ideas on this topic, which we could incorporate in our thinking and refer to in this blog as part of our further work on Opentech AI?
