Blog on Open Artificial Intelligence technology (Opentech AI)

Why? What to expect?

This blog is established as part of a research exchange co-operation between VTT Technical Research Centre of Finland and IBM Research – Almaden. The blog aims to capture the big picture of research and development in the field of open artificial intelligence technology (Opentech AI), focusing especially on architecture, related ecosystems, progress and future directions. An overview of the topics discussed in this blog around Opentech AI is illustrated in the figure below.


We will use this blog to share hopefully interesting results and findings from the R&D work that we are doing on Opentech AI with colleagues in Finland and in the US. As the blog is about open AI technologies (code + data + models + stacks + community + leaderboards), we would like to share it with whoever is interested.

In addition to blog postings, please see the “Opentech AI Resources” pages for more information on the topics covered. We will update the blog, as well as the resources pages, whenever we come across new interesting things that we think are worth sharing and discussing on the topic of Opentech AI. Comments, discussion and interaction around the postings and topics here are very welcome and greatly appreciated. You are also very welcome to contact us for further discussions and collaboration.

We will try to keep the blog light, interesting and informal. We try to include links to further information, enabling you to dig deeper into the topics of the postings. This is by no means a deep scientific forum – there are specific forums for that (e.g. arXiv, which is highly relevant for R&D on Opentech AI).

Best regards,

Daniel & Jim


Towards an Architecture Framework for AI Systems

AI systems may radically differ from other software-intensive systems – how can we address that in system architecture and engineering?

From the viewpoint of software and system engineering, AI is today mainly seen and developed as an evolution of smart/intelligent features of existing systems (e.g. enterprise systems/applications, robotics and bots), enabled by machine learning, increasing computing power and the availability of data. On the other hand, R&D on AI and cognitive architectures has produced framework implementations for general intelligent agents (e.g. SOAR and ACT-R), which have been used in building various AI systems – still largely relying on handcrafting the knowledge, behavior and learning of the system.

The way towards wider/general-purpose AI systems envisions very different kinds of systems: systems that autonomously maintain themselves, operate, learn and interact over extended periods as part of society and culture, as presented in the vision of Software Social Organisms. Acknowledging that advanced AI systems may differ greatly from existing software-intensive systems (in terms of applicable methodologies, underlying technologies, organization, development, training, operation, maintenance, governance and ways of interaction) suggests that an architecture framework for AI systems would be beneficial for the analysis and development of these systems – both for industry and academia. In this post, we take a step towards such an architecture framework for AI systems and discuss the potential benefits it could provide for the R&D community.

In a previous post the AI Cube Framework was introduced as a big-picture, high-abstraction-level analysis tool for AI systems. In that somewhat multidimensional and complex framework, the central concept of the AI genome was also introduced. In this post, we describe the concept of the AI genome in more detail, deriving an initial outline of an architecture framework for AI systems (targeting conformance with ISO/IEC/IEEE 42010:2011). Specifically, the stakeholders and system life-cycle stages need to be shown and explained. Therefore, an alternative view of the AI genome is presented in Figure 1 below.


Figure 1. AI Genome: the main aspects of interest and stakeholders around the evolution of AI systems.

Figure 1 illustrates the AI genome in more detail, presenting the four main aspects of interest in the evolution of AI systems and identifying the related stakeholders of each aspect:

  1. AI Research and Development – The continuously evolving research community of AI-related sciences, including researchers and developers who make their results and assets available as a baseline for building AI systems. These include e.g. knowledge in the form of scientific publications, software and computing platforms, algorithms, data sets, as well as processes for building and operating AI systems.
  2. AI Craft/Production – The AI system stakeholders during its life-cycle: professionals involved in building an AI system in its various life-cycle phases (Concept, Design, Development, Deployment, Training, Operation & Learning, Maintenance & Versioning, Undeployment, Re-deployment, Deletion) and the methodologies/processes that those professionals apply. We know much about agile and lean software development and reinforcement learning, but are these processes and methodologies applicable for building AI systems that interact in the natural environment, where an individual bug or learning iteration might cause e.g. loss of life or substantial economic losses?
    • There is room and need for more research in applicable processes and methodologies for AI system crafting/production, with safety and quality as guiding principles instead of speed and development cost.
    • Another major area, of which we know very little today, is governance or institutional control of AI systems – presenting an opportunity for more research.
  3. AI Systems – AI system collaborators (comparable to users of existing information, communication and automation technology systems), who interact with the AI system during its operational life-cycle phase to achieve a common goal or gain some value. Perhaps the closest example of this kind of stakeholder is the driver of an “autonomous” car, who collaborates with the car to get from place A to place B safely (a truly autonomous car would not need a driver – only a collaborator who gives the destination to travel to safely). The AI system–collaborator interface presents many topics to consider in the development of AI systems:
    • Natural multi-modal interaction between the AI system and its collaborators (e.g. the UX of a conversational interface)
    • Expressing the capabilities of an AI system in a way that is understandable for collaborators (who are not technology experts but may be e.g. healthcare professionals in their daily work)
    • Sharing of responsibility and control between the provider/manufacturer and collaborators in situations where the capabilities of the AI system are exceeded or the system fails. (You can imagine plenty of such situations when collaborating with “autonomous” cars – can I sleep and trust the car to handle every situation that we might encounter? At what point does the car hand control back to the driver – and is it already too late at that point to avoid an accident?)
  4. Environment – The natural environment as a stakeholder, often considered under the umbrella of environmental and social responsibility in different organizations. Regarding AI systems, the related requirements concern the energy, material, ethical and cultural impacts of the system. In addition, an AI system might have positive or negative value for the environment. At first this might seem irrelevant/distant, but think again:
    • Energy usage of a computer cluster matching human capability in some task vs. human energy usage. Is the energy produced by burning coal or by harnessing wind/solar/geothermal, and what is the related environmental footprint?
      • Perhaps it would be wise to also measure the energy and environmental performance of AI systems against a human equivalent, in addition to just measuring performance on a task. Such metrics are not in use yet but are not hard to come up with (e.g. grams of CO2 per FLOP and grams of CO2 per task) – a rough back-of-the-envelope sketch follows this list. Perhaps the challenge for AI system development is to match human performance on these metrics as well.
    • Materials used in building AI systems: availability, recycling, waste, alternative uses, toxicity and safety. This relates especially to the embodiment and hardware used for building AI systems.
    • Ethics of using human or other biological intelligence as a source of data and training for AI systems, and ethics of AI more generally, especially for autonomous agents interacting in the natural environment (instead of closed game worlds or simulations).
    • Human culture may be impacted in many ways by the introduction of new AI systems (e.g. convincing AI-generated images/videos/stories matching human quality vs. fake news). It might be worthwhile to consider possible cultural impacts as well when designing new AI systems.
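
To make such metrics concrete, below is a minimal back-of-the-envelope sketch in Python. All figures (cluster power draw, human metabolic power, task duration, grid carbon intensities) are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope sketch: energy and CO2 per task for a compute cluster
# vs. the metabolic energy of a human performing the same task.
# All figures below are assumed example values, not measurements.

CLUSTER_POWER_KW = 80.0      # assumed average power draw of the cluster
HUMAN_POWER_KW = 0.1         # ~100 W, rough human metabolic power
TASK_DURATION_H = 2.0        # assumed time to complete the task

# Assumed carbon intensities (g CO2 per kWh) for two energy sources.
CO2_G_PER_KWH = {"coal": 900.0, "wind": 10.0}

def energy_kwh(power_kw: float, hours: float) -> float:
    """Energy used for the task in kilowatt-hours."""
    return power_kw * hours

cluster_kwh = energy_kwh(CLUSTER_POWER_KW, TASK_DURATION_H)
human_kwh = energy_kwh(HUMAN_POWER_KW, TASK_DURATION_H)

print(f"Cluster energy per task: {cluster_kwh:.1f} kWh "
      f"({cluster_kwh / human_kwh:.0f}x the human's {human_kwh:.2f} kWh)")
for source, intensity in CO2_G_PER_KWH.items():
    print(f"Cluster CO2 per task ({source}): {cluster_kwh * intensity:.0f} g")
```

The point is not the particular numbers but that, once power draw and energy source are known, a grams-of-CO2-per-task figure is straightforward to compute and compare across systems.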

Figure 1 also presents the main subsystems of an AI system (a minimal interface sketch in Python follows the list):

  • Embodiment – The physical, material structure of the AI system, enabling the implementation of the other subsystems and bounding the interfaces with the natural environment. Even though embodied AI often refers to robotic applications and instances of AI systems, all AI systems are embodied in one way or another (or they are a fairy tale). An AI system might be implemented as a system of interconnected computers running software and algorithms across a network (cloud-based AI) or as a single autonomous robot with embedded sensors and actuators (robotic AI). Hybrid distributed embodiments are also possible, especially in IoT and real-time-sensitive AI applications, where the computing is preferably placed near the sensing and actuation point in the natural environment, while part of the computing might also take place in a separate computing cluster. Many new kinds of embodiments are also likely to be seen in future AI systems, as a result of the energy-efficiency challenges of current computing architectures and progress in R&D on materials and computing (e.g. quantum computing, neuromorphic computing and optical computing).
  • Perception and Actuation – The “sensorimotor” subsystem of the AI system, including all mechanisms related to the sensing and actuation of the system in its environment (via the embodiment subsystem). This may include functionalities for one or more sensing modalities (e.g. vision (graphics), audition (sound), tactition (vibration/motion) and equilibrioception (balance)), as well as for actuation modalities (e.g. text generation, sound/speech generation and locomotion).
  • Memory – Subsystem including all the different memory mechanisms of an AI system. This may include both short-term/working memory and long-term memory (possibly as further subsystems). For example, the short-term/working memory may be further divided into sensory and context memory subsystems, and the long-term memory may be further divided into declarative, procedural, episodic and associative memory systems.
  • Behavior – Subsystem including all behavior related mechanisms of the AI System. These may include e.g. attention, goals, deliberation, decision making/action selection, reactions and reasoning mechanisms.
  • Cognition – Subsystem including all higher-cognition mechanisms that accumulate the internal world model of the system and possibly adapt the behavior of the AI system over time. These may include self- and context-awareness, learning, meta-cognition, motivation, emotion and reward mechanisms.
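
To make the subsystem organization more concrete, here is a minimal sketch of the five subsystems as abstract Python interfaces. The class and method names are our own illustrative choices rather than a proposed standard; the point is only that each subsystem can be analyzed and designed behind an explicit interface.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class Embodiment(ABC):
    """Physical structure bounding the system's interfaces to the environment."""
    @abstractmethod
    def read_sensors(self) -> Dict[str, Any]: ...
    @abstractmethod
    def drive_actuators(self, commands: Dict[str, Any]) -> None: ...


class PerceptionActuation(ABC):
    """Sensorimotor subsystem: turns raw signals into percepts and actions into commands."""
    @abstractmethod
    def perceive(self, raw: Dict[str, Any]) -> List[Any]: ...
    @abstractmethod
    def actuate(self, action: Any) -> Dict[str, Any]: ...


class Memory(ABC):
    """Short-term/working and long-term memory mechanisms."""
    @abstractmethod
    def store(self, item: Any, kind: str = "working") -> None: ...
    @abstractmethod
    def recall(self, query: Any, kind: str = "long_term") -> List[Any]: ...


class Behavior(ABC):
    """Attention, goals, deliberation and action selection."""
    @abstractmethod
    def select_action(self, percepts: List[Any], memory: Memory) -> Any: ...


class Cognition(ABC):
    """Higher cognition: world-model maintenance, learning, adaptation of behavior."""
    @abstractmethod
    def update_world_model(self, percepts: List[Any], memory: Memory) -> None: ...
```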

The subsystem organization presented here does not propose any specific embodiment, perception, actuation, memory, behavior or cognition organization, but is intended to serve the analysis and design of AI systems with various organizations of these subsystems. However, we refer the reader to the recent developments and proposals on the standard model of the mind, and to the review of cognitive architectures, which relate closely to the three subsystems of memory, behavior and cognition.

Furthermore, Figure 1 illustrates the role of the internal world model and the internal data processing subsystems of an AI system:

  • Internal world model – The natural environment, including human culture, as the AI system comprehends it internally: all knowledge contained by the AI system as a result of training and of learning while in operation. This includes, for example, all objects, concepts, other intelligent entities and rules/policies that the AI system may use in operation throughout all the subsystems. The internal world model may be symbolic (e.g. ontology-based) and explicitly accessible for human monitoring and governance, or it may be fully internal, non-symbolic and not accessible to humans (e.g. in emergent architectures).
  • Internal data processing – The continuous internal data processing and analysis pipeline of the AI system, defining the mechanisms that convert the data flow from the perception subsystem into information, knowledge and wisdom as modifications of the internal world model and behavior. For example, consider the processing involved in reading the sensors (data), classifying an approaching object as a car (information), recognizing the potential danger of the approaching car (knowledge), and comprehending the danger of the situation, resulting in the action of moving away from the car’s estimated trajectory (wisdom). A rough pipeline sketch follows this list.
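
The car example above could be sketched roughly as the pipeline below. The stage functions are illustrative placeholders of our own; real perception and reasoning components would replace them.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Percept:
    """Raw sensor reading (data)."""
    object_distance_m: float
    object_speed_mps: float


def classify(p: Percept) -> str:
    """Data -> information: classify the approaching object (placeholder rule)."""
    return "car" if p.object_speed_mps > 2.0 else "pedestrian"


def assess_danger(label: str, p: Percept) -> bool:
    """Information -> knowledge: is the approaching object dangerous?"""
    time_to_impact_s = p.object_distance_m / max(p.object_speed_mps, 0.1)
    return label == "car" and time_to_impact_s < 5.0


def decide(dangerous: bool) -> Optional[str]:
    """Knowledge -> wisdom: act on the comprehended danger."""
    return "move_away_from_trajectory" if dangerous else None


# One pass through the data -> information -> knowledge -> wisdom pipeline.
percept = Percept(object_distance_m=20.0, object_speed_mps=10.0)
action = decide(assess_danger(classify(percept), percept))
print(action)  # -> move_away_from_trajectory
```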

As illustrated in Figure 1, the external source of data for AI systems is the natural environment, including the subculture/domain of the system’s collaborators, which can be perceived via interactions with the environment. These interactions may include, for example, conversation with collaborators and perceiving recorded human culture in various formats (e.g. videos, images, text and audio).

The approach taken here for outlining an architecture framework for AI systems is a hybrid, model-driven approach combining agent orientation and a holistic two-dimensional system orientation, where systems thinking is applied both to the external environment and to the internal organization of the AI system. Agent orientation is applied by defining the AI system as an autonomous, physically bounded entity/organism interacting with its environment. The model-driven approach is adopted to enable defining the AI system and its subsystems independently of the technological platform used for implementation, based purely on stakeholder requirements towards the system (~Platform Independent Model – PIM). The approach also enables designing multiple different technology-specific system architectures and implementations (~Platform Specific Model – PSM) from one PIM.
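
As a rough illustration of the PIM/PSM idea, the sketch below defines one platform-independent model of an AI system and two hypothetical platform-specific realizations (cloud-based and robotic). The class names and methods are our own illustration, not taken from any standard.

```python
from abc import ABC, abstractmethod


class AISystemPIM(ABC):
    """Platform Independent Model: what the system must do, regardless of technology."""

    @abstractmethod
    def perceive(self) -> dict: ...

    @abstractmethod
    def act(self, decision: str) -> None: ...

    def step(self) -> None:
        """One perceive-decide-act cycle, defined platform-independently."""
        observation = self.perceive()
        self.act("respond" if observation else "noop")


class CloudAISystemPSM(AISystemPIM):
    """Platform Specific Model: realization on a distributed cloud platform (illustrative)."""
    def perceive(self) -> dict:
        return {"source": "message_queue", "payload": "..."}

    def act(self, decision: str) -> None:
        print(f"[cloud] publishing decision: {decision}")


class RoboticAISystemPSM(AISystemPIM):
    """Platform Specific Model: realization on an embedded robot (illustrative)."""
    def perceive(self) -> dict:
        return {"source": "onboard_sensors", "payload": "..."}

    def act(self, decision: str) -> None:
        print(f"[robot] driving actuators for: {decision}")


# The same platform-independent behavior runs on both platform-specific realizations.
for system in (CloudAISystemPSM(), RoboticAISystemPSM()):
    system.step()
```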

The main benefit of the architecture framework would be to bring the internal world model, internal data processing, embodiment, perception, actuation, cognition, behavior and memory mechanisms of an AI system forward, transparently and into consideration, already in the concept and design phases of the system life-cycle. This would also enable gathering and considering all stakeholder requirements towards these mechanisms and the overall system early in the development process. In current AI system implementations these mechanisms are often only partly considered, defined and transparent, but such implementations would serve as exemplars for analysis within the framework outlined.

From the viewpoint of analyzing progress in AI research, technology and systems, the outline of the architecture framework presented here divides AI systems into seven subsystems and identifies the basic life-cycle phases related to these. This enables finding, defining and mapping AI system performance metrics, challenges and leaderboards in a modular fashion, corresponding to the subsystems and life-cycle phases identified. In other words, the architecture framework can be used to create a corresponding framework of metrics, challenges and leaderboards for evaluating and monitoring performance and progress in the various AI subsystems – indicating R&D progress in the field broadly from the viewpoint of AI systems engineering. The architecture framework outlined also raises questions regarding the need for new metrics, challenges and leaderboards on aspects such as:

  • How to measure the breadth/depth of knowledge acquired by an AI system (breadth/depth of internal world model)?
  • How to measure degree of autonomy of an AI system? (related: How to measure/estimate life-time costs of an AI system?)
  • How to measure adaptation/evolution of an AI system? (related: How to measure/estimate life-time costs of an AI system?)
  • How to measure modal capabilities of an AI system?
  • How to measure cross-modal understanding of an AI system? (to avoid parallel systems per modality)
  • How to measure energy efficiency, ethical compliance, cultural impact and material footprint of AI systems? (related: Environmental and social responsibility of organizations involved in using, building, providing and operating AI systems)

We believe that the approach and the architecture framework for AI systems outlined here are novel in that they combine multiple architectural approaches, widely used independently of each other today, into an architectural approach and framework for improving the analysis and design of highly complex AI systems. If you think differently, please let us know (pointers to similar work or alternative views on the topic are welcome and greatly appreciated). If you are interested in collaborating on AI system architectures and Opentech AI in general – please do not hesitate to contact us!


Opentech AI Workshop

On March 14th at the IBM Finland HQ in Helsinki, we are inviting around 100 people interested in industry-specific Opentech AI, from AI for healthcare, retail, finance, energy, IoT, manufacturing, education, and other application areas. More coming soon…

Please email Jim Spohrer <spohrer@us.ibm.com> and/or Daniel Pakkala <Daniel.Pakkala@vtt.fi> if you are interested in learning more about the workshop. We are looking for collaborators who know of open AI challenge leaderboards or capabilities that should be discussed at the Opentech AI Workshop. A leaderboard is a public website that provides information about an entity’s standing or ranking with respect to a specific best-effort attempt to solve a challenge or task. Open leaderboards advance science, since they allow others to replicate the result by providing access to code, data, and compute resources.
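
As a minimal sketch of what an “open” leaderboard entry might record, consider the structure below. The field names are our own illustration, not any particular leaderboard’s schema.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class LeaderboardEntry:
    """One entity's best-effort attempt at a specific challenge or task."""
    task: str          # the challenge the entry addresses
    entrant: str       # team, organization or system name
    score: float       # task-specific metric (higher is better here)
    code_url: str = "" # link to open source code, if any
    data_url: str = "" # link to open data, if any

    def is_open(self) -> bool:
        """An entry is replicable/open if both code and data are available."""
        return bool(self.code_url and self.data_url)


entries: List[LeaderboardEntry] = [
    LeaderboardEntry("toy-question-answering", "team-a", 0.87,
                     code_url="https://example.org/code", data_url="https://example.org/data"),
    LeaderboardEntry("toy-question-answering", "team-b", 0.91),
]

# Rank entries by score, highest first, and flag which results others could replicate.
for rank, entry in enumerate(sorted(entries, key=lambda e: e.score, reverse=True), start=1):
    print(rank, entry.entrant, entry.score, "open" if entry.is_open() else "closed")
```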

Another way of formulating the question is: Where can AI be applied in an industry-specific manner (a task with open-access data and code) to benchmark and improve industry-standard performance, and to grow more opportunities for value creation? First, understanding where leaderboards already exist is important (e.g., Wikipedia Question Answering); second, understanding where to create new leaderboards to help drive progress is an AI-era methodology to master (e.g., Breast Cancer Tumor Proliferation Challenge). “Leaderboard solvers” are like hammers, and deciding where governments and industry leaders should place strategic “leaderboard nails” is part of the same methodology. It will become increasingly important as the pace of AI progress advances, driven in part by Moore’s Law (increasing compute power to solve classes of problems), zettabytes of data (the new oil), AI-related model marketplaces (Bot Asset Exchange, Acumos, etc.), algorithms (deep learning and reasoning), open source code (TensorFlow/Keras, Caffe, PyTorch, etc. on GitHub), commercialization-friendly licenses (Apache 2.0), and open governance (Linux Foundation, Apache Foundation, etc.). These are some of the factors combining today that organizations need to better understand.

Opportunities for AI challenge leaderboards include: AI Progress Metrics as mapped by the Electronic Frontier Foundation, AI Safety and other Industry Focus Areas as mapped by the Partnership on AI, AI Leadership Diversity and Inclusiveness as mapped by AI4All, AI Activities and Progress as mapped by the AI Index Report, etc.

Scroll down here to see industry-specific examples of AI challenge leaderboards: https://opentechai.blog/applications/

Intelligence Evolution: Framing a Big Picture View

In my previous post on AI-related sciences, I addressed the multidisciplinarity of AI as a source of complexity in seeing the big picture of AI. In that post, pathways between diverse fields of research were illustrated in a Venn diagram of overlapping circles. Beyond these pathways, a framework capturing the fundamental aspects underlying all the related fields of research and AI would serve as an even better tool for grasping a big-picture view of AI. It could also facilitate analyzing the long-term progress of AI R&D.

In this post, we share a first version of such a framework around AI, which we have formulated as a cube in order to capture and frame a big-picture view of AI and to establish a “world view” for further work on Opentech AI architecture. By sharing it, we hope to encourage feedback and comments, especially on any fundamental aspects of intelligence evolution we may have missed. The goal of the framework is to serve as an analysis framework for AI systems and their evolution – potentially also facilitating research, development and operation of future AI systems in various application domains and across industry-specific tasks and leaderboards.

Framing and visualizing a big-picture view of AI in a single picture, without leaving out something potentially relevant, is not an easy task. However, Figure 1 below is an attempt to do so in the form of the “AI Cube” framework. The goal of the framework is to frame the complex and multidimensional context of AI in a way that can accommodate diverse approaches to AI, as well as capture the relevant aspects of research, development, implementation, operation and use of AI systems across industries.


Figure 1. The AI Cube framework for analysis of AI systems and their evolution.

Let’s walk through the AI Cube framework illustrated in Figure 1 and explain some of the reasoning behind it. Consider time. Time matters in so many ways: to individual entities (phenotypes), individual species (genotypes), the universe and all species (evolutionary time), as well as to ideas and progress in individual disciplines (a community of academics). So, starting with time, the framework illustrated in Figure 1 highlights two different time dimensions: evolutional (shown evolving into the page) and generational (shown evolving from left to right across the page). This two-dimensionality of time reveals the evolutional gap between natural and artificial intelligence. Intelligence as a natural phenomenon has at least 3.7 (Last Universal Ancestor; all species based on DNA) to 4.5 (history of Earth) billion years of evolutionary history, in an environment dating back 13.7 billion years to the Big Bang. In comparison, the history of artificial intelligence goes back only about 60 years, to the Dartmouth Workshop, and shares the same environment. Accordingly, the evolutional time gap between natural and artificial intelligence is on the scale of billions (10⁹) of years.

Whereas time and timing are certainly relevant for both natural and artificial intelligence, equally important is the environment and context where perceptions, actions and sequences of those – interactions – can take place. In Figure 1, context – the things in the environment of evolving AI systems – is shown going from bottom to top of the page. For example, all the data of Wikipedia was necessary to build IBM’s Watson for Jeopardy! As computing capabilities, accessible data sets, code, algorithms, and models increase in diversity, the environment and context of each AI system becomes increasingly complex. The rest of the post walks you through the main dimensions, concepts and aspects of interest in the AI Cube framework. For related work exploring natural and artificial intelligence, see Hernández-Orallo (2016).

Hernández-Orallo J (2016) The measure of all minds: evaluating natural and artificial intelligence. Cambridge University Press.

Main Dimensions and Concepts

Due to the difference in scale between evolutional and generational time related to intelligence, the framework treats these two scales of time as two separate main dimensions. Evolutional time allows addressing the evolution of artificial intelligence in comparison to the evolution of natural intelligence, where different species (types of AI systems) and their populations (instances of AI systems) may exist in evolutional time in the environment. Generational time allows addressing the co-existence of AI systems in a shared environment with natural intelligence (including living humans, human culture and existing human artifacts) at present or at any given time.

The third main dimension in the framework is context, which allows addressing the co-existence and interaction of an AI system, or family of AI systems (~species), with the relevant processes, systems and entities in the environment. This enables, for example, value, requirements and impact analysis of AI systems in different phases of the system life-cycle (whether planned, designed, under development or deployed). The relevant context of an AI system or family of AI systems is always a subset of the environment. The environment is considered to contain all life, materials, entities and artifacts, as well as the biological, physical, social and artificial processes and systems which those may compose at a given time. Accordingly, the environment is seen to be in constant change as a result of all the processes and systems interacting within it.

  • For example, the creation/emergence of an AI system at time (t) results in a new artificial system and entity being present in the environment at time (t+1).

To address the evolutionary view of AI through time in the environment, a central concept of the AI Genome (analogous to the biological genome enabling natural intelligence) is introduced in the framework. The AI Genome is seen as the set of knowledge, tools, assets and capabilities transferred over generations of AI systems and humans for building and operating AI systems in their shared environment. Accordingly, the concept of the AI Genome codes for the co-evolution of human and artificial intelligence over generations of both. The concept also narrows this co-evolution down to taking place between human and artificial intelligence – within human culture – as humans are the only known naturally intelligent species trying to develop and apply artificial intelligence. For related work exploring an AI/machine genome in the context of practopoiesis, see Nikolić (2015, 2017).

Nikolić D (2015) Practopoiesis: Or how life fosters a mind. Journal of theoretical biology. 373:40-61.

Nikolić D (2017) Why deep neural nets cannot ever match biological intelligence and what to do about it?. International Journal of Automation and Computing. 14(5):532-41.

The Aspects of Interest

Intelligence evolution, both natural and artificial, is happening all around us today.  How can the AI Cube help us better understand this evolution?  Let’s summarize key aspects of interest: (1) environment, (2) R&D processes, (3) systems and (4) craft.

Regarding the four main aspects of interest around the co-evolution of human and artificial intelligence (AI Genome) illustrated in Figure 1, the first obvious aspect of interest is the environment. The environment enables, tests and limits the co-evolutionary process of human and artificial intelligence (the evolution of the AI Genome) via interaction with all life, materials, entities and artifacts, as well as the biological, physical, social and artificial processes and systems within it. Through its dynamicity, uncertainties and adaptation over time, the environment introduces multidimensional and continuously increasing complexity. The overall complexity of the environment can also be seen as a source of motivation for building AI systems, in order to increase the stability and predictability of the environment for humans.

  • The natural environment (biological, physical and social processes and systems) has always introduced natural complexity and challenges for the evolution of natural intelligence. In addition to the natural environment, the artificial environment (artifacts and artificial systems) created via human activity introduces ever more complexity, which could be described as artificial complexity.

The challenge for the co-evolution of artificial and human intelligence (AI Genome) within the framework can be described as survival, welfare/well-being and ethical co-existence in an environment of continuously increasing complexity, uncertainty and adaptation.

The three other main aspects of interest around the co-evolution of human and artificial intelligence are:

  • AI Research and Development as a process creating new knowledge, tools and assets to develop AI systems.
  • AI Systems created and used, potentially providing value in their environment and at the same time serving as experiments (~phenotype) for the evolution of the AI Genome (~genotype).
  • AI Craft/Production as a set of capabilities to build and operate AI systems – by individuals or human organizations.

These three aspects are easy to comprehend when comparing them to the current state of the art and practice of building AI systems: a multidisciplinary research and development community exists for AI (AI Research and Development), skills such as programming, IT, project management and marketing exist for the production of AI systems in many organizations, and some individuals may even hold the skills to create AI systems on their own (AI Craft/Production). However, the framework does not exclude other, more futuristic interpretations of the same aspects, such as automated self-evolution (AI Research & Development), automated self-generation (AI Craft/Production) and fully autonomic AI systems (AI Systems), if that serves the need of applying the framework for analysis of the future evolution, progress and impact of AI.

Caveat: The ancient Greek, Aristotelian and creation-related terms in the parentheses under each of the aforementioned three aspects of interest are there to reflect the different roles of the main aspects from the viewpoint of creating artificial intelligence. They are not to be taken literally in their philosophical meaning or historical context, but may provide some useful analogies for understanding the organization of the framework.

Towards Opentech AI Architecture

Building better AI systems is of economic significance and academic interest.  So, in sum, a small reminder about the purpose and goal of the AI Cube framework. The goal is that it could serve as a tool for grasping the big picture of AI and would therefore be able to accommodate multiple different types of AI systems in different application domains. The goal is not to create a framework for creation or analysis of Artificial General Intelligence (AGI) or “superintelligence”, but a framework for widening the capabilities of AI systems in comparison to current narrow and highly task specific “one-off” systems.

Another goal is that the framework could serve as an analysis framework for AI systems and their evolutionary progress. Finally, it could potentially also facilitate the research, development and operation of future AI systems. As is, the framework might be useful for analysis of existing AI systems and of progress in the field at a very high abstraction level. However, it needs to be extended with a systematic and structured architecture framework and methodology in order to be useful for more detailed analysis, as well as for the development and operation of new AI systems. This is the direction we plan to go next, by taking a closer look at the central concept of the AI Genome introduced in the AI Cube framework. Open source AI systems on GitHub that can improve their performance on all the AI leaderboards, including industry-specific AI leaderboards, are the practical direction we aim to encourage.


Navigating the Sciences Jungle Around AI

As often stated in multiple sources, AI is a highly interdisciplinary field of research and development leaning on many other sciences (Neuroscience, Linguistics, Anthropology, Philosophy, Psychology, Cognitive Science, Systems Science, Computer Science, Mathematics, Biology, Physics and Social Science). When trying to get a grasp of the big picture of AI (as we are), one can really feel this first hand by easily getting lost in a dense jungle of publications coming from a variety of different fields, each flavored with different research approaches and methodologies.

Having experienced this jungle during the past weeks, and finally having come up with some kind of mental map for navigating it, I thought I would share it in this post. Maybe there are others out there navigating this same jungle who might be interested in it as well. It is by no means the only way to see the world around AI, but it is at least one attempt to bring a bit of structure and some pathways into the jungle. The map is presented in the form of a Venn diagram in the figure below.


The map was created by approaching AI from the viewpoint that, at its core, AI is systems science, computer science and mathematics, applied with the common goal of realizing machines that mimic natural intelligence and that interface and interact with other systems in their environment. The map presents numbered interfaces of different sciences overlapping with AI, each representing a pathway between the two sciences. Let’s explore and characterize those pathways a bit:

1. Cognitive science is also an interdisciplinary (Neuroscience, Linguistics, Anthropology, Philosophy, Psychology and AI) field of research studying the cognitive processes of a mind. Cognitive science produces knowledge about the cognitive processes related to intelligence and behavior and has over the past 40 years contributed many cognitive architectures describing and implementing these processes using computing systems of different kinds. This interface is mainly characterized by AI implementing cognitive architectures within intelligent agents, multi-agent systems and other intelligent systems for various applications.

1.1 Apart from Cognitive Science and complete cognitive architectures, Neuroscience has also influenced AI research directly by inspiring the development and application of different kinds of artificial neurons and artificial neural networks modeled on the structure and function of their biological counterparts in human/mammalian brains.

1.2 Apart from Cognitive Science and complete cognitive architectures, Linguistics has also been applied directly in AI research, especially in natural language processing, understanding and generation. As a field of research since the 1950s, natural language processing has used many different approaches, from largely hand-crafted, rather rigid and formal systems to softer, statistical, probability-based systems applying machine learning. In recent years, various word-vector-based (e.g. Word2vec) deep learning approaches have shown state-of-the-art performance in natural language processing tasks and challenges.
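
As a small illustration of the word-vector idea, the sketch below trains a tiny Word2vec model on a toy corpus and queries it for similar words. The gensim library is our choice for the example (the post does not prescribe a toolkit), and a real application would use a large corpus or pretrained vectors.

```python
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens. This is only for illustration;
# meaningful similarities require a much larger corpus or pretrained vectors.
corpus = [
    ["natural", "language", "processing", "with", "word", "vectors"],
    ["word", "vectors", "capture", "word", "similarity"],
    ["deep", "learning", "uses", "word", "vectors", "for", "language", "tasks"],
]

# Train a small Word2vec model (gensim 4.x parameter names).
model = Word2Vec(sentences=corpus, vector_size=50, window=2, min_count=1, epochs=100)

# Each word now maps to a dense vector; words used in similar contexts get similar vectors.
print(model.wv["language"][:5])                   # first few dimensions of one word vector
print(model.wv.most_similar("language", topn=3))  # nearest words within the toy corpus
```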

2. The interface of AI and biology is twofold. Firstly, intelligence is a natural (biological) phenomenon, which AI aims to mimic by creating artificial constructs capable of intelligent behavior as observed and judged by their biologically intelligent role models (e.g. humans). Secondly, AI research and development is bio-inspired as a whole, and new discoveries within biology may also trigger new discoveries within AI (e.g. potential new findings related to cells, neurons and the brain).

3. The interface of AI and physics is also twofold. Firstly, the capability of AI R&D to create artificially intelligent constructs is dependent on the availability and applicability of physical materials and energy for building and operating the constructs. Secondly, the physical environment and physical laws are highly relevant for AI applications that operate in the physical world (e.g. AI robotics applications) and may also need to be simulated in applications that operate only in the cyber world.

4. The interface of AI and Social Science is mainly related to the acceptance, impact, need and use of AI applications by human individuals and organisations. The value of AI applications is also defined by humans and human organisations in a social context.

So what have we learned from our visit to the jungle of AI-related sciences, and why is it relevant to identify the relations between AI and the related sciences? Firstly, the progress and success of AI exploitation is highly dependent on, and influenced by, the progress of R&D in the related fields, all of which are still active fields of research with potential for new discoveries relevant to AI R&D.

Secondly, multiple tracks of AI research and development are proceeding in parallel and are not necessarily mutually interoperable from the viewpoint of building new AI systems and applications (we will return to this issue later in a separate post). For now, let’s just highlight that the currently mainstream machine learning/deep learning branch of AI R&D has not (at least yet) produced a unified cognitive architecture, as stated in the recent review of cognitive architectures.

Finally, it seems that the majority of AI R&D is very task-specific and often carried out in isolation (e.g. in game worlds) from the “real world” use environment, problems and social context. The path from a state-of-the-art solution in a game world to an application in the real world may turn out to be quite a long and complex one. Accordingly, we hope to see more data sets, challenges and leaderboards with a closer link to real-world data, application potential, context and constraints.

AI is more than machine/deep learning

How do we approach it?

Artificial Intelligence (AI) was first established as a scientific discipline back in 1956, when it was given the name it still has today at the Dartmouth Workshop. For over 60 years, AI has been researched and developed through multiple hype cycles, with up periods of heavy investment and down periods of lower investment (a.k.a. AI winters). There have been, and still are, multiple philosophies and camps that approach AI quite differently. For example, one can categorize the approaches to AI by dividing them according to thinking vs. acting and human-like vs. rational.

In our work we do not (at least consciously) position ourselves in a specific school of thought, but rather approach AI as open technologies that emulate natural intelligence on a range of tasks exhibited by people, animals, and organizations across a wide range of contexts. Referring to the categorization example, this includes all of the categories, as humans are known to be able to both think and act, as well as to think rationally and irrationally/creatively.

The mainstream of AI research and development is currently focused on machine learning and its subset, deep learning, which has recently enabled conceptually narrow AI applications to reach or outperform human capabilities in some tasks. Examples of such narrow AI tasks include playing chess (IBM Deep Blue, 1997), Jeopardy! (IBM Watson, 2011) and Go (Google AlphaGo, 2016).

However, on tasks requiring general or conceptually wider intelligence, including learning and reasoning over a wide variety of abstract concepts and everyday objects in a given context, research and development is still in its infancy and not at the level of human intellectual performance. This field of AI, known as Artificial General Intelligence (AGI), focuses on the research and development of artificial intelligence that could perform any intellectual task that a human can. AGI R&D is closely related to Cognitive Science: it focuses on the machine realization of the human cognition studied by Cognitive Science.

The current state and progress of AI could be summarized by stating that AI is matching human performance in conceptually narrow tasks, and that the quest is towards a wider AI, perhaps one day matching the general intellectual capability of humans. Interesting ideas, theories, presentations and publications on this subject can be found from, e.g., Danko Nikolić, Joscha Bach and Peter Gärdenfors.

Our approach to Opentech AI, in our R&D work and in this blog, could also be characterized as a quest for wider AI. We acknowledge the power of machine learning and deep learning in narrow AI tasks, but at the same time we remain open and curious about all approaches and solutions that have potential for widening the intellectual scope of AI applications. AI is often contrasted with and compared against humans; however, the combination of human and machine intelligence has been found to be very powerful and is setting the direction of AI applications towards cognitive intelligent assistants for improving human capability and productivity.