This blog is established as part of a research exchange co-operation between VTT Technical Research Centre of Finland and IBM Research – Almaden. The blog aims to capture the big picture of research and development in the field of open artificial intelligence technology (Opentech AI), focusing especially on architecture, related ecosystems, progress and future directions. An overview of the topics discussed in this blog around Opentech AI is illustrated in the figure below.
We will use this blog to share hopefully interesting results and findings from the R&D work that we are doing on Opentech AI with colleagues in Finland and in the US. As the blog is about open technologies for AI (code + data + models + stacks + community + leaderboards), we would like to share it with whoever is interested.
In addition to blog postings, please note the “Opentech AI Resources” pages for more information on the topics covered. We will update the blog, as well as the resources pages, whenever we come across new interesting things that we think are worth sharing and discussing on the topic of Opentech AI. Comments, discussions and interaction around the postings and topics here are very welcome and greatly appreciated. You are also very welcome to contact us for further discussions and collaboration.
We will try to keep the blog light, interesting and informal. We try to include links that enable you to dig deeper into the topics of the postings. This is by no means a deep scientific forum – there are specific forums for that (e.g. arXiv, which is highly relevant for R&D on Opentech AI).
We are facing the breakthrough of data – one so significant that I believe future historians will name the 21st century the “era of data (and the world’s sixth extinction)”. My recent work on open data made me wonder whether we are riding on the crest of a wave or standing between a chasm and a mountain.
The IDC 2020 study shows the biggest challenges for companies on the path toward incorporating artificial intelligence: data and information management; regulatory change; cost and budget concerns; scarcity of talent in data science, engineering, and solution development; and challenges in security and privacy. Numerous companies still operate through siloed processes, technologies, teams, and projects. This makes it difficult to solve these challenges and to understand the value of investing in data, information management, and artificial intelligence. In AI projects, goals are often not set, or the projects do not scale. (Hamel 2021)
Why do companies face these challenges? In a white paper published by IDC (2017), David Reinsel, John Gantz, and John Rydning predict the amount of data produced and consumed globally (Figure 1). The prediction has grown further due to the COVID-19 pandemic, and it seems obvious that the amount of data will increase by a factor of 10 between 2014 and 2025, likely reaching 180 zettabytes. (Holst 2021; Reinsel et al. 2017)
However, only a small fraction of this data is retained. According to IDC, global storage capacity in 2021 will be about 6.8 zettabytes. (IDC 2020; Bhat 2018; Vijesh et al. 2021)
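To put these figures in perspective, a back-of-the-envelope calculation shows what the cited predictions imply; the sketch below simply re-derives the growth rate and storage fraction from the numbers above, with no additional data assumed.

```python
# Back-of-the-envelope check of the figures cited above (10x growth in
# data volume between 2014 and 2025; 180 ZB produced vs. 6.8 ZB stored).

def annual_growth_rate(total_factor: float, years: int) -> float:
    """Compound annual growth rate implied by a total growth factor."""
    return total_factor ** (1 / years) - 1

# A 10x increase over the 11 years from 2014 to 2025:
cagr = annual_growth_rate(10, 2025 - 2014)

# Of ~180 ZB produced per year, only ~6.8 ZB of storage capacity exists:
retained_fraction = 6.8 / 180

print(f"Implied annual data growth: {cagr:.1%}")          # ~23.3% per year
print(f"Fraction that could be stored: {retained_fraction:.1%}")  # ~3.8%
```

In other words, even if every byte of storage were filled, less than 4% of the data produced could be retained.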
Companies are facing a new challenge that can be described with another well-known example: industrial mass production. Climate-related discussion and dwindling storage space have caused a change in lifestyles and consumption patterns. We have begun to question what is truly important. Nevertheless, we are still struggling with waste and unused goods. Data seems to behave similarly.
We need to understand what data is important. The company that is capable of producing and utilizing valuable data will stand out. The barriers revealed in the IDC study confirm that only a few have the resources for this (Hamel 2021).
When discussing valuable data, it must be clarified that valuable data does not, by itself, bring value to a company. Data analysis is needed to extract valuable information from data. The process is challenging, as data varies greatly in form and properties. In addition, you must decide how to process the data, and where and how to use the resulting information. Predictions from data are increasingly made with machine learning methods; however, data collection has become one of the bottlenecks in machine learning development. Collecting, selecting, and organizing data is the most time-consuming part of a project. As a result, fewer resources remain for generating valuable information from the data. (Roh et al. 2021; Ismail et al. 2019; Vijesh 2021; Ogbuke et al. 2020; Azimi et al. 2020)
In addition, transferring the developed methods from the academic environment to industry is challenging. The conditions of industrial production are changing, and each factory is unique. Long-term production cycles, individual processes, and varying data accuracy, format and speed require solutions that fit the environment and can be retrained. In conclusion, locally produced solutions are difficult to reuse. (Azimi et al. 2020 p. 582; Zeiser et al. 2021 p. 599)
Companies are trying to solve their data challenges by hiring experts as if they were hiring home cleaners for their mansion. What does open data have to offer in this world’s biggest race?
Let’s proceed with the idea of industrial mass production and waste. Companies’ virtual storage is bursting with unused goods. One company hires a single virtual cleaner, a second hires hundreds of virtual cleaners, a third hires an entire virtual cleaning company, and a few hire an army of virtual cleaning companies.
Does the hiring company need to understand how the cleaning is done? What are these goods in storage, and how are they utilized? Do you think the companies with only a few cleaners will disappear under the trash?
According to Jed Sundwall (2018), “People don’t want data, they just want answers.” This is probably true – but how can one cleaner be as fast as an entire cleaning company? And how does this affect quality?
There is no shortcut to happiness. Quality suffers, and there is no way to be equally fast. For this reason, you must lure the neighborhood to help you: people who work with open or semi-open data.
I have been examining open data suitable for developing machine learning applications in industry. The examination revealed that companies protect their data strictly, even though they experience a lack of resources. Yet these resources are critical for understanding and utilizing the data. At the same time, there is a lack of high-quality training data for machine learning development (Roh et al. 2021). For example, if “perfect”, error-free data is used to train a machine learning model, transferring the model into a real, unstable industrial environment may become infeasible.
Companies are not willing to share data, but are they interested in cooperative utilization? What sort of data is more beneficial to competitors than to a company whose data is publicly researched, organized, and processed into information? Do applications need to be public? Does the company see who has applied the data, and how? How likely is it that similar data will become available sooner or later? Companies are only interested in results. How do we ensure that publishing data is more attractive than improper use or destruction?
There is no open channel to meet the needs of industrial companies and researchers.
While data producers are responsible for data quality and data access rights, data dealers are important in terms of data usability. It appears that even reputable sources have significant shortcomings that prevent efficient data reuse. Therefore, even those who want to provide their data for further use, and do the groundwork with high quality, face difficulties in transmitting the data. There are services and several ongoing projects (for example GAIA-X and IDS) enabling safe data transfer. However, these services focus on fluent data transfer rather than on motivating partners, researchers, and individuals to provide, search for and utilize data between stakeholders. Hackathons, data search engines, data collections and data banks try to offer solutions in these areas. Still, there seems to be much inconsistency between stakeholders’ demands and the services. If stakeholders are not fully served, data is incompletely utilized, or potential data is even lost. Data marketing is essential if we want to maximize the potential of data or use open data systematically in research. It would be unfortunate if the only mistake in near-perfect data production were that the results get trashed.
The growing amount of data, fundamental challenges, and inconsistencies in data discoverability indicate that the majority of the potential of open data goes unused. I have been searching for data with three main methods: data search engines, data collections and data banks. The sources I have used can be found here. You can also examine the reviewed features these sources provide for sorting and evaluating data. The usable data sources that I have collected can be found here. There is a wide range of data, as the needs of industry vary greatly. The most interesting data sets are widely reusable (for example, error detection and human movements). The goal was to find comprehensive data; thus, the potential of the data has mostly been assessed based on context and descriptions. The most important question remains:
Is the open data high quality?
Valuable data is context-specific: Data quality refers to how well the data meets user requirements. However, this does not mean that there is only one or a few value-generating solutions available for the data. Pruning the data properties at too early a stage has a detrimental effect on the data reuse potential. (Sundwall 2018, Zeiser et al. 2021)
According to Zeiser et al. (2021), data collected from processes is useful only when it contains both metadata (such as a process description and input parameters) and process expertise. Table 1 shows the quality requirements for industrial machine learning models. It seems that the requirements of industrial machine learning models are in many respects based on the properties of the available data. (Azimi et al. 2020 p. 580; Zeiser et al. 2021 p. 599)
The lack of understanding of the quality requirements for industrial machine learning models is reflected in the varying quality of open data. However, the lack of quality also affects commercial data. For this reason, collecting, selecting, and organizing data takes plenty of time regardless of the source. That doesn’t sound like an inspiring result, does it?
Do not forget the neighborhood. Always remember to be thankful for someone cleaning up the storage for you.
Azimi, S. & Pahl, C. (2020). A Layered Quality Framework for Machine Learning-driven Data and Information Models. Proceedings of the 22nd International Conference on Enterprise Information Systems (ICEIS 2020), Vol. 1, pp. 579-587. ISSN: 2184-4992. DOI: 10.5220/0009472305790587
Ogbuke, N., Yusuf, Y.Y., Dharma, K. & Mercangoz, B.A. (2020). Big data supply chain analytics: ethical, privacy and security challenges posed to business, industries and society. Production Planning & Control. DOI: 10.1080/09537287.2020.1810764
Roh, Y., Heo, G. & Whang, S. E. (2021). A Survey on Data Collection for Machine Learning: A Big Data – AI Integration Perspective. IEEE Transactions on Knowledge and Data Engineering, Vol. 33, No. 4, pp. 1328-1347. DOI: 10.1109/TKDE.2019.2946162
Vijesh, J.C., Raj, J.S. & Smys, S. (2021). Big Data Analytics: Tools, Challenges, and Scope in Data-Driven Computing. In: Raj, J.S. (ed.) International Conference on Mobile Computing and Sustainable Informatics (ICMCSI 2020). EAI/Springer Innovations in Communication and Computing. Springer, Cham. DOI: 10.1007/978-3-030-49795-8_67
Zeiser, A., van Stein, B. & Bäck, T. (2021). Requirements towards optimizing analytics in industrial processes. Procedia Computer Science, Vol. 184, pp. 597-605. ISSN: 1877-0509. DOI: 10.1016/j.procs.2021.03.074
The 52nd Hawaii International Conference on System Sciences (HICSS-52), organised last week, included two interesting papers taking the field of AI system architectures a couple of steps forward.
First, a paper titled “Machine Learning in Artificial Intelligence: Towards a Common Understanding” by Kühl, N., Goutier, M., Hirt, R. and Satzger, G. The paper addresses the relation and contribution of machine learning to artificial intelligence by reviewing the related literature, and by presenting a conceptual framework clarifying the role of machine learning in building artificially intelligent agents. The paper paves the way for multi-disciplinary discussion and further research on the topic.
Second, a paper titled “Digital Service: Technological Agency in Service Systems” by the authors of this blog (Pakkala, D. and Spohrer, J.). The paper provides a walk-through of the technical systems of digitalization and AI applications, and highlights the development towards increasingly autonomous technical systems, which hold the technological agency of service providers in interaction and value co-creation with service users. The paper introduces and defines two new abstractions (digital service and digital service membrane) with UML modeling examples, better connecting the fields of service science, artificial intelligence, systems thinking, software engineering and information systems.
Both papers are certainly worth a read if you are interested in acquiring a holistic and multi-disciplinary view on AI systems architecture.
AI systems may radically differ from other software intensive systems – How to address that in system architecture and engineering?
From the viewpoint of software and systems engineering, AI is today mainly seen and developed as an evolution of the smart/intelligent features of existing systems (e.g. enterprise systems/applications, robotics and bots), enabled by machine learning, increasing computing power and the availability of data. On the other hand, R&D on AI and cognitive architectures has produced framework implementations of general intelligent agents (e.g. SOAR and ACT-R), which have been used in building various AI systems – still relying largely on handcrafting the knowledge, behavior and learning of the system.
The way towards wider/general-purpose AI systems envisions very different kinds of systems: systems autonomously maintaining themselves, operating, learning and interacting over extended periods as part of society and culture, as presented in the vision of Software Social Organisms. Acknowledging that advanced AI systems may differ greatly from existing software-intensive systems (in terms of applicable methodologies, underlying technologies, organization, development, training, operation, maintenance, governance and way of interaction) suggests that an architecture framework for AI systems would be beneficial for the analysis and development of these systems – both for industry and academia. In this post, we take a step towards such an architecture framework for AI systems and discuss the potential benefits it could provide for the R&D community.
In a previous post, the AI Cube Framework was introduced as a big-picture, high-abstraction-level analysis tool for AI systems. In that somewhat multidimensional and complex framework, the central concept of the AI genome was also introduced. In this post, we describe the concept of the AI genome in more detail, deriving an initial outline of an architecture framework for AI systems (targeting conformance with ISO/IEC/IEEE 42010:2011). Specifically, the stakeholders and system life-cycle stages need to be shown and explained. Therefore, an alternative view on the AI genome is presented in Figure 1 below.
Figure 1. AI Genome: The main aspects of interest and stakeholders around the evolution of AI systems.
Figure 1 illustrates the AI genome in more detail, presenting the four main aspects of interest in the evolution of AI systems and identifying the related stakeholders of each aspect:
AI Research and Development – The continuously evolving research community of AI-related sciences, including researchers and developers making their results and assets available as a baseline for building AI systems. These include e.g. knowledge in the form of scientific publications, software and computing platforms, algorithms and data sets, as well as processes for building and operating AI systems.
AI Craft/Production – AI system stakeholders during its life-cycle: professionals involved in building an AI system in its various life-cycle phases (concept, design, development, deployment, training, operation & learning, maintenance & versioning, undeployment, re-deployment, deletion) and the methodologies/processes that those professionals apply. We know much about agile and lean software development and reinforcement learning, but are these processes and methodologies applicable for building AI systems that interact in the natural environment, where an individual bug or learning iteration might cause e.g. loss of life or substantial economic losses?
There is room and need for more research in applicable processes and methodologies for AI system crafting/production, with safety and quality as guiding principles instead of speed and development cost.
Another major area, which we know very little of today, is governance or institutional control of AI systems – presenting an opportunity for more research.
AI Systems – AI system collaborators (comparable to users of existing information, communication and automation technology systems), who interact with the AI system during its operational life-cycle phase to achieve a common goal or gain some value. Perhaps the closest example of this kind of stakeholder is the driver of an “autonomous” car, who collaborates with the car to get from place A to place B safely (a truly autonomous car would not need a driver – only a collaborator who gives the destination to go to safely). The AI system–collaborator interface presents many topics to consider in the development of AI systems:
Natural multi-modal interaction between the AI system and its collaborators (e.g. the UX of a conversational interface)
Expressing the capabilities of an AI system in a way that is understandable for collaborators (who are not technology experts but maybe e.g. healthcare professionals in their daily work)
Sharing of responsibility and control between the provider/manufacturer and collaborators in situations where the capabilities of the AI system are exceeded or the system fails. (For example, you can imagine situations of this kind when collaborating with “autonomous” cars – can I sleep and trust the car to handle every situation that we might encounter? At what point does the car hand control back to the driver – and is it already too late at that point to avoid an accident?)
Environment – The natural environment as a stakeholder, often considered under the umbrella of environmental and social responsibility in different organizations. Regarding AI systems, the related requirements concern the energy, material, ethical and cultural impacts of the system. In addition, an AI system might have positive or negative value for the environment. At first this might seem irrelevant or distant, but think again:
Energy usage of a computer cluster matching human capability in some task vs. human energy usage. Is the energy produced by burning coal or by harnessing wind/solar/geothermal power, and what is the related environmental footprint?
Perhaps it would be wise to also measure the energy and environmental performance of AI systems against a human equivalent, in addition to just measuring performance on a task. Such metrics are not there yet but are not hard to come up with (e.g. grams of CO2/flop and grams of CO2/task). Perhaps the challenge for AI systems development is to match human performance in these metrics as well.
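To make the grams-of-CO2-per-task idea concrete, a minimal sketch might look as follows. All numbers below are hypothetical placeholders, not measurements of any real system or person.

```python
# Illustrative sketch of the proposed environmental metric (g CO2/task).
# Every figure used here is a made-up placeholder, not measured data.

def grams_co2_per_task(energy_kwh: float,
                       grid_intensity_g_per_kwh: float,
                       tasks_completed: int) -> float:
    """Grams of CO2 emitted per completed task."""
    return energy_kwh * grid_intensity_g_per_kwh / tasks_completed

# A compute cluster drawing 0.5 kWh on a 400 gCO2/kWh grid, over 1000 tasks:
cluster = grams_co2_per_task(0.5, 400.0, 1000)   # 0.2 g CO2/task

# A rough human equivalent: ~0.1 kWh of food energy for an hour of work,
# with an assumed ~2000 gCO2 footprint per kWh of food, over 50 tasks:
human = grams_co2_per_task(0.1, 2000.0, 50)      # 4.0 g CO2/task
```

The comparison direction flips depending entirely on the assumed grid intensity and task counts, which is exactly why standardized metrics of this kind would be valuable.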
Materials used in building AI systems: availability, recycling, waste, alternative uses, toxicity and safety. This relates especially to the embodiment and hardware used for building AI systems.
Ethics of using human or other biological intelligence as a source for data and training of AI systems, and ethics of AI in general, especially in autonomous agents interacting in the natural environment (instead of closed game worlds or simulations)
Human culture may be impacted in many ways by the introduction of new AI systems (e.g. convincing image/video/story generation by AI systems matching human performance vs. fake news). It might be worthwhile to consider possible cultural impacts as well when designing new AI systems.
Figure 1 also presents the main subsystems of an AI system:
Embodiment – The physical, material structure of the AI system, enabling implementation of the other subsystems and bounding interfaces with the natural environment. Even though embodied AI often refers to robotic applications and instances of AI systems, all AI systems are embodied in one way or another (or they are a fairy tale). An AI system might be implemented as a system of interconnected computers running software and algorithms across a network (cloud-based AI) or as a single autonomous robot with embedded sensors and actuators (robotic AI). Hybrid distributed embodiments are also possible, especially in IoT and real-time-sensitive AI applications, where computing is preferred near the sensing and actuation point of the natural environment, while part of the computing might also take place in a separate computing cluster. Many new kinds of embodiments are also likely to be seen in future AI systems as a result of the energy efficiency challenges of current computing architectures and progress in R&D of materials and computing (e.g. quantum computing, neuromorphic computing and optical computing).
Perception and Actuation – The “sensorimotor” subsystem of the AI system, including all mechanisms related to the sensing and actuation of the system within the environment (via the embodiment subsystem). This may include functionalities for one or more sensing modalities (e.g. vision (graphics), audition (sound), tactition (vibration/motion) and equilibrioception (balance)), as well as for actuation modalities (e.g. text generation, sound/speech generation and locomotion).
Memory – Subsystem including all the different memory mechanisms of an AI system. This may include both short-term/working memory and long-term memory (possibly as further subsystems). For example, the short-term/working memory may be further divided into sensory and context memory subsystems, and the long-term memory may be further divided into declarative, procedural, episodic and associative memory subsystems.
Behavior – Subsystem including all behavior related mechanisms of the AI System. These may include e.g. attention, goals, deliberation, decision making/action selection, reactions and reasoning mechanisms.
Cognition – Subsystem including all higher-cognition-related mechanisms accumulating the internal world model of the system, and possibly adapting the behavior of the AI system over time. These may include self- and context-awareness, learning, meta-cognition, motivation, emotion and reward mechanisms.
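As a rough illustration of how these subsystems relate, they could be wired together into a minimal sense–remember–decide–act skeleton. The class and method names below are our own assumptions for the sketch, not any standard API, and the embodiment here is simply the hosting process.

```python
# Minimal skeleton of the subsystem organization described above.
# Class and method names are illustrative assumptions, not a standard.

class PerceptionActuation:
    def sense(self, environment):        # one or more sensing modalities
        return {"observation": environment}

    def act(self, action):               # e.g. speech generation, locomotion
        return f"performed: {action}"

class Memory:
    def __init__(self):
        self.short_term = []             # working/context memory
        self.long_term = []              # declarative/procedural/episodic

    def store(self, percept):
        self.short_term.append(percept)

class Behavior:
    def decide(self, memory):            # attention, goals, action selection
        latest = memory.short_term[-1]
        return f"react-to-{latest['observation']}"

class Cognition:
    def learn(self, memory):             # adapt the internal world model
        return len(memory.short_term)    # trivial stand-in for learning

class AISystem:                          # embodiment = the hosting process
    def __init__(self):
        self.pa, self.memory = PerceptionActuation(), Memory()
        self.behavior, self.cognition = Behavior(), Cognition()

    def step(self, environment):
        self.memory.store(self.pa.sense(environment))
        self.cognition.learn(self.memory)
        return self.pa.act(self.behavior.decide(self.memory))

agent = AISystem()
print(agent.step("obstacle"))   # performed: react-to-obstacle
```

Any real organization of these subsystems would of course be far richer; the point is only that the five subsystems form one interacting loop.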
The subsystem organization presented here does not propose any specific embodiment, perception, actuation, memory, behavior or cognition system organization; its purpose is to serve the analysis and design of AI systems with various organizations in these subsystems. However, we refer the reader to the recent developments and proposals on a standard model of the mind, and to the review of cognitive architectures, both closely related to the three subsystems of memory, behavior and cognition.
Furthermore, Figure 1 illustrates the role of the internal world model and internal data processing subsystems of an AI system:
Internal world model – The natural environment, including human culture, as the AI system comprehends it internally: all knowledge contained by the AI system as a result of training and of learning while in operation. This includes, for example, all objects, concepts, other intelligent entities and rules/policies that the AI system may use in operation throughout all the subsystems. The internal world model may be symbolic (e.g. ontology-based) and explicitly accessible for human monitoring and governance, or it may be fully internal, non-symbolic and inaccessible to humans (e.g. emergent architectures).
Internal data processing – The internal, continuous data processing and analysis pipeline of the AI system, defining the mechanism that converts the data flow from the perception subsystem into information, knowledge and wisdom as modifications of the internal world model and behavior. For example, consider the data processing involved in using sensors (data) to identify an approaching object, classifying it as a car (information), identifying the potential danger of an approaching car (knowledge), and comprehending the danger of the situation, resulting in the action of moving away from the estimated trajectory of the moving car (wisdom).
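The approaching-car example can be sketched as a chain of pipeline stages, one per transition in the data–information–knowledge–wisdom chain. The classification logic and rules below are hypothetical stand-ins, not a real perception model.

```python
# Sketch of the internal data processing pipeline (data -> information ->
# knowledge -> wisdom), using the approaching-car example above.
# The classification and rules are hypothetical stand-ins for real models.

def to_information(sensor_data):
    # data -> information: classify raw readings as an object type
    return "car" if sensor_data.get("shape") == "vehicle" else "unknown"

def to_knowledge(obj, sensor_data):
    # information -> knowledge: assess danger from type and trajectory
    return obj == "car" and sensor_data.get("approaching", False)

def to_wisdom(dangerous):
    # knowledge -> wisdom: choose an action that modifies behavior
    return "move away from trajectory" if dangerous else "continue"

reading = {"shape": "vehicle", "approaching": True}
action = to_wisdom(to_knowledge(to_information(reading), reading))
print(action)  # move away from trajectory
```

In a real system each stage would also update the internal world model, rather than just passing a value forward.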
As illustrated in Figure 1, the external source of data for AI systems is the natural environment, including the subculture/domain of the collaborators of the system, which can be perceived via interactions with the environment. These interactions may include, for example, conversation with collaborators and perceiving recorded human culture in various formats (e.g. videos, images, text and audio).
The approach taken here for outlining an architecture framework for AI systems is a hybrid, model-driven approach combining agent orientation and a holistic two-dimensional system orientation, where systems thinking is applied both to the external environment and to the internal organization of the AI system. Agent orientation is applied by defining the AI system as an autonomous, physically bounded entity/organism interacting with its environment. The model-driven approach is adopted to enable defining the AI system and its subsystems independently of the technological platform used for their implementation, based purely on stakeholder requirements towards the system (~Platform Independent Model – PIM). The approach also enables designing multiple different technology-specific system architectures and implementations (~Platform Specific Model – PSM) from one PIM.
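The PIM/PSM split can be illustrated in code: one platform-independent definition of a subsystem's responsibilities, with several platform-specific realizations derived from it. The names below are our own illustration; no specific modeling toolchain is implied.

```python
# Sketch of the PIM/PSM idea: one platform-independent definition of a
# subsystem, with multiple platform-specific realizations derived from it.
# Class names are illustrative assumptions only.

from abc import ABC, abstractmethod

class PerceptionPIM(ABC):
    """Platform Independent Model: what the subsystem must do."""
    @abstractmethod
    def perceive(self, raw: bytes) -> str: ...

class CloudPerceptionPSM(PerceptionPIM):
    """Platform Specific Model: a cloud-based realization."""
    def perceive(self, raw: bytes) -> str:
        return f"cloud-processed {len(raw)} bytes"

class EmbeddedPerceptionPSM(PerceptionPIM):
    """Platform Specific Model: an on-device/robotic realization."""
    def perceive(self, raw: bytes) -> str:
        return f"on-device processed {len(raw)} bytes"

# The same PIM-level design supports several technology-specific systems:
for psm in (CloudPerceptionPSM(), EmbeddedPerceptionPSM()):
    print(psm.perceive(b"\x01\x02\x03"))
```

The stakeholder requirements live at the PIM level; only the PSM classes commit to a technology platform.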
The main benefit of the architecture framework would be bringing the internal world model, internal data processing, embodiment, perception, actuation, cognition, behavior and memory mechanisms of an AI system forward, making them transparent and open to consideration already in the concept and design phases of the system life-cycle. This would also enable gathering and considering all stakeholder requirements towards these mechanisms and the overall system early in the development process. In current AI system implementations these mechanisms are often only partly considered, defined and transparent, but they would serve as exemplars for analysis within the framework outlined.
From the viewpoint of analyzing progress in AI research, technology and systems, the outlined architecture framework divides AI systems into seven subsystems and identifies the basic life-cycle phases related to these. This enables finding, defining and mapping AI system performance metrics, challenges and leaderboards in a modular fashion, corresponding to the subsystems and life-cycle phases identified. In other words, the architecture framework can be used to create a corresponding metrics, challenges and leaderboards framework for evaluating and monitoring performance and progress in the various AI subsystems – indicating R&D progress in the field widely from the viewpoint of AI systems engineering. The architecture framework outlined also raises questions regarding the need for new metrics, challenges and leaderboards on aspects such as:
How to measure the breadth/depth of knowledge acquired by an AI system (breadth/depth of internal world model)?
How to measure the degree of autonomy of an AI system? (related: How to measure/estimate the life-time costs of an AI system?)
How to measure the adaptation/evolution of an AI system? (related: How to measure/estimate the life-time costs of an AI system?)
How to measure modal capabilities of an AI system?
How to measure cross-modal understanding of an AI system? (to avoid parallel systems per modality)
How to measure energy efficiency, ethical compliance, cultural impact and material footprint of AI systems? (related: Environmental and social responsibility of organizations involved in using, building, providing and operating AI systems)
We believe that the approach and outlined architecture framework for AI systems presented here is novel in that it combines multiple architectural approaches, today used widely independently of each other, into an architectural approach and framework for improving the analysis and design of highly complex AI systems. If you think differently, please let us know – pointers to similar works or alternative views on the topic are welcome and greatly appreciated. If you are interested in collaborating on our way forward on AI system architectures and Opentech AI in general, please do not hesitate to contact us!
On March 14th at the IBM Finland HQ in Helsinki, we are inviting around 100 people interested in industry-specific Opentech AI, from AI for healthcare, retail, finance, energy, IoT, manufacturing, education, and other application areas. More coming soon…
Please email Jim Spohrer <firstname.lastname@example.org> and/or Daniel Pakkala <Daniel.Pakkala@vtt.fi> if you are interested in learning more about the workshop. We are looking for collaborators who know of open AI challenge leaderboards or capabilities that should be discussed at the Opentech AI Workshop. A leaderboard is a public website that provides information about an entity’s standing or ranking with respect to a specific best-effort attempt to solve a challenge or task. Open leaderboards advance science, since they allow others to replicate results by providing access to code, data, and compute resources.
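As a data structure, a leaderboard in the sense described above is just a ranked list of best-effort attempts at a task, each pointing at open code and data for replication. The field names in this sketch are our own illustration, not any existing leaderboard API.

```python
# Minimal sketch of a leaderboard: ranked best-effort attempts at a task,
# each pointing to open code and data for replication.
# Field names are illustrative assumptions only.

from dataclasses import dataclass, field

@dataclass
class Entry:
    entity: str
    score: float
    code_url: str = ""   # open code enables replication
    data_url: str = ""   # open data enables replication

@dataclass
class Leaderboard:
    task: str
    entries: list = field(default_factory=list)

    def submit(self, entry: Entry):
        self.entries.append(entry)

    def ranking(self):
        # higher score = better standing on this task
        return sorted(self.entries, key=lambda e: e.score, reverse=True)

board = Leaderboard("question answering")
board.submit(Entry("team-a", 0.81))
board.submit(Entry("team-b", 0.87))
print([e.entity for e in board.ranking()])  # ['team-b', 'team-a']
```

Real leaderboards add held-out test sets, submission validation and versioning, but the ranking core is this simple.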
Another way of formulating the question is: Where can AI be applied in an industry-specific manner (a task with open-access data and code) to benchmark and improve industry-standard performance, and to grow more opportunities for value creation? First, understanding where leaderboards already exist is important (e.g., Wikipedia Question Answering); second, understanding where to create new leaderboards to help drive progress is an AI-era methodology to master (e.g., the Breast Cancer Tumor Proliferation Challenge). “Leaderboard solvers” are like hammers, and strategically placing “leaderboard nails” is what governments and industry leaders must learn to do. This methodology will become increasingly important to master as the pace of AI progress advances, driven in part by Moore’s Law (increasing compute power to solve classes of problems), zettabytes of data (the new oil), AI-related model marketplaces (Bot Asset Exchange, Acumos, etc.), algorithms (deep learning and reasoning), open source code (TensorFlow/Keras, Caffe, PyTorch, etc. on GitHub), commercialization-friendly licenses (Apache 2.0), and open governance (Linux Foundation, Apache Foundation, etc.). These factors are combining today, and organizations need to understand them better.
In my previous post on AI-related sciences, I addressed the multidisciplinarity of AI as a source of complexity in seeing the big picture of AI. In that post, pathways between diverse fields of research were illustrated in a Venn diagram of overlapping circles. Beyond these pathways, a framework capturing the fundamental aspects underlying AI and all the related fields of research would serve even better as a tool for grasping a big-picture view of AI. It could also facilitate analyzing the long-term progress of AI R&D.
In this post, we will share a first version of such a framework around AI, which we have formulated as a cube in order to capture and frame a big-picture view of AI, and to establish a “world view” for further work on Opentech AI architecture. By sharing it, we hope to encourage feedback and comments, especially on any fundamental aspects of intelligence evolution we may have missed. The goal of the framework is to serve as an analysis framework for AI systems and their evolution – potentially also facilitating the research, development and operation of future AI systems in various application domains and across industry-specific tasks and leaderboards.
Framing and visualizing a big picture view of AI in a single picture, without leaving out something potentially relevant, is not an easy task. However, Figure 1 below is an attempt to do so in the form of the “AI Cube” framework. The goal of the framework is to frame the complex and multidimensional context of AI in a way that can accommodate diverse approaches to AI, as well as capture the relevant aspects of research, development, implementation, operation and use of AI systems across industries.
Figure 1. The AI Cube framework for analysis of AI systems and their evolution.
Let’s walk through the AI Cube framework illustrated in Figure 1 and explain some of the reasoning behind it. Consider time. Time matters in so many ways: to individual entities (phenotypes), individual species (genotypes), the universe and all species (evolutionary time), as well as to ideas and progress in individual disciplines (a community of academics). Starting with time, the framework illustrated in Figure 1 highlights two different time dimensions: evolutional (shown evolving into the page) and generational (shown evolving from left to right across the page). This two-dimensionality of time reveals the evolutional gap between natural and artificial intelligence. Intelligence as a natural phenomenon has at least 3.7 (Last Universal Ancestor; all species based on DNA) to 4.5 (history of Earth) billion years of evolutionary history, in an environment dating back 13.7 billion years to the Big Bang. In comparison, the history of artificial intelligence goes back only about 60 years, to the Dartmouth Workshop, while sharing the same environment. Accordingly, the evolutional time gap between natural and artificial intelligence is on the scale of billions (10⁹) of years. Whereas time and timing are certainly relevant for both natural and artificial intelligence, equally important is the environment and context where perceptions, actions and sequences of those – interactions – can take place. In Figure 1, context – things in the environment of evolving AI systems – is shown going from bottom to top of the page. For example, all the data of Wikipedia was necessary to build IBM Watson’s Jeopardy! system. As computing capabilities, accessible data sets, code, algorithms, and models increase in diversity, the environment and context of each AI system becomes increasingly complex. The rest of the post will walk you through the main dimensions, concepts and aspects of interest in the AI Cube framework.
For related work that explores natural and artificial intelligence, see the work of Hernández-Orallo (2016).
Hernández-Orallo J (2016) The measure of all minds: evaluating natural and artificial intelligence. Cambridge University Press.
Main Dimensions and Concepts
Due to the difference in scale between evolutional and generational time related to intelligence, the framework treats these two scales of time as two separate main dimensions. The evolutional time dimension allows addressing the evolution of artificial intelligence in comparison to the evolution of natural intelligence, where different species (types of AI systems) and their populations (instances of AI systems) may exist in evolutional time in the environment. The generational time dimension allows addressing the co-existence of AI systems in a shared environment with natural intelligence (including living humans, human culture and existing human artifacts) at present or at any given time.
The third main dimension in the framework is context, which allows addressing the co-existence and interaction of an AI system or family of AI systems (~species) with the relevant processes, systems and entities in the environment. This enables, for example, value, requirements and impact analysis of AI systems in different phases of the system life-cycle (planning, design, development or deployment). The relevant context of an AI system or family of AI systems is always a subset of the environment. The environment is considered to contain all life, materials, entities and artifacts, as well as the biological, physical, social and artificial processes and systems which those may compose at a given time. Accordingly, the environment is seen to be in constant change as a result of all the processes and systems interacting within it.
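To make the three main dimensions a bit more concrete, here is a minimal, hypothetical sketch in Python (the class and field names are our own illustration, not part of the framework itself) of how a single AI system could be recorded against the cube’s axes:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """Illustrative record of one AI system placed in the AI Cube."""
    name: str
    generational_time: int                      # left-to-right axis (e.g., a year)
    evolutional_generation: int                 # into-the-page axis (type lineage)
    context: set = field(default_factory=set)   # relevant subset of the environment

watson = AISystemRecord(
    name="Watson Jeopardy!",
    generational_time=2011,
    evolutional_generation=1,
    context={"Wikipedia data", "question answering task", "human contestants"},
)

# The relevant context is always a subset of the wider environment.
environment = watson.context | {"other systems", "regulation", "hardware"}
assert watson.context <= environment
```

The point of the sketch is only that each system gets a position on the two time axes plus an explicit context set, which by construction is a subset of the environment.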
For example, the creation/emergence of an AI system at time (t) will therefore result in a new artificial system and entity being present in the environment at time (t+1).
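As a toy illustration of this point (a sketch under our own naming assumptions, not part of the framework), the environment can be modeled as a set of entities that grows when a new AI system emerges:

```python
def step(environment: frozenset, new_entities: set) -> frozenset:
    """Advance the environment one time step: everything present at time t
    persists, and newly created artificial systems are added."""
    return environment | frozenset(new_entities)

env_t = frozenset({"humans", "Wikipedia"})          # environment at time t
env_t_plus_1 = step(env_t, {"new AI system"})       # environment at time t+1

assert "new AI system" not in env_t                 # not yet present at time t
assert "new AI system" in env_t_plus_1              # present at time t+1
```

This also shows why the environment is in constant change: every deployed system becomes part of the context that later systems must deal with.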
To address the evolutionary view of AI through time in the environment, a central concept of the AI Genome (analogous to the biological genome enabling natural intelligence) is introduced in the framework. The AI Genome is seen as the set of knowledge, tools, assets and capabilities transferred over generations of AI systems and humans for building and operating AI systems in their shared environment. Accordingly, the AI Genome codes for the co-evolution of human and artificial intelligence over generations of both. The concept also narrows the co-evolution down to taking place between human and artificial intelligence – within human culture – as humans are the only known naturally intelligent species trying to develop and apply artificial intelligence. For related work that explores the AI/machine genome in the context of practopoiesis, see the work of Nikolić (2015, 2017).
Nikolić D (2015) Practopoiesis: Or how life fosters a mind. Journal of theoretical biology. 373:40-61.
Nikolić D (2017) Why deep neural nets cannot ever match biological intelligence and what to do about it?. International Journal of Automation and Computing. 14(5):532-41.
The Aspects of Interest
Intelligence evolution, both natural and artificial, is happening all around us today. How can the AI Cube help us better understand this evolution? Let’s summarize key aspects of interest: (1) environment, (2) R&D processes, (3) systems and (4) craft.
Regarding the four main aspects of interest around the co-evolution of human and artificial intelligence (AI Genome) illustrated in Figure 1, the first obvious aspect of interest is the environment. The environment enables, tests and limits the co-evolutionary process of human and artificial intelligence (the evolution of the AI Genome) via interaction with all life, materials, entities and artifacts, as well as the biological, physical, social and artificial processes and systems within it. Through its dynamics, uncertainties and adaptation over time, the environment introduces multidimensional and continuously increasing complexity. The overall complexity of the environment can also be seen as a source of motivation for building AI systems, in order to increase the stability and predictability of the environment for humans.
The natural environment (biological, physical and social processes and systems) has always introduced natural complexity and challenges for the evolution of natural intelligence. In addition, the artificial environment (artifacts and artificial systems) created through human activity introduces ever more complexity, which could be described as artificial complexity.
The challenge for the co-evolution of artificial and human intelligence (AI Genome) within the framework can be described as survival, welfare/well-being and ethical co-existence in an environment of continuously increasing complexity, uncertainty and adaptation.
The three other main aspects of interest around the co-evolution of human and artificial intelligence are:
AI Research and Development as a process creating new knowledge, tools and assets to develop AI systems.
AI Systems created and used potentially providing value in their environment, and at the same time serving as experiments (~phenotype) for evolution of AI Genome (~genotype).
AI Craft/Production as a set of capabilities to build and operate AI systems – by individuals or human organizations.
These three aspects are easy to comprehend when comparing them to the current state of the art and practice of building AI systems: a multidisciplinary research and development community exists for AI (AI Research and Development); skills like programming, IT, project management, marketing, etc. exist for the production of AI systems in many organizations; and some individuals may even hold the skills to create AI systems on their own (AI Craft/Production). However, the framework does not exclude other, more futuristic interpretations of the same aspects, such as automated self-evolution (AI Research & Development), automated self-generation (AI Craft/Production) and fully autonomic AI systems (AI Systems), if that serves the need of applying the framework to analyze the future evolution, progress and impact of AI.
Caveat: The ancient Greek, Aristotelian and creation-related terms in the parentheses under each of the aforementioned three aspects of interest are there to reflect the different roles of the main aspects from the viewpoint of creating artificial intelligence. They are not to be taken literally in their philosophical meaning or historical context, but may provide some useful analogies for understanding the organization of the framework.
Towards Opentech AI Architecture
Building better AI systems is of economic significance and academic interest. So, in sum, a small reminder about the purpose and goal of the AI Cube framework. The goal is for it to serve as a tool for grasping the big picture of AI, and therefore to accommodate multiple different types of AI systems in different application domains. The goal is not to create a framework for the creation or analysis of Artificial General Intelligence (AGI) or “superintelligence”, but a framework for widening the capabilities of AI systems in comparison to current narrow and highly task-specific “one-off” systems.
Another goal is that the framework could serve as an analysis framework for AI systems and their evolutionary progress. Finally, it could potentially also facilitate the research, development and operation of future AI systems. As is, the framework may be useful for analyzing existing AI systems and progress in the field at a very high abstraction level. However, it needs to be extended with a systematic and structured architecture framework and methodology in order to be useful for more detailed analysis, as well as for the development and operation of new AI systems. This is the direction we plan to take next, by looking more closely at the central concept of the AI Genome introduced in the AI Cube framework. The practical direction we aim to encourage is open source AI systems on GitHub that can improve their performance on AI leaderboards, including industry-specific ones.
Having experienced this jungle during the past weeks, and having finally come up with some kind of mental map for navigating it, I thought I would share it in this post. Maybe there are others out there navigating this same jungle who could be interested in it as well. It is by no means the only way to see the world around AI, but it is at least one attempt to bring a bit of structure and some pathways into the jungle. The map is presented in the form of a Venn diagram in the figure below.
The map was created by approaching AI from a viewpoint in which its core is system science, computer science and mathematics, applied with the common goal of realizing machines that mimic natural intelligence and that interface and interact with other systems in their environment. The map presents numbered interfaces of different sciences overlapping with AI, each representing a pathway between the two sciences. Let’s explore and characterize those pathways a bit:
1. Cognitive science is also an interdisciplinary (Neuroscience, Linguistics, Anthropology, Philosophy, Psychology and AI) field of research studying the cognitive processes of a mind. Cognitive science produces knowledge about the cognitive processes related to intelligence and behavior and has over the past 40 years contributed many cognitive architectures describing and implementing these processes using computing systems of different kinds. This interface is mainly characterized by AI implementing cognitive architectures within intelligent agents, multi-agent systems and other intelligent systems for various applications.
1.1 Apart from Cognitive Science and complete cognitive architectures, Neuroscience has also influenced AI research directly by inspiring AI research to develop and apply different kinds of artificial neurons and artificial neural networks inspired by the structure and function of their biological counterparts in human/mammal brains.
1.2 Apart from Cognitive Science and complete cognitive architectures, Linguistics has also been applied directly in AI research, especially in natural language processing, understanding and generation. As a field of research since the 1950s, natural language processing has used many different approaches, from largely hand-crafted, rather rigid and formal systems to softer, statistical, probability-based systems applying machine learning. In recent years, deep learning approaches based on word vectors (e.g., Word2vec) have shown state-of-the-art performance in natural language processing tasks and challenges.
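As a minimal sketch of the word-vector idea (with toy, hand-made vectors – real Word2vec embeddings are learned from large corpora and typically have hundreds of dimensions), words are represented as points in a vector space, and semantic similarity is measured geometrically, for example by cosine similarity:

```python
import math

# Toy 3-dimensional "word vectors" (hand-made for illustration only).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Semantically related words end up closer together in the vector space.
assert cosine_similarity(vectors["king"], vectors["queen"]) > \
       cosine_similarity(vectors["king"], vectors["apple"])
```

Trained embeddings put this geometry to work at scale: nearest neighbors in the space tend to be semantically related words, which is what downstream deep learning models exploit.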
2. The interface of AI and biology is twofold. Firstly, intelligence is a natural (biological) phenomenon, which AI aims to mimic by creating artificial constructs capable of intelligent behavior, observed and judged by their biologically intelligent role models (e.g., humans). Secondly, AI research and development is bio-inspired as a whole, and new discoveries within biology may also trigger new discoveries within AI (e.g., potential new findings related to cells, neurons and the brain).
3. The interface of AI and physics is also twofold. Firstly, the capability of AI R&D to create artificially intelligent constructs depends on the availability and applicability of physical materials and energy for building and operating the constructs. Secondly, the physical environment and physical laws are highly relevant for AI applications that operate in the physical world (e.g., AI robotics applications), and may also need to be simulated in applications that operate only in the cyber world.
4. The interface of AI and Social Science is mainly related to the acceptance, impact, need and use of AI applications by human individuals and organisations. The value of AI applications is also defined by humans and human organisations in a social context.
So what have we learned from our visit to the jungle of AI-related sciences, and why is it relevant to identify the relations between AI and related sciences? Firstly, the progress and success of AI exploitation is highly dependent on, and influenced by, progress in R&D in the related fields, all of which are still active fields of research with the potential for new discoveries relevant to AI R&D.
Secondly, multiple tracks of AI research and development are proceeding in parallel and are not necessarily mutually interoperable from the viewpoint of building new AI systems and applications (we will return to this issue in a separate post). For now, let’s just highlight that the currently mainstream machine learning/deep learning branch of AI R&D has not (at least yet) presented a unified cognitive architecture, as stated in a recent review of cognitive architectures.
Finally, it seems that the majority of AI R&D is very task specific and often carried out in isolation (e.g., in game worlds) from the “real world” use environment, problems and social context. The path of transforming a state-of-the-art solution from a game world into an application in the real world may turn out to be quite long and complex. Accordingly, we hope to see more data sets, challenges and leaderboards with a closer link to real-world data, application potential, context and constraints.
Artificial Intelligence (AI) was first established as a scientific discipline back in 1956, when it was given the name it still has today at the Dartmouth Workshop. For over 60 years, AI has been researched and developed through multiple hype cycles, with up periods of heavy investment and down cycles of lower investment (a.k.a. AI winters). There have been, and still are, multiple philosophies and camps that approach AI quite differently. For example, one can categorize the approaches to AI along the axes of thinking vs. acting and human-like vs. rational.
In our work we do not (at least consciously) position ourselves into a specific school of thought but rather approach AI as open technologies that emulate natural intelligence on a range of tasks exhibited by people, animals, and organizations across a wide range of contexts. Referring to the categorization example, this includes all the categories, as humans are known to be able to both think and act, as well as think rationally and irrationally/creatively.
The mainstream of AI research and development is currently focused on machine learning and its subset, deep learning, which has recently enabled conceptually narrow AI applications to reach or outperform human capabilities in some tasks. Examples of such narrow AI tasks include playing chess (IBM Deep Blue, 1996), Jeopardy! (IBM Watson, 2011) and Go (Google AlphaGo, 2016).
However, for tasks requiring general or conceptually wider intelligence, including learning and reasoning over a wide variety of abstract concepts and everyday objects in a given context, research and development is still in its infancy and not at the level of human intellectual performance. This field of AI, known as Artificial General Intelligence (AGI), focuses on the research and development of artificial intelligence that could perform any intellectual task that a human can. AGI R&D is closely related to Cognitive Science: it focuses on the machine realization of the human cognition that Cognitive Science studies.
The current state and progress of AI could be summarized by stating that AI is matching human performance in conceptually narrow tasks, and the quest is toward a wider AI, perhaps one day matching the general intellectual capability of humans. Interesting ideas, theories, presentations and publications on this subject can be found from, e.g., Danko Nikolić, Joscha Bach and Peter Gärdenfors.
Our approach to Opentech AI in our R&D work and in this blog could also be characterized as a quest for wider AI. We acknowledge the power of machine learning and deep learning in narrow AI tasks, but at the same time remain open and curious about all approaches and solutions that have potential for improving the intellectual wideness of AI applications. AI is often contrasted with and compared against humans; however, the combination of human and machine intelligence has been found to be very powerful, and it is setting the direction of AI applications toward cognitive intelligent assistants that improve human capability and productivity.