We posit that, given the current state of development of cognitive science, the greatest synergies between this field and artificial intelligence arise when one adopts a high level of abstraction. On the one hand, we suggest, cognitive science embodies some interesting, potentially general principles regarding cognition under limited resources, and AI systems that violate these principles should be treated with skepticism. On the other hand, attempts to precisely emulate human cognition in silicon are hampered both by their ineffectiveness at exploiting the power of digital computers and by the current paucity of algorithm-level knowledge as to how human cognition takes place. We advocate a focus on artificial general intelligence design: building systems that capture the salient high-level features of human intelligence (e.g., goal-oriented behavior, sophisticated learning, self-reflection), yet with software architectures and algorithms specifically designed for effective performance on modern computing hardware. We give several illustrations of this broad principle drawn from our work, including the adaptation of estimation of distribution algorithms in evolutionary programming for complex procedure learning.

Level of Organization and Possible Mappings

In David Marr’s seminal decomposition, any information-processing system may be understood at three nearly independent levels: (1) computational theory, a description of the problems the system attempts to solve; (2) representations and algorithms; and (3) implementation, the physical instantiation of the system’s representations and algorithms (Marr 1982). Taking this as a springboard, we may characterize approaches to AI vis-à-vis human cognition by which levels they deem appropriate for mappings between human cognition and AI.

One extreme consists of AI approaches that do not draw mappings between natural and artificial cognition at any of these levels. The most notable examples at present are purely abstract mathematical theories of intelligence, such as the work of Schmidhuber (2005a; 2005b) and Hutter (2005). However, these approaches rest on the assumption of computing power vastly greater than anything that will be physically achievable in the foreseeable future. It is possible that such approaches will be augmented in the future by more plausible heuristics for dealing with limited computational power, but until this occurs they have little relevance to practical AI.

At the other end of the spectrum are claims such as Kurzweil’s that the most effective path to strong AI is to “reverse-engineer” the human brain to as precise a level as is necessary to replicate its functioning (Kurzweil 2000). Others, such as Jeff Hawkins, have taken a slightly more moderate approach, arguing for understanding the brain fully and then creating loosely brain-like AI architectures inspired by what one has learned from neuroscience, without necessarily trying to imitate the details of human brain structure and function (Hawkins & Blakeslee 2004). Both Kurzweil and Hawkins, in different ways, are suggesting a mapping between humans and AIs at Marr’s level of representations and algorithms. Connectionist AI, generally speaking, is a different sort of attempt to create a mapping at this level, while moving a bit further up the hierarchy and dealing with representations and algorithms that emulate brain structure and function only at a coarse level.
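To make Marr’s distinction concrete, here is a toy example of our own (not one of Marr’s): two programs that agree completely at the computational level, in that both solve the problem of sorting a list of integers, while sharing nothing at the level of representations and algorithms. This agreement-without-emulation is the kind of mapping we will advocate between AGI systems and human cognition.

```python
# Toy illustration: identical computational theory (sorting),
# entirely different representations and algorithms.

def sort_by_merging(xs):
    """Divide and conquer: recursively split the list, then merge sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

def sort_by_counting(xs):
    """Tallying: count how often each value occurs, then replay the tallies in order."""
    if not xs:
        return []
    lo = min(xs)
    counts = [0] * (max(xs) - lo + 1)
    for x in xs:
        counts[x - lo] += 1
    return [lo + v for v, c in enumerate(counts) for _ in range(c)]

# Same problem solved, no shared mechanism:
assert sort_by_merging([5, 3, 8, 3, 1]) == sort_by_counting([5, 3, 8, 3, 1]) == [1, 3, 3, 5, 8]
```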
In connectionist AI, for instance, while the backpropagation algorithm is not an accurate model of human brain dynamics, it has been claimed that subsymbolic approaches such as backpropagation neural nets can emulate brain function significantly better than AI algorithms such as genetic programming or backward-chaining inference (Rumelhart, McClelland, & PDP Research Group 1986; Mareschal & Johnson 2002). Explicitly logic- and expert-rule-based approaches to AI such as Cyc (Lenat & Guha 1990), SOAR (Newell 1990), and ACT-R (Anderson 1993) also attempt to map between humans and AIs at the level of representations and algorithms, but are based on an essentially different notion of what the important representations and algorithms for human cognition are. If one believes that the human brain fundamentally represents knowledge as logical propositions and understands the world and itself via logical inference, then this sort of AI system is the correct approach to take. To oversimplify a bit, we may say that advocates of these logic-based systems place the brain’s neuronal network at Marr’s implementation level, arguing that neurons are just the brain’s particular way of implementing logic, whereas advocates of neural net AI place it at the level of representations and algorithms.

Our own view is distinct from all of these. Like the designers of Cyc and SOAR, and theorists such as Pei Wang (1995) and Marcus Hutter (2005), our interest is in artificial general intelligence (AGI): strong AI systems that confront the world autonomously, learn their own representations for real-world situations, reflect on themselves, and solve a variety of complex problems. Furthermore, we believe that the human mind/brain contains valuable clues for making AGI work under conditions of limited computational resources. However, we believe that, given the current highly incomplete state of neuroscience, the correct way to map between AI and human cognition is at Marr’s level of computational theory. We are neutral as to how directly the brain’s neuronal network structure and dynamics relate to its cognitive representations and algorithms, and also as to how closely the brain’s knowledge representation resembles formal logic and how closely its dynamics resemble logical inference. These are very important questions, but neuroscience has not yet progressed far enough to give us the answers. No one knows how an abstract proposition like “Every boy has a dog that every girl likes to call by a special name” is represented, manipulated, or learned through experience in the human brain, and until we know this, we won’t know the extent to which the conceptual premises of the most popular neural-net or logic-based approaches to AI are correct.

The best approach to strong AI at present, we suggest, is to learn what we can from the brain about what sort of high-level architecture, general dynamics, and representations are useful for achieving general intelligence under conditions of limited computational resources, and then to fill in the algorithmic level with representations and algorithms that make sense in terms of the mathematics, computer science, and computer hardware and software that we know. This attitude leads into an integrative approach to AI, in which one takes a general architecture loosely inspired by human cognition and then uses it to bind together components drawn from various areas of mathematics and computer science.
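To see what is at stake in the proposition example above, consider one possible first-order rendering; the formalization is purely illustrative (ours), and whether the brain manipulates anything remotely like it is precisely the open question:

\[
\forall b\, \Big( \mathrm{Boy}(b) \rightarrow \exists d\, \big( \mathrm{Dog}(d) \wedge \mathrm{Has}(b, d) \wedge \forall g\, ( \mathrm{Girl}(g) \rightarrow \exists n\, ( \mathrm{SpecialName}(n) \wedge \mathrm{CallsBy}(g, d, n) ) ) \big) \Big)
\]

Note that even this rendering fixes only one of several possible quantifier scopings (one special name per girl, rather than a single name shared by all girls), so the representational question is underdetermined before neural encoding is even considered.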
Importantly, however, this integrative attitude also leads to an approach that differs from classic “multi-agent” AI paradigms like the Society of Mind (Minsky 1986), because one of the high-level lessons we draw from contemporary neuroscience is that the brain is not a society, but rather a tightly interlinked collection of components exquisitely tuned for real-time interaction, with much more intense feedback and interplay than exists between the relatively separate individuals that form the parts of a society.

The Value of Current Cognitive Science to AI

We now offer a few more comments on the general value that present-day cognitive science offers AI. Prior to the last few decades, the traditional disciplines of psychology and neuroscience offered relatively little guidance to would-be AGI designers. The state of the art has improved massively since then, with the emergence of cognitive science and cognitive neuroscience. It is, however, not yet sufficient to give detailed prescriptions for the construction of AGI systems. Still, we believe, these bodies of knowledge can provide substantial inspiration for AGI design.

Cognitive science provides very clear advice at Marr’s computational level, regarding what the overall conceptual architecture of an AGI system should be like (in terms of the tasks solved by its various subcomponents), if it is going to cognize in a manner even vaguely resembling that of human beings. We know what the major regions of the brain do, and we also have a basic decomposition of human cognition into a list of interacting yet significantly distinct faculties. Great strides have also been made in understanding the implementation level of individual neurons, extending to what might be termed the “lower representational level” (cf. Eliasmith & Anderson 2003).

On the other hand, while cognitive science also provides a variety of suggestions regarding concrete mechanisms for carrying out tasks such as perception, learning, and memory, the take-home message here is more clouded, for two reasons. First, there is often no general consensus on the correctness of these mechanisms. Second, it is not always clear that emulating human psychological or neural behavior is a practical approach to implementing intelligence on radically un-brain-like contemporary hardware.

Fundamentally, modern cognitive science has recognized that there is more to human cognition than its high-level architecture and low-level mechanisms. However, the cognitive sciences to date have had relatively little to say about the crucial intermediate level of intelligence, corresponding to the “algorithmic and upper representational level” in Marr’s scheme. As Marvin Minsky recently proclaimed, “neuroscience has no theory for the middle level” (Hedberg 2005). This is the main reason that there is, as yet, no extremely persuasive prescriptive guidance for AGI designers from cognitive science. We know which primary components a mind should have, and some low-level mechanisms that can facilitate them. At the same time, little is known about how the different parts fit together.
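Filling in this middle level, as argued above, means choosing algorithms for their suitability to digital hardware rather than their biological fidelity. As one minimal sketch in that spirit, the following is a bare-bones univariate estimation of distribution algorithm of the general family mentioned in the abstract; the toy objective (maximizing the number of ones in a bit string) and all parameter values are illustrative assumptions of ours, and the procedure-learning systems referenced in the abstract use considerably more sophisticated variants.

```python
import random

def estimation_of_distribution(bits=20, pop_size=50, elite=10, generations=40):
    """Minimal univariate EDA on a toy "maximize the ones" objective.

    Instead of mutating and recombining individuals as a classical genetic
    algorithm would, an EDA fits a probability model to the best solutions
    found so far and samples the next population from that model.
    """
    probs = [0.5] * bits  # uninformative initial model: each bit is 1 w.p. 0.5

    def fitness(x):
        return sum(x)  # toy objective: number of ones in the bit string

    population = []
    for _ in range(generations):
        # Sample a population from the current distribution model.
        population = [[int(random.random() < p) for p in probs]
                      for _ in range(pop_size)]
        # Keep the elite: the highest-fitness samples.
        population.sort(key=fitness, reverse=True)
        best = population[:elite]
        # Re-estimate the model as the elite's per-bit marginal frequencies.
        probs = [sum(x[i] for x in best) / elite for i in range(bits)]

    return max(population, key=fitness)

print(estimation_of_distribution())  # typically converges to the all-ones string
```

The point of the sketch is the loop structure: fitting a probability model to the best solutions and sampling the next population from it, rather than mutating and recombining individuals as a classical genetic algorithm would.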