2009 AGI Summer School

List of Lecture Topics and Background Readings



Each session is 1.5 hours.  Each lecture is 1 session unless otherwise specified.


Many lectures specify readings along with them.  Some readings are explicitly marked as optional, mostly not because they are peripheral but simply because they are long.  The reading list given here is partial and will be completed as the Summer School approaches.


Note that a number of lectures touch on the theory of Probabilistic Logic Networks, and so the book Probabilistic Logic Networks: A Comprehensive Framework for Uncertain Inference (Goertzel, Iklé, Goertzel and Heljakka, 2008) will be made available to registered students.  But this will be considered optional reading.


Lectures are grouped here by the lecturer.  See the summer school schedule for information on the temporal ordering of the lectures.



Dr. Joscha Bach





Man as Machine

The traditional (Western) philosophy of mind has long had difficulty understanding the human mind in the context of the natural sciences; instead, a subjectivist, hermeneutical perspective is preferred. Here, we will discuss how the mind can be studied as an information-processing system; we will look at different approaches and focus on the appropriate level of abstraction. We will see if and how Artificial Intelligence can provide us with a deeper understanding of human cognition, and we will confront several of the counter-arguments.


Understanding motivation, emotion and mental representation using computational models (2 sessions)

Cognition goes beyond problem solving. We will look at the question of how a cognitive agent arrives at problems to solve, i.e., we will ask: how does motivation work? How are motivation and emotion related? And how can mental representations refer to real-world events and objects? These questions are fundamental to understanding and realizing an artificial intelligent system, and we will discuss how we can build computational systems capable of these feats.


The MicroPSI architecture

MicroPSI is a cognitive architecture intended to model the integration of reasoning, perception, memory and motivation. It is implemented in Java and uses a neurosymbolic representation for computational agents, which can either be situated in simulation environments or used to control robots.


Dr. Allan Combs





The Brain as a Neuroscientist Sees It (2 sessions)

An introduction to how cognitive neuroscientists think about intelligence and consciousness in the brain, including an introduction to brain anatomy as well as cellular level neurodynamics and their implications for the nature of intelligence.


The Nature of Consciousness

A multidisciplinary review of the phenomenon of consciousness, covering neuroscience, philosophy, AI, nonlinear dynamics and other perspectives.


Nonlinear Dynamics and the Mind

An overview of nonlinear dynamics and chaos and their relation to brain science and the nature of intelligence.  Discussion of implications for artificial intelligence.


Dr. Hugo de Garis


Evolvable Neural Networks

Application of neural net evolution to learning components of intelligent systems, including object identification, face recognition, movement learning and other abilities.


Humanoid Robotics for AGI

A review of the Nao humanoid robotics platform and its use for AGI.


Dr. Nil Geisweiller




For probabilistic logic lectures (materials to be supplied to students in advance):




Introduction to Probabilistic Logic Networks (with Ben Goertzel)

The basic principles of the Probabilistic Logic Networks reasoning framework will be presented.  The “indefinite probabilities” framework will be briefly reviewed.  The basic PLN inference rules will be described, including rules for first-order and higher-order inference.
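To give a flavor of the material, PLN's first-order deduction rule rests, under independence assumptions, on the standard total-probability identity.  The sketch below is an illustration only, not the lecture's code: real PLN truth values also carry confidence (or indefinite-probability intervals), which this strength-only function ignores.

```python
def deduction_strength(sAB, sBC, sB, sC):
    """Independence-based deduction: estimate the strength of A->C
    given P(B|A)=sAB, P(C|B)=sBC and the term probabilities
    P(B)=sB, P(C)=sC.  Uses the total-probability identity
        P(C|A) = P(C|B)P(B|A) + P(C|~B)(1 - P(B|A)),
    recovering P(C|~B) from P(C) = P(C|B)P(B) + P(C|~B)(1 - P(B))."""
    sC_given_notB = (sC - sB * sBC) / (1.0 - sB)
    return sAB * sBC + (1.0 - sAB) * sC_given_notB
```

For example, with sB = 0.5, sC = 0.6, sAB = 0.8 and sBC = 0.9, the inferred strength of A->C is 0.8 * 0.9 + 0.2 * 0.3 = 0.78.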


Probabilistic Logic: Spatial, Temporal and Intensional Inference

We will show how to utilize existing spatial and temporal reasoning methods, such as the Region Connection Calculus and Allen's Interval Algebra, to carry out spatial and temporal inference in Probabilistic Logic Networks and other probabilistic logic systems.   We will explain how intensional inheritance (inheritance in terms of patterns and properties rather than subsets) works in the Probabilistic Logic Networks framework, and give a few supportive examples.
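As background for the temporal-inference portion: Allen's Interval Algebra distinguishes thirteen possible relations between two time intervals.  A minimal sketch (illustrative only, not part of the lecture materials), representing each interval as a (start, end) pair with start < end:

```python
def allen_relation(a, b):
    """Return the Allen relation holding from interval a to interval b.
    Each interval is a (start, end) pair with start < end."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:  return "before"
    if a2 == b1: return "meets"
    if a1 == b1 and a2 == b2: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    if a1 < b1 < a2 < b2:   return "overlaps"
    if b1 < a1 < b2 < a2:   return "overlapped-by"
    if b2 < a1:  return "after"
    if b2 == a1: return "met-by"
```

Temporal inference then amounts to composing such relations: knowing how A relates to B and B to C constrains how A can relate to C, and PLN attaches probabilities to those constraints.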





Program Representation for General Intelligence (with Ben Goertzel)

We will discuss the representation of programs in a manner compatible with automated program learning, including issues regarding the transformation of specialized and general programs into hierarchical normal forms in which syntactic and semantic properties are well-correlated.



Competent Program Evolution Using Probabilistic Modeling (The MOSES Algorithm)

We will explain the components of MOSES and how they work: demes, normalization, probabilistic modeling and representation building. We will present and explain various examples, and possibly conclude with some potential variations and improvements.




Imitation and Reinforcement Learning in Virtually Embodied Agents Using Program Evolution


The use of hillclimbing and MOSES to learn programs controlling virtual agents in online virtual worlds is described.  Specific examples are given involving the learning of programs controlling pets in the Multiverse and OpenSim virtual worlds.





Dr. Ben Goertzel


AGI versus Narrow AI

A review of the history of the AI field, and the foundational theory of intelligence, culminating in the clarification of the distinction between Artificial General Intelligence and task-focused “narrow AI.”  Overview of current software systems aimed at AGI, including OpenCogPrime, NARS, SOAR, LIDA and ACT-R.


Review of Past and Present AGI Research, at http://www.agi-08.org/conference/ (optional)



Mathematics of Universal and General Intelligence (1/2 session)

This lecture briefly reviews universal machine learning theories, including Solomonoff's Algorithmic Probability Theory, Hutter's AIXI model and Schmidhuber's Gödel Machine.  The relevance of these theories to pragmatic general intelligence is also covered.
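For orientation, the central object in Solomonoff's theory (stated here from the standard literature, not from the lecture slides) is the universal a priori probability of a finite binary string x:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

where U is a universal prefix machine, the sum ranges over all programs p whose output begins with x, and ℓ(p) is the length of p in bits.  Shorter programs dominate the sum, giving a formal version of Occam's razor; M is incomputable, which is why the lecture's question of relevance to pragmatic general intelligence arises.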




The OpenCog Prime Design for AGI (1/2 session)

Overview of OpenCogPrime, a specific architecture for general intelligence, incorporating PLN probabilistic inference, MOSES evolutionary program learning, ECAN economic attention allocation and other aspects, and designed for implementation within the integrative OpenCog framework.




Natural Language Processing for AGI

A review of current methods in computational natural language understanding and generation, how they relate to human language processing, and current ideas regarding how they might be modified in order to achieve generally-intelligent language processing.  Hands-on examples will be given using the RelEx and link-parser NLP systems.




Customizing Virtual Worlds for AGI (1/2 session)

Review of current virtual world technology, with a focus on the improvements that must be made to it in order to make it truly adequate for AGI development, including integration with robot simulators and expansion of physics engines to include bead physics.





Dr. Matthew Ikle


Managing Uncertainty with Indefinite Probabilities

A review of methods for managing uncertainty in AI systems, including fuzzy set theory and logic, possibility theory, traditional probability theory, and (the main focus) imprecise probabilities and indefinite probabilities.




Dr. Joel Pitt



The OpenCog Software Framework (with Ben Goertzel) (2 sessions)

OpenCog aims to provide research scientists and software developers with a common platform to build and share artificial intelligence programs. The long-term goal of OpenCog is acceleration of the development of beneficial AGI, a goal which includes developing tools and protocols for AGI safety.  This lecture will describe the OpenCog software framework from a theoretical perspective, and also provide hands-on guidance to working with the code.




Attractor Neural Networks and Economic Attention Networks

Attractor neural networks, such as the Hopfield net, are able to store memories as attractors of neuron activation patterns. The theory of the Hopfield net, including several variations that allow continuous learning and keyed retrieval, will be briefly described.
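The attractor idea can be shown in a few lines.  The following sketch (an illustration for orientation, not the lecture's code) stores +/-1 patterns with the Hebbian outer-product rule and retrieves them by iterating s <- sign(W s) until a fixed point, i.e. an attractor, is reached:

```python
import numpy as np

def train(patterns):
    """Hebbian weight matrix for a list of +/-1 pattern vectors."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W / len(patterns)

def recall(W, state, steps=20):
    """Synchronously update the state until it stops changing."""
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        new = np.where(W @ s >= 0, 1.0, -1.0)
        if np.array_equal(new, s):    # fixed point: an attractor
            break
        s = new
    return s
```

Starting from a corrupted version of a stored pattern, the dynamics fall back into the nearest attractor, which is exactly the content-addressable-memory behaviour the lecture discusses.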


The remainder of the lecture will introduce the idea of economic attention networks, exploring how the focus of attention of an intelligent system can be controlled using conserved quantities that are subjected to economic mechanisms such as tax, rent, and wages. How ECAN is implemented within OpenCog as a number of cooperating MindAgents will be explained. An example of ECAN within OpenCog that emulates the behaviour of a Hopfield net will also be introduced, along with a discussion of “glocal” memories.
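The conservation idea at the heart of ECAN can be sketched very simply.  The toy cycle below is a deliberately simplified illustration, not the OpenCog implementation: each atom holds short-term-importance (STI) funds, a central bank collects rent from every atom each cycle and pays wages to the atoms that were just used, so the total amount of STI in the system never changes.

```python
RENT = 1   # STI collected from each atom per cycle (toy value)
WAGE = 5   # STI paid to each atom used this cycle (toy value)

def cycle(atoms, bank, used):
    """One toy attention cycle.  `atoms` maps atom name -> STI funds,
    `bank` is the central reserve, `used` lists atoms just utilized.
    Returns the new bank balance; `atoms` is updated in place."""
    for name in atoms:                    # collect rent
        paid = min(RENT, atoms[name])    # atoms cannot go negative
        atoms[name] -= paid
        bank += paid
    for name in used:                     # pay wages to useful atoms
        paid = min(WAGE, bank)
        bank -= paid
        atoms[name] += paid
    return bank
```

Because STI only moves between atoms and the bank, frequently used atoms accumulate importance while idle ones are slowly taxed down, yielding an attention focus without any unbounded growth of activation.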






Dr. Pei Wang




Approaches to Defining and Evaluating General Intelligence (1 session)

This talk will introduce the major approaches to building general-purpose AI systems, compare them with human intelligence, analyze their theoretical assumptions, and evaluate their potential and limitations.



A Logical Model of Intelligence (3 sessions)

NARS (Non-Axiomatic Reasoning System) is designed to serve as the core of general-purpose intelligent systems. It is built according to the belief that "intelligence" is the capability of a system to adapt to its environment while working with insufficient knowledge and resources. This talk will introduce the major components of NARS and discuss their properties.