2009 AGI Summer School

List of Lecture Topics and Background Readings

 

 

Each session is 1.5 hours.  Each lecture is 1 session unless otherwise specified.

 

Many lectures specify readings along with them.  Some readings are explicitly specified as optional, mostly not because they're peripheral but just because they're long.  The readings list given here is very partial and will be completed as the Summer School approaches.

 

Note that a number of lectures touch on the theory of Probabilistic Logic Networks, and so the book

 

 

will be made available to registered students.  But this will be considered optional reading.

 

Lectures are grouped here by the lecturer.  See the summer school schedule for information on the temporal ordering of the lectures.

 

 

Dr. Joscha Bach

 

Readings

TBA

 

Man as Machine

Traditional (western) philosophy of mind has difficulty understanding the human mind in the context of the natural sciences; instead, a subjectivist, hermeneutical perspective is preferred. Here, we will discuss how the mind can be studied as an information-processing system; we will look at different approaches and focus on the appropriate level of abstraction. We will see if and how Artificial Intelligence can provide us with a deeper understanding of human cognition, and we will confront several of the counter-arguments.

 

Understanding Motivation, Emotion and Mental Representation Using Computational Models (2 sessions)

Cognition goes beyond problem solving. We will look at the question of how a cognitive agent arrives at problems to solve, i.e., we will ask: how does motivation work? How are motivation and emotion related? And how can mental representations refer to real-world events and objects? These questions are fundamental to understanding and realizing an artificially intelligent system, and we will discuss how we can build computational systems capable of these feats.

 

The MicroPSI Architecture

MicroPSI is a cognitive architecture intended to model the integration of reasoning, perception, memory and motivation. It is implemented in Java and uses a neuro-symbolic representation for computational agents that can either be situated in simulation environments or used to control robots.
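To give a flavour of what a neuro-symbolic node net computes, here is a deliberately tiny spreading-activation sketch in Python. The function name and data representation are invented for illustration; MicroPSI itself is a full Java framework with far richer node and link types:

```python
def spread_activation(links, activation, steps=1):
    """Propagate activation over a weighted node net for a number of
    steps -- a simplified flavour of node-net computation.
    links: dict mapping (source, target) -> weight.
    activation: dict mapping node -> current activation level."""
    for _ in range(steps):
        nxt = {node: 0.0 for node in activation}
        for (src, dst), weight in links.items():
            # each link forwards its source's activation, scaled by weight
            nxt[dst] += activation[src] * weight
        activation = nxt
    return activation
```

For example, a single link from node "a" to node "b" with weight 0.5 moves half of "a"'s activation onto "b" in one step.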

 

Dr. Allan Combs

 

Readings

TBA

 

The Brain as a Neuroscientist Sees It (2 sessions)

An introduction to how cognitive neuroscientists think about intelligence and consciousness in the brain, including an introduction to brain anatomy as well as cellular-level neurodynamics and their implications for the nature of intelligence.

 

The Nature of Consciousness

A multidisciplinary review of the phenomenon of consciousness, covering neuroscience, philosophy, AI, nonlinear dynamics and other perspectives.

 

Nonlinear Dynamics and the Mind

An overview of nonlinear dynamics and chaos and their relation to brain science and the nature of intelligence.  Discussion of implications for artificial intelligence.

 

Dr. Hugo de Garis

 

Evolvable Neural Networks

Application of neural net evolution to learning components of intelligent systems, including object identification, face recognition, movement learning and other abilities.
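As a minimal illustration of evolving network weights, the following Python sketch hill-climbs the nine weights of a 2-2-1 sigmoid network on the XOR task. It is an invented toy (the function name, mutation scheme and parameters are ours, not the lecturer's system, which applies genuine evolutionary algorithms to much larger networks):

```python
import math
import random

def evolve_xor(steps=3000, seed=2):
    """Stochastic hill climbing over the 9 weights of a 2-2-1 sigmoid
    network on XOR.  Returns (initial_error, final_error)."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    cases = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

    def forward(w, x):
        h1 = sig(w[0] * x[0] + w[1] * x[1] + w[2])  # hidden unit 1
        h2 = sig(w[3] * x[0] + w[4] * x[1] + w[5])  # hidden unit 2
        return sig(w[6] * h1 + w[7] * h2 + w[8])    # output unit

    def error(w):
        return sum((forward(w, x) - y) ** 2 for x, y in cases)

    w = [rng.uniform(-1, 1) for _ in range(9)]
    e0 = e = error(w)
    for _ in range(steps):
        cand = [wi + rng.gauss(0, 0.4) for wi in w]  # mutate all weights
        ec = error(cand)
        if ec < e:                                   # keep only improvements
            w, e = cand, ec
    return e0, e
```

Since only improving mutations are accepted, the final error never exceeds the initial one; population-based evolution replaces the single candidate here with many, plus crossover and selection.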

 

Humanoid Robotics for AGI

A review of the Nao humanoid robotics platform and its use for AGI.

 

Dr. Nil Geisweiller

 

Readings:

 

For probabilistic logic lectures (materials to be supplied to students in advance):

 

 

 

Introduction to Probabilistic Logic Networks (with Ben Goertzel)

The basic principles of the Probabilistic Logic Networks reasoning framework will be presented.  The "indefinite probabilities" framework will be briefly reviewed.  The basic PLN inference rules will be described, including rules for first-order and higher-order inference.
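As a taste of a first-order PLN rule, the sketch below implements the independence-based deduction strength formula as it appears in the PLN literature. Confidence propagation and the rule's consistency conditions are omitted, and the function name is ours:

```python
def pln_deduction(s_ab, s_bc, s_b, s_c):
    """PLN first-order deduction strength for A->C given A->B and B->C,
    with term probabilities s_b and s_c, under an independence
    assumption.  A sketch: the full rule also propagates confidence
    and checks consistency of the premises."""
    if s_b >= 1.0:
        return s_c  # degenerate case: B covers the whole universe
    return s_ab * s_bc + (1.0 - s_ab) * (s_c - s_b * s_bc) / (1.0 - s_b)
```

For instance, with perfectly strong premises (s_ab = s_bc = 1) the conclusion strength is 1 regardless of the term probabilities.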

 

Probabilistic Logic: Spatial, Temporal and Intensional Inference

We will show how to utilize existing spatial and temporal reasoning methods, such as the Region Connection Calculus and Allen's Interval Algebra, to carry out spatial and temporal inference in Probabilistic Logic Networks and other probabilistic logic systems.  We will explain how intensional inheritance (inheritance in terms of patterns and properties rather than subsets) works in the Probabilistic Logic Networks framework, and give a few supportive examples.
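For concreteness, Allen's thirteen interval relations can be computed from interval endpoints with a handful of comparisons. The sketch below (function and relation names chosen by us for illustration) classifies a pair of intervals:

```python
def allen_relation(a, b):
    """Classify the Allen relation between intervals a = (a0, a1) and
    b = (b0, b1), each with start < end.  Returns one of the 13
    relation names."""
    a0, a1 = a
    b0, b1 = b
    if a1 < b0:  return "before"
    if b1 < a0:  return "after"
    if a1 == b0: return "meets"
    if b1 == a0: return "met-by"
    if a0 == b0 and a1 == b1: return "equals"
    if a0 == b0: return "starts" if a1 < b1 else "started-by"
    if a1 == b1: return "finishes" if a0 > b0 else "finished-by"
    if b0 < a0 and a1 < b1: return "during"
    if a0 < b0 and b1 < a1: return "contains"
    # remaining case: proper partial overlap
    return "overlaps" if a0 < b0 else "overlapped-by"
```

In a probabilistic setting one would attach truth values to such relations rather than treat them as crisp facts, which is exactly the move the lecture discusses.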

 

Readings

 

 

Program Representation for General Intelligence (with Ben Goertzel)

We will discuss the representation of programs in a manner compatible with automated program learning, including issues regarding the transformation of specialized and general programs into hierarchical normal forms in which syntactic and semantic properties are well-correlated.

Readings

 

Competent Program Evolution Using Probabilistic-Modeling-Based Evolution (The MOSES Algorithm)

We will explain the different parts that compose MOSES and how they work: demes, normalization, probabilistic modeling and representation building. We will show different examples and explain them, and possibly conclude by presenting some potential variations and improvements.
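The "probabilistic modeling" step can be illustrated with a univariate estimation-of-distribution algorithm on a toy bit-string problem. This is not MOSES itself, which models structures inside program subtrees within demes, but it shows the sample/select/re-estimate loop in miniature (all names and parameters here are invented):

```python
import random

def eda_onemax(n_bits=20, pop=60, elite=15, iters=80, seed=1):
    """Toy univariate estimation-of-distribution algorithm on OneMax
    (fitness = number of ones).  Returns the best fitness found in the
    final sampled population."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits                      # model: per-bit marginals
    for _ in range(iters):
        # 1. sample a population from the current probabilistic model
        population = [[1 if rng.random() < p else 0 for p in probs]
                      for _ in range(pop)]
        # 2. select the fittest individuals
        population.sort(key=sum, reverse=True)
        best = population[:elite]
        # 3. re-estimate the model from the selected sample
        probs = [sum(ind[i] for ind in best) / elite for i in range(n_bits)]
    return max(sum(ind) for ind in population)
```

The model here is a product of independent Bernoullis; MOSES's representation-building step exists precisely because programs demand a more structured model than raw bit-string marginals.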

Readings

 

 

Imitation and Reinforcement Learning in Virtually Embodied Agents Using Program Evolution

 

The use of hill climbing and MOSES to learn programs controlling virtual agents in online virtual worlds is described.  Specific examples are given involving the learning of programs controlling pets in the Multiverse and OpenSim virtual worlds.

Readings

 

 

 

Dr. Ben Goertzel

 

AGI versus Narrow AI

A review of the history of the AI field, and the foundational theory of intelligence, culminating in the clarification of the distinction between Artificial General Intelligence and task-focused "narrow AI."  Overview of current software systems aimed at AGI, including OpenCogPrime, NARS, SOAR, LIDA and ACT-R.

Readings

Review of Past and Present AGI Research, at http://www.agi-08.org/conference/ (optional)

 

 

Mathematics of Universal and General Intelligence (1/2 session)

This lecture briefly reviews universal machine learning theories, including Solomonoff's Algorithmic Probability Theory, Hutter's AIXI algorithm and Schmidhuber's Gödel Machine.  The relevance of these theories to pragmatic general intelligence is also covered.
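The central object in Solomonoff's theory is the universal prior, which in a standard formulation (U a universal prefix machine, \ell(p) the length of program p, x* any continuation of the string x) reads:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) \,=\, x*} 2^{-\ell(p)}
```

Shorter programs that reproduce the observed sequence dominate the sum, giving a formal version of Occam's razor; AIXI couples this predictor with expectimax action selection over future rewards.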

Readings

 

 

The OpenCog Prime Design for AGI (1/2 session)

Overview of OpenCogPrime, a specific architecture for general intelligence, incorporating PLN probabilistic inference, MOSES evolutionary program learning, ECAN economic attention allocation and other aspects, and designed for implementation within the integrative OpenCog framework.

Readings:

 

 

Natural LanguageProcessing for AGI

A review of current methods in computational natural language understanding and generation, how they relate to human language processing, and current ideas regarding how they might be modified in order to achieve generally intelligent language processing.  Hands-on examples will be given using the RelEx and link-parser NLP systems.

Readings:

 

 

Customizing Virtual Worlds for AGI (1/2 session)

Review of current virtual world technology, with a focus on the improvements that must be made to it in order to make it truly adequate for AGI development, including integration with robot simulators and expansion of physics engines to include bead physics.

Readings:

 

 

 

Dr. Matthew Ikle

 

Managing Uncertainty with Indefinite Probabilities

A review of methods for managing uncertainty in AI systems, including fuzzy set theory and logic, possibility theory, traditional probability theory, and (the main focus) imprecise probabilities and indefinite probabilities.
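An indefinite probability attaches a credibility level b to an interval [L, U] of probability values, backed by a second-order distribution. The stdlib-only sketch below approximates that flavour with a Monte-Carlo credible interval drawn from a Beta posterior; it is a simplified stand-in for the actual indefinite-probability machinery, and the function name is invented:

```python
import random

def credible_interval(successes, trials, b=0.9, samples=20000, seed=0):
    """Monte-Carlo [L, U] interval that contains the underlying
    probability with credibility b, using a Beta(s+1, f+1) posterior
    over the unknown probability (a second-order distribution)."""
    rng = random.Random(seed)
    draws = sorted(rng.betavariate(successes + 1, trials - successes + 1)
                   for _ in range(samples))
    lo = draws[int(samples * (1 - b) / 2)]        # lower quantile
    hi = draws[int(samples * (1 + b) / 2) - 1]    # upper quantile
    return lo, hi
```

With few observations the interval is wide; as evidence accumulates it tightens around the observed frequency, which is the qualitative behaviour indefinite probabilities are designed to capture.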

Readings

 

 

Dr. Joel Pitt

 

 

The OpenCog Software Framework (with Ben Goertzel) (2 sessions)

OpenCog aims to provide research scientists and software developers with a common platform to build and share artificial intelligence programs. The long-term goal of OpenCog is acceleration of the development of beneficial AGI, a goal which includes developing tools and protocols for AGI safety.  This lecture will describe the OpenCog software framework from a theoretical perspective, and also provide hands-on guidance to working with the code.

Readings

 

 

Attractor Neural Networks and Economic Attention Networks

Attractor neural networks, such as the Hopfield net, are able to store memories as attractors of neuron activation patterns.  The theory of the Hopfield net, including several variations that allow continuous learning and keyed retrieval, will be briefly described.
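A Hopfield net can be sketched in a few lines: Hebbian outer-product weights, then repeated threshold updates that pull a noisy probe toward a stored pattern. The minimal Python version below (synchronous updates; names invented for illustration) shows the attractor idea:

```python
def hopfield_recall(patterns, probe, steps=20):
    """Recall in a tiny Hopfield net with Hebbian weights and
    synchronous threshold updates.  patterns and probe are lists of
    +1/-1 values of equal length."""
    n = len(probe)
    # Hebbian outer-product learning, zero self-connections
    w = [[0.0 if i == j else sum(p[i] * p[j] for p in patterns)
          for j in range(n)] for i in range(n)]
    state = list(probe)
    for _ in range(steps):
        # each neuron takes the sign of its weighted input field
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state
```

Probing with a stored pattern corrupted in one position recovers the original in a single update step; the stored pattern sits at the bottom of an energy basin that nearby states fall into.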

 

The remainder of the lecture will introduce the idea of economic attention networks, exploring how the focus of attention of an intelligent system can be controlled using conserved quantities that are subjected to economic mechanisms such as tax, rent, and wages. How ECAN is implemented within OpenCog as a number of cooperating MindAgents will be explained. An example of ECAN within OpenCog that emulates the behaviour of a Hopfield net will also be introduced, along with a discussion of "glocal" memories.
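The rent-and-wages idea can be shown with a toy conserved currency: every atom pays proportional rent into a central fund, and the fund is paid back out as wages to the atoms that proved useful this cycle. This is an invented miniature (step_economy is not an OpenCog API), but it preserves the conservation property that makes the economic metaphor work:

```python
def step_economy(sti, stimulated, rent_rate=0.1):
    """One cycle of a toy attention economy.  sti maps atom -> short-term
    importance; stimulated is the set of atoms judged useful this cycle.
    Total STI is conserved: rent collected equals wages paid out."""
    fund = 0.0
    for atom in sti:
        rent = sti[atom] * rent_rate   # everyone pays proportional rent
        sti[atom] -= rent
        fund += rent
    if stimulated:
        payout = fund / len(stimulated)
        for atom in stimulated:        # useful atoms earn wages
            sti[atom] += payout
    else:
        share = fund / len(sti)        # nothing useful: refund evenly
        for atom in sti:
            sti[atom] += share
    return sti
```

Repeated cycles drain importance from idle atoms toward active ones without the total ever inflating, which is the budgeting behaviour the lecture's conserved quantities provide.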

 

Readings

 

 

 

Dr. Pei Wang

 

Readings:

 

Approaches to Defining and Evaluating General Intelligence (1 session)

This talk will introduce the major approaches to building general-purpose AI systems, compare them with human intelligence, analyze their theoretical assumptions, and evaluate their potential and limitations.

Readings

 

A Logical Model of Intelligence (3 sessions)

NARS (Non-Axiomatic Reasoning System) is designed to serve as the core of general-purpose intelligent systems. It is built according to the belief that "intelligence" is the capability of a system to adapt to its environment while working with insufficient knowledge and resources. This three-session talk will introduce the major components of NARS and discuss their properties.
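A core ingredient of NARS is its experience-grounded truth value: a (frequency, confidence) pair derived from amounts of evidence. The sketch below follows the formulas given in Wang's publications, with evidential horizon k = 1, plus the deduction truth function; treat it as illustrative rather than definitive:

```python
K = 1.0  # evidential horizon parameter (commonly 1 in the NARS literature)

def truth_from_evidence(w_plus, w):
    """NARS-style (frequency, confidence) from positive evidence w_plus
    and total evidence w: f = w+/w, c = w/(w+K)."""
    frequency = w_plus / w if w > 0 else 0.5
    confidence = w / (w + K)
    return frequency, confidence

def deduction(f1, c1, f2, c2):
    """NARS deduction truth function for chaining two inheritance
    judgments: f = f1*f2, c = f1*f2*c1*c2 (a sketch of one of many
    NARS truth functions)."""
    f = f1 * f2
    return f, f * c1 * c2
```

Note how confidence never reaches 1 no matter how much evidence accumulates, which is how NARS encodes its commitment to working under insufficient knowledge.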

Readings