DeepMind, Go, and Hybrid AI Architectures

I like Gary Marcus’s article on DeepMind and their awesome new Go-playing AI, AlphaGo, which has shown it can play at a top-notch level (not yet beating the world champion, but beating the European champion, which is a lot better than I can do…).

One good point Gary makes in his article is that, while DeepMind’s achievement is being trumpeted in the media as a triumph of “deep learning”, their approach to Go in fact integrates deep learning with another AI method, Monte Carlo tree search. That is, it’s a hybrid approach, one that tightly integrates two different AI algorithms in a context-appropriate way.
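To make the hybrid idea concrete: roughly speaking, a learned policy network suggests which moves are worth considering, and a tree search uses those suggestions as priors to decide where to spend its simulations, then picks the move its statistics favor. Below is a minimal toy sketch of that division of labor in Python. It is emphatically not DeepMind’s code: the game is a trivial Nim variant rather than Go, and dummy_policy is a made-up stand-in for a trained deep network.

```python
# Toy illustration of "learned policy + tree search" (not DeepMind's code).
# The game and dummy_policy are placeholders invented for this sketch.

import math
import random


def legal_moves(stones):
    """Toy game: take 1 or 2 stones; whoever takes the last stone wins."""
    return [m for m in (1, 2) if m <= stones]


def dummy_policy(stones):
    """Stand-in for a trained policy network: uniform priors over legal moves."""
    moves = legal_moves(stones)
    return {m: 1.0 / len(moves) for m in moves}


class Node:
    def __init__(self, stones, prior):
        self.stones = stones      # remaining stones (the game state)
        self.prior = prior        # prior probability from the "policy network"
        self.visits = 0
        self.value_sum = 0.0      # value from the viewpoint of the player who moved into this node
        self.children = {}        # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0


def select_child(node, c_puct=1.4):
    """PUCT-style rule: exploit high-value children, explore moves the policy likes."""
    total = sum(ch.visits for ch in node.children.values())

    def score(item):
        _, ch = item
        u = c_puct * ch.prior * math.sqrt(total + 1) / (1 + ch.visits)
        return ch.q() + u

    return max(node.children.items(), key=score)


def rollout(stones):
    """Random playout; returns +1 if the player currently to move wins, else -1."""
    to_move = +1
    while stones > 0:
        stones -= random.choice(legal_moves(stones))
        if stones == 0:
            return to_move
        to_move = -to_move
    return -to_move


def search(root_stones, simulations=500):
    root = Node(root_stones, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        # Selection: descend the tree using the policy-guided PUCT rule.
        while node.children:
            _, node = select_child(node)
            path.append(node)
        if node.stones == 0:
            value = 1.0           # the player who just moved took the last stone and won
        else:
            # Expansion: create children with priors from the (dummy) policy.
            for move, p in dummy_policy(node.stones).items():
                node.children[move] = Node(node.stones - move, p)
            # Evaluation by random playout, negated so it is from the viewpoint
            # of the player who moved into this node.
            value = -rollout(node.stones)
        # Backpropagation: the perspective flips at every level up the path.
        for n in reversed(path):
            n.visits += 1
            n.value_sum += value
            value = -value
    # Pick the most-visited root move, as AlphaGo-style searches typically do.
    return max(root.children.items(), key=lambda mc: mc[1].visits)[0]


if __name__ == "__main__":
    # With 7 stones the optimal move is to take 1 (leaving a multiple of 3).
    print("Suggested first move from 7 stones:", search(7))
```

The point of the sketch is just the shape of the collaboration: the learned priors focus the search on promising branches, and the search’s visit statistics, not the network alone, choose the move.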

OpenCog is a more complicated, richer hybrid approach, which incorporates deep learning along with a lot of other stuff… While Gary Marcus and I don’t agree on everything, we do seem to agree that an integrated approach, combining fairly heterogeneous learning algorithms/representations in a common framework, is likely to do better than any one golden algorithm…

Almost no one doubts that deep learning is part of the story of human-like cognition (that’s been known since the 1960s, actually)… What Gary and I (among others) doubt is that deep learning is 100% or 80% or 50% of the story; my guess is that it’s more like 15%…

Go is a very tough game, but in the end a strictly delimited domain. Handling the everyday human world, which is massively more complex than Go in so many ways, will require a much more sophisticated hybrid architecture. In OpenCog we have such an architecture. How much progress DeepMind is making toward such an architecture I don’t pretend to know, and their Go-playing software, good as it is at what it does, doesn’t really give any hints in this regard.