
DeepMind, Go, and Hybrid AI Architectures

I like Gary Marcus’s article on DeepMind and their recent, awesome Go-playing AI software, which showed the ability to play at a top-notch level (not yet beating the world champion, but beating the European champion, which is a lot better than I can do….)

One good point Gary makes in his article is that, while DeepMind’s achievement is being trumpeted in the media as a triumph of “deep learning”, in fact their approach to Go is based on integrating deep learning with other AI methods (game-tree search); i.e., it’s a hybrid approach, which tightly integrates two different AI algorithms in a context-appropriate way.
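To make the hybrid point concrete, here is a minimal sketch of the pattern: a depth-limited game-tree search that falls back on an evaluation function at the search frontier, which is the slot where AlphaGo consults a trained value network. The toy game (players alternately add 1 or 2; whoever reaches 10 first wins) and the hand-written `evaluate()` heuristic are illustrative assumptions, not DeepMind’s actual code.

```python
# Sketch: classical game-tree search (negamax) hybridized with a learned-style
# evaluation at the depth cutoff. The game and heuristic are toy stand-ins.

def moves(state):
    """Legal moves in a toy counting game: add 1 or 2; first to reach 10 wins."""
    return [m for m in (1, 2) if state + m <= 10]

def evaluate(state):
    """Stand-in for a trained value network: a cheap heuristic score for
    frontier positions, from the perspective of the player to move."""
    # Positions congruent to 1 mod 3 are theoretical losses for the mover.
    return -1.0 if state % 3 == 1 else 1.0

def negamax(state, depth):
    """Score `state` for the player to move; positive means they are winning."""
    if state == 10:
        return -1.0  # the previous player just reached 10 and won
    if depth == 0:
        return evaluate(state)  # defer to the "value network" at the frontier
    return max(-negamax(state + m, depth - 1) for m in moves(state))

def best_move(state, depth=4):
    return max(moves(state), key=lambda m: -negamax(state + m, depth - 1))

print(best_move(8))  # picks 2: jumping straight to 10 wins immediately
```

The point of the sketch is the division of labor: the search supplies exact short-horizon reasoning, while the evaluation function supplies (in the real system, learned) judgment about positions the search cannot afford to expand.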

OpenCog is a more complicated, richer hybrid approach, which incorporates deep learning along with a lot of other stuff….. While Gary Marcus and I don’t agree on everything, we do seem to agree that an integrated approach, combining fairly heterogeneous learning algorithms and representations in a common framework, is likely to do better than any one golden algorithm….

Almost no one doubts that deep learning is part of the story of human-like cognition (that’s been known since the 1960s, actually)….  What Gary and I (among others) doubt is that deep learning is 100% or 80% or 50% of the story… my guess is that it’s more like 15% of the story…

Go is a very tough game, but in the end a strictly delimited domain.   Handling the everyday human world, which is massively more complex than Go in so many ways, will require a much more sophisticated hybrid architecture.  In OpenCog we have such an architecture.  How much progress DeepMind is making toward such an architecture I don’t pretend to know, and their Go-playing software, good as it is at what it does, doesn’t really give any hints in this regard.


  1. Matthew Gattuso (NY) says:

    Hey Ben,

    From reading this article, I can see that DeepMind is very much still being used. I want to know if AGI has been figured out, or at least better understood, through “other means”; I’m sure you know what I’m talking about. If not, that is very much unfortunate. I guess part of the problem is understanding what triggers emotions, which, in turn, down the chain will influence behavior. I’m sure that one has been a real ball of wires to figure out. I think that IBM Watson has been a useful platform. I dunno. Others would know more. Anyway, you could write back on FB.

  2. zarzuelazen says:

    Totally agree, Ben. It’s not the individual techniques (reinforcement learning, supervised deep learning, Monte-Carlo tree search, etc.) that are making it so successful, it’s the way these techniques have been combined into a single integrated system!

    I’m going to repeat what I posted on ‘Overcoming Bias’ about this, because the system here is basically a 3-level architecture (it’s a hierarchical system that operates at 3 levels of abstraction).

    “The recent breakthrough of Google’s Go-playing AI, which has beaten the European Go champion, is further evidence that AGI is coming soon.

    Although the machine relies on some Go-specific tricks (for planning), there are also general-purpose techniques being deployed that would be relevant to AGI.

    First, the architecture consisted of a ‘3-level split’ in terms of inference at 3 different levels of abstraction. This confirms to me that just 3 levels of abstraction in inference are all that’s needed for AGI!

    1st level: Domain-specific knowledge (short-term tactics). Used a database of 30 million Go moves to train the ‘value network’ of best moves. General-purpose deep learning.

    2nd level: Learning (medium-term tactics). A ‘policy network’ trained to select moves. General-purpose deep learning.

    3rd level: Planning (long-term strategy). Used a Go-specific trick here (Monte-Carlo Tree Search) to select strategy. Not general-purpose, so it’s the weak link in the architecture (but still very effective for Go playing).

    See the 3 levels? It confirms that, in general terms, all AGIs will operate with a hierarchical architecture that uses these 3 levels:

    1st level: Domain-knowledge unit (values)
    2nd level: Pattern-recognition unit (policies)
    3rd level: Planning/Concept-learning unit (signals)

    Neural networks (deep learning) can be said to have ‘solved’ levels 1 and 2 (at least for problems where there’s lots of training data).

    So we are really only awaiting a solution to the 3rd level (planning/concept-learning unit), for which no general solution has yet been found.”
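The “planning level” described above can be shown in miniature. Below is a bare-bones UCT-style Monte-Carlo Tree Search on a toy game (players alternately add 1 or 2 to a counter; whoever reaches 10 first wins), with purely random rollouts and no neural networks. It is an illustration of plain MCTS only, not of AlphaGo’s actual search, which also mixes in the policy and value networks discussed above.

```python
import math
import random

# Minimal UCT-style Monte-Carlo Tree Search (root-level statistics only) on a
# toy game: players alternately add 1 or 2; whoever reaches 10 first wins.

def moves(state):
    return [m for m in (1, 2) if state + m <= 10]

def rollout(state, rng):
    """Play uniformly random moves to the end. Return +1 if the player to
    move at `state` eventually wins, -1 otherwise."""
    turn = 1
    while state != 10:
        state += rng.choice(moves(state))
        turn = -turn
    return -turn  # the player who made the last move is the winner

def mcts(root, iters=2000, seed=0):
    rng = random.Random(seed)
    stats = {m: [0, 0.0] for m in moves(root)}  # move -> [visits, total value]
    for _ in range(iters):
        total = sum(n for n, _ in stats.values()) + 1
        def ucb(m):  # UCB1: exploit average value, explore rarely tried moves
            n, w = stats[m]
            return float("inf") if n == 0 else w / n + math.sqrt(2 * math.log(total) / n)
        m = max(stats, key=ucb)
        child = root + m
        # Value from the root player's perspective: a win now, or the
        # negation of the opponent's rollout result from the child state.
        value = 1.0 if child == 10 else -rollout(child, rng)
        stats[m][0] += 1
        stats[m][1] += value
    return max(stats, key=lambda m: stats[m][0])  # most-visited move

print(mcts(8))  # picks 2: reaching 10 immediately always wins
```

In AlphaGo the same loop is strengthened at both ends: the policy network narrows which moves get simulated, and the value network replaces or supplements the random rollouts.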

    Now here’s an entertaining piece of philosophical speculation I added:

    “I can see a direct match between the 3 levels of AlphaGo, 3 types of inference and 3 general properties of the universe! 😀

    Mathematics >>> Heuristic search
    Physics >>> Policy network
    Mind >>>> Value network

    Mathematics can be considered the ‘heuristic search’ mechanism of reality; it is the means through which reality explores all branches of ‘possible worlds’.

    Physics can be considered a ‘policy network’ which decides which possible worlds should become ‘real’ (i.e. have their pathways explored in depth).

    Finally, mind can be considered a ‘value network’; it is the ‘evaluation function’ of reality, which prunes the tree of possible worlds to the more limited set of ‘real worlds’: it ‘terminates’ the search function.

    Direct match to 3 optimal types of inference for each of the 3 functions!

    Search function >>> Categorization >>> Coherence values (complex numbers)

    Policy network >>> Pattern recognition >>> Bayesian probability values (0-1)

    Value network >>> Symbolic reasoning >>> Boolean algebra (T/F values)”

  3. Aleksandar Dimov says:

    I agree with you, but I don’t think that their way of approaching the field is ineffective; quite the opposite.
    To me it’s obvious that the DeepMind team is testing parts of a bigger system.
    This can be seen in the progress they made from the Atari games last year to the current AlphaGo. Those are examples of testing components. Anyway, the work and the progress in one year is very impressive, and important for attracting more capital. It is very likely that they will tackle some commercial problem in the next year or two.

  4. Sean O'Connor says:

    I’ve been thinking a bit more about the applications of Google’s DeepMind Go algorithm. I think it fits in very well with computational biochemistry for drug design as well as engineering areas where simulators are used such as analog electronics.
    I did an evolutionary-algorithm add-on for an electronics simulator that could modify component values to optimize a given circuit. It was impossible to get anyone in that community to take it seriously, though. A pity, because it actually worked well. With a little support it could have been taken further.
    With the deep learning algorithm I would say that you could create a system that could design circuits from scratch, rather than just adjusting component values in a given circuit. And almost certainly the designs would be better than a human could achieve.
    I would say the only prerequisite is that a simulator is available for the physical system you want to design.
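The evolutionary add-on idea in the comment above is easy to sketch. The loop below mutates component values and keeps whatever the “simulator” scores best; here the simulator is just the resistive voltage-divider formula, and the 3.3 V target, population size, and mutation range are invented stand-ins for a real SPICE-style setup.

```python
import random

# Toy evolutionary optimizer: tune two resistor values so a voltage divider
# driven from 9 V outputs ~3.3 V. simulate() stands in for a real simulator.

def simulate(r1, r2, vin=9.0):
    """Output voltage of a resistive divider (the stand-in 'simulator')."""
    return vin * r2 / (r1 + r2)

def fitness(genome, target=3.3):
    r1, r2 = genome
    return -abs(simulate(r1, r2) - target)  # higher (closer to 0) is better

def evolve(generations=300, pop_size=20, seed=0):
    rng = random.Random(seed)
    # Each genome is a pair of resistor values in ohms.
    pop = [(rng.uniform(100, 10_000), rng.uniform(100, 10_000))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]  # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            r1, r2 = rng.choice(parents)
            # Multiplicative mutation keeps component values positive.
            children.append((r1 * rng.uniform(0.8, 1.25),
                             r2 * rng.uniform(0.8, 1.25)))
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(simulate(*best), 2))  # converges close to the 3.3 V target
```

Swapping `simulate()` for a call into a real circuit simulator, and the genome for a richer circuit description, is exactly the step from tuning component values to searching over designs.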

