Google DeepMind’s new Video Game AI

People are asking me today about the importance of Google DeepMind’s new video-game AI demo — see e.g. the Guardian article glowingly titled

Google develops computer program capable of learning tasks independently: ‘Agent’ hailed as first step towards true AI as it gets adept at playing 49 retro computer games and comes up with its own winning strategies

First off: Yeah, it’s certainly cool!!!

This is progress beyond DeepMind’s earlier Atari 2600 game demo, and I don’t want to pooh-pooh the amount of work and brilliance that goes into making something like this ACTUALLY WORK!! But this doesn’t especially feel to me like some sort of breakthrough, just good solid work in deep reinforcement learning… In particular, it doesn’t convince me that deep RL as DeepMind has habitually practiced it is an adequate approach to AGI….

The key issue is that these video games are very constrained domains, so that a deep reinforcement learning system can handle them without being able to interpret data in a broader context. This kind of constrained domain can obviously be handled by methods that wouldn’t work for broader, messier, more contextuality-rich real-world domains….

So to me “learning independently in a very constrained world” is not entirely the same problem as “learning independently in a big rich messy world”: the former can be done by a system that ignores a lot of stuff any real-world AI has to pay a lot of attention to…. The universe of simple video games they’ve dealt with in these fascinating experiments is, in the end, still a very small and special domain, one for which a software system can be heavily tuned in various ways (i.e. even if the tuning is independent of which specific game is being learned, it can still be particular to the domain of “games of this type”).
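To make that concrete, here is a minimal sketch (in Python; the ToyGame class, the linear q_values stand-in, and the hyperparameter values are placeholders of my own, not DeepMind’s actual code) of what “one learning system across 49 games” amounts to in spirit: a single fixed loop with a single fixed set of hyperparameters, where only the game is swapped in, and the agent sees nothing but pixels and a scalar score.

```python
# Hypothetical sketch, NOT DeepMind's system: the same learning loop and the
# same hyperparameters are reused for every game; only the game changes.
import numpy as np

class ToyGame:
    """Stand-in for an Atari-style environment: pixels in, scalar reward out."""
    def __init__(self, name, n_actions=4, frame_shape=(84, 84)):
        self.name, self.n_actions, self.frame_shape = name, n_actions, frame_shape
    def reset(self):
        return np.random.rand(*self.frame_shape)        # fake first frame
    def step(self, action):
        frame = np.random.rand(*self.frame_shape)       # fake next frame
        reward = np.random.choice([0.0, 1.0])           # fake score change
        done = np.random.rand() < 0.01                  # fake game-over signal
        return frame, reward, done

# One fixed set of hyperparameters for the whole domain of "games of this type".
EPSILON, ALPHA, GAMMA, EPISODES = 0.1, 1e-3, 0.99, 10

def q_values(weights, frame):
    # Linear stand-in for the deep Q-network: one value per possible action.
    return weights @ frame.ravel()

def train(game):
    weights = np.zeros((game.n_actions, int(np.prod(game.frame_shape))))
    for _ in range(EPISODES):
        frame, done = game.reset(), False
        while not done:
            # Epsilon-greedy action selection from pixels alone.
            if np.random.rand() < EPSILON:
                action = np.random.randint(game.n_actions)
            else:
                action = int(np.argmax(q_values(weights, frame)))
            next_frame, reward, done = game.step(action)
            # One-step Q-learning update toward reward + discounted max future value.
            target = reward + (0.0 if done else GAMMA * np.max(q_values(weights, next_frame)))
            td_error = target - q_values(weights, frame)[action]
            weights[action] += ALPHA * td_error * frame.ravel()
            frame = next_frame
    return weights

# The same routine runs, unchanged, across many games -- "game-independent",
# yet still confined to this one narrow kind of domain.
for name in ["Breakout", "Pong", "SpaceInvaders"]:
    train(ToyGame(name))
```

The real system replaces the linear stand-in with a convolutional deep Q-network and adds machinery like experience replay, but the shape of the setup is the same: independent of which game is loaded, yes, but still specialized to the narrow world of pixels-plus-score arcade games.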

Personally I think DeepMind would need to add a lot of extra stuff to their approach as I understand it (at least, a lot of stuff beyond anything they’ve discussed publicly, and beyond anything hinted at by this demo) in order to tackle AGI seriously. I think the kinds of deep learning and deep reinforcement learning they’ve focused on are only a moderate-sized fraction of the story of what’s needed for human-level cognition. To me, this demo doesn’t show that they have embraced the complexity and multi-aspect nature of what’s needed for human-level AGI. (Maybe they HAVE, behind the scenes, but this video doesn’t show it….)

Another way to put my point is: to really hit a home run, their system will need to be able to learn to deal with new DOMAINS, not just new games within a domain for which it has already been tuned and tweaked and human-configured….

One thing this demo does show is that, since their acquisition by Google, DeepMind is pushing ahead with AGI-oriented R&D according to their own tastes and directions, rather than simply applying their brainpower to Google’s search and ads products and so forth. That’s very nice to see 😉