Is Google Secretly Creating an AGI? (Reasons Why I Doubt It)
From time to time someone suggests to me that Google "must be" developing a powerful Artificial General Intelligence in-house. I recently had the opportunity to visit Google and chat with some of their research staff, including Peter Norvig, their Director of Research. So I thought I'd share my perspective on Google+AGI based on the knowledge currently at my disposal.
First let me say that I definitely see where the Google+AGI speculation comes from. It's not just that they've hired a bunch of AI PhD's and have a lot of money and computers. It's that their business leaders have taken to waxing eloquent about the glorious future of artificial intelligence. For instance, on the blog
http://memepunks.blogspot.com/2006/05/google-ai-twinkle-in-larry-pages-eye.html
we find some quotes from Google co-founder Larry Page:
"People always make the assumption that we're done with search. That's very far from the case. We're probably only 5 percent of the way there. We want to create the ultimate search engine that can understand anything ... some people could call that artificial intelligence.
...
a lot of our systems already use learning techniques
...
The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly ... You could ask 'what should I ask Larry?' and it would tell you."
Page, in the same talk quoted there, noted that technology has a tendency to change faster than expected, and that an AI could be a reality in just a few years.
Exciting rhetoric indeed!
Anyway, earlier this week I gave a talk at Google, to a group of in-house researchers and engineers, on the topic of artificial general intelligence. I was rather overtired and sick when I gave the talk, so it wasn't anywhere near one of my best talks on AGI and Novamente. Blecch. Parts of it were well delivered; but I didn't pace myself as well as usual, so I wound up rushing past some of the interesting points and not giving my usual stirring conclusion.... But some of the younger staff were pretty interested anyway; and there were some fun follow-up conversations.
Peter Norvig, an all-around great researcher and writer and a great guy, gave the intro to my talk. I had chatted with Peter a bit earlier, and had mentioned to him that some folks I knew in the AGI community suspected Google of having a top-secret AGI project.
So anyway, Peter gave the following intro to my talk [I am paraphrasing here, not quoting exactly ... but I've tried to stay true to what he said, as accurately as possible given the constraints of my all-too-human memory]:
"There has been some talk about whether Google has a top-secret project aimed at building a thinking machine. Well, I'll tell you what happened. Larry Page came to me and said 'Peter, I've been hearing a lot about this Strong AI stuff. Shouldn't we be doing something in that direction?' So I said, okay. I went back to my desk and logged into our project management software. I had to write some scripts to modify it because it didn't go far enough into the future. But I modified it so that I could put, 'Human-level intelligence' on the row of the planning spreadsheet corresponding to the year 2030. And, that wasn't up there an hour before someone else added another item to the spreadsheet, time-stamped 90 days after that: 'Human-level intelligence: Macintosh port' "
Well ... soooo ... apparently Norvig, at least in a semi-serious tongue-in-cheek moment, thinks we're about 23 years from being able to create a thinking machine....
He may be right of course -- or he may even be over-optimistic, who knows -- but a cynical side of me can't help thinking: "Hey, Ben! Peter Norvig is even older than you are! Maybe placing the end goal 23 years off is just a way of saying 'Somebody else's problem!'."
Norvig says he views himself as building useful tools that will accelerate the work of future AGI researchers, along with everyone else....
Of course, I do appreciate Google's useful tools! Google's various tools have been quite a relief as compared to the incompetently-architected, user-unfriendly software released by some other major software firms.
And while, from a societal perspective, I wish Google would put their $$ and hardware behind AGI, from the perspective of my small AGI business Novamente LLC, their current attitude is surely preferable...
[I could discourse a while about Google's ethics slogan "Don't Be Evil" as a philosophy of Friendly AI ... but I'll resist the urge...]
When I shared the above story with one of my AGI researcher friends (who shall here remain anonymous), he agreed with my sentiments, and shared the following story with me:
"In [month deleted] I had an interview in Google's new [location deleted] office
... and they were much more interested in my programming skill than in my research. Of course, we didn't find a match.
Even if Google wants to do AGI, given their current technical culture,
they won't get it right, at least at the beginning. As far as AGI is
concerned, Google has more than enough money and engineers, but less
than enough thinkers. They will produce some cute toolbox with smart
algorithms supported by a huge amount of raw data, which will be
interesting, but far from AGI."
Summing up ... as the above anecdotes suggest, my overall impression was that Google is not making any serious effort at AGI. If they are, then either
- they have trained dozens of their scientific staff to be really good actors, or
- it is a super-top-secret effort within Google Irkutsk or wherever, that the Google Mountain View research staff don't know about
Of course, neither of these is an impossibility -- "we don't know what we don't know," etc. But honestly, I rate both of those options as pretty unlikely.
Could they launch an AGI effort? Most surely: they could, at any point. The cost to them of doing so would be trivially small, relative to the overall resources at their disposal. Maybe this blog post will egg them into doing so! (yeah, right...)
But I think the point my above-quoted friend made, after his Google interview, was quite astute. Google's technical culture is coding-focused, and their approach to AI is data-focused (textual data, and data regarding clicks on ads, and geospatial data coming into Google Earth, etc.). To get hired at Google you have to be a great coder -- just being a great AGI theorist wouldn't be enough, for example. I don't think AGI is mainly a coding problem, nor mainly a data-based problem ... nor do I think it's a problem that can effectively be solved via a "great coding + lots of data" mentality. I think AGI is a deep conceptual problem that has more to do with understanding cognition than with churning out great code and effectively utilizing masses of data. Of course, lots of great software engineering will be required to create an AGI (and we're happy to have a few super-engineers within Novamente LLC, for example), and lots of data too (e.g. in the Novamente case we plan to start our systems out with perceptual and social data from virtual worlds like Second Life; and then later on feed them knowledge from Wikipedia and other textual sources). But if the focus of an "AGI" team is on coding and data, rather than on grokking the essence of cognition, AGI is not going to be the result.
So, IMO, for Google to create an AGI would require them not only to bypass the relative AGI skepticism represented by the Peter Norvig story above -- but also to operate an AGI project based on a significantly different culture than the one that has worked for Google so far, in their development of (in some cases, really outstandingly useful) narrow-AI applications.
All in all, my impression after getting to know Google's in-house research program a little better is about the same as it was beforehand. However, I did make an explicit effort to look for evidence disconfirming my prior hypotheses -- and I didn't really find any. If anyone has evidence that the impressions I've given here are mistaken, I'd certainly be happy to hear it.
OK, well, it's time to wind up this blog post and get back to my own effort to create AGI -- with far less money and computers than Google, but -- at least -- a focus on (and, I believe, a clear understanding of) the essence of the problem....
Sure, it would be nice to have the resources of a Google or M$ or IBM backing up Novamente! But, the thing is, you don't become a big company like those by focusing on grokking the essence of cognition -- you become a big company like those by focusing on practical stuff that makes money quickly, like code and data and user interfaces ... and if AI plays a role in this, it's problem-specific narrow-AI, such as Google has done so well with.
As Larry Page recognizes, AGI will certainly have massive business value, due to its incredible potential for delivering useful services to people in a huge number of contexts. But the culture and mentality needed to create AGI seems to be different from the one needed to rapidly create a large and massively profitable company. My prediction is that if Google ever does get an AGI, they will buy it rather than build it.
12 Comments:
According to Google's internal papers, one of their top goals is to have the world's top AI laboratory:
http://blog.outer-court.com/archive/2006-10-26-n80.html
A response to Phillipp's comment:
Yes, that does seem to be one of their goals ... but as seems clear from Norvig's comments, what this means in practice is narrow AI not AGI ...
I read a blog entry recently (which I can't seem to find again) that suggested part of the reason for Google's incredible secrecy (the first rule at Google is that you're not allowed to talk about working for Google) and overt hipness (free health food, toys & pets at work, etc.) is to distract attention from the fact that Google is actually an incredibly mundane and conservative employer.
There is no such secrecy about working at Google. I know people who do, and they don't hide it. I even did an internship there while working on my PhD, and it is on my CV.
What is very true is that Google from within is not much different from a coding sweatshop, once the free food, perks and extreme arrogance are removed from the equation.
meh, google schmoogle, strong AI is still not going to happen
"I was rather overtired and sick when I gave the talk, so it wasn't anywhere near one of my best talks on AGI and Novamente. Blecch. Parts of it were well delivered; but I didn't pace myself as well as usual, so I wound up rushing past some of the interesting points and not giving my usual stirring conclusion...."
Have you blogged about this stirring conclusion? Is there somewhere I can read about it?
"They will produce some cute toolbox with smart algorithms supported by a huge amount of raw data, which will be interesting, but far from AGI."
Isn't that what they have already? (Google Search)...
Cheer up, maybe they'll buy you out
After reading Norvig and watching too many engEDU videos, my personal observation has been of a very narrow AI focus.
I think they'll stay that way until AGI matures with something concrete. I'm sure Peter is keeping his feelers out like I am. :) Did you get invited or did you invite yourself?
Anyway, I don't see them taking risks. My observation has been one of buy-outs of proven emergent technology, often just for the people, as they try to grow with demand.
Interesting intro Peter gave, though. It doesn't rule out that they're working on it, just that they aren't targeting human-level intelligence just yet. :)
I don't think so either. My reasons are based on Google Translate.
"El barco attravesta una cerradura" -- roughly, "the boat crossed a (door) lock."
One would have thought that, if AGI were being specifically worked on, the translations produced would show some evidence of context. After all, if I am to talk about canals I need the concept esclusa (a canal lock), not cerradura (a door lock). I would have thought this was patently obvious.
Google is, however, working in a large number of fields. As far as I can see they are separating NLP from AI. Big mistake! They are, however, working on the use of NL to answer queries.
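To make the point concrete, here is a minimal, purely illustrative sketch of context-sensitive sense selection for the word "lock". The cue-word lists and scoring are invented for this example and are not meant to describe anything Google Translate actually does:

# Toy illustration: pick a Spanish translation for the ambiguous English
# word "lock" by checking which sense's cue words overlap the sentence.
# The sense inventory and cue words are invented for this example.

SENSES = {
    "esclusa": {"canal", "boat", "river", "water", "gate", "barge"},   # canal lock
    "cerradura": {"door", "key", "house", "bolt", "burglar"},          # door lock
}

def translate_lock(sentence):
    """Return the sense whose cue words overlap the sentence the most."""
    words = set(sentence.lower().split())
    scores = {sense: len(cues & words) for sense, cues in SENSES.items()}
    return max(scores, key=scores.get)

print(translate_lock("The boat went through a lock on the canal"))  # esclusa
print(translate_lock("The key would not turn in the lock"))         # cerradura

A context-blind lookup would just emit whichever sense happened to be listed first, which is essentially the failure the mistranslation above exhibits; real MT systems are vastly more sophisticated, but the need for contextual disambiguation is the same.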
Ben's talk of May 30, 2007 (minus the introduction) is now available at
http://video.google.com/videoplay?docid=4740557046246483319
I heard the podcast interview on G'Day World. One of the points you brought up was that every time somebody comes up with a new AI package, the rest of the world says, OK, that's not AI, that's just new software. Part of my sequence of Turing test extensions was face recognition. Our brains are uniquely wired to recognize faces. (Two small dots and an arc underneath - how could that be a face? A peculiar Martian feature - gotta be a face.)
Sure enough, computers now do face recognition with remarkable accuracy. They just don't do it the way we do. (Assuming, of course, that we know how we do it.)
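For a sense of how differently the machines do it, here is a minimal sketch of the classic eigenfaces approach: project flattened images onto a gallery's top principal components and match by nearest neighbor. It is purely illustrative -- random arrays stand in for real face images, and the sizes and names are made up.

import numpy as np

# Eigenfaces-style matching: statistical template comparison in a
# low-dimensional subspace, nothing like human face perception.
rng = np.random.default_rng(0)
gallery = rng.random((50, 32 * 32))                 # 50 flattened 32x32 "faces"
labels = ["person_%d" % (i % 10) for i in range(50)]

mean_face = gallery.mean(axis=0)
centered = gallery - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:20]                                # 20-dimensional "face space"
gallery_proj = centered @ components.T

def identify(probe):
    """Return the label of the gallery face nearest the probe in face space."""
    probe_proj = (probe - mean_face) @ components.T
    distances = np.linalg.norm(gallery_proj - probe_proj, axis=1)
    return labels[int(distances.argmin())]

print(identify(gallery[7] + 0.01 * rng.random(32 * 32)))  # most likely "person_7"

The whole pipeline is linear algebra over pixel statistics; whatever our brains are doing when they see two dots and an arc, it presumably isn't that.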
Can we separate the concept of intelligence from that of consciousness? Does either imply the other? Animals are surely intelligent, most certainly conscious (at least by some standard). But they're non-verbal. So perhaps we can have consciousness and intelligence without words. But we'd have to have some way of abstracting. When Cat sees Rat, what goes on inside each? Does instinct replace intelligence in that domain? That probably depends on what we mean by "instinct".
Professionals in sports (including martial arts) operate at an instinctual (or at least, subconscious) level. If you have to think about your next move, you're too late. Even day-to-day actions, like driving or riding a bicycle, happen at a subconscious level.
Which may lead to a definition of "conscious" that includes "I'm thinking about it" (cf. Descartes). All that's left is to pin down the definition of "thinking".
When a mathematician solves a particularly knotty problem, he feels some sort of pleasure in the achievement. I doubt Mathematica has any kind of analog to that. Does the mathematician's pleasure, then, have anything to do with the intelligence that solved the problem? Or does it inhabit some other domain?
Yes indeedy, "Sure enough, computers now do face recognition with remarkable accuracy." Except they actually don't:
http://www.notbored.org/face-misrecognition.html
Like all the rest of the hype about the self-delusion which manifests itself as the field of so-called "artificial intelligence," this claim about computer facial recognition is pure bullshit.
Let's debunk the lies that will be told in a frantically failed effort to support this latest insupportable failure of the myth misnamed AI, before they're even told:
[1] "This article is 5 years old." Great. So where are all the amazing facial recognition apps? Why does your ATM still need a password? Why doesn't it just recognize your face? This is not just a lie, it's a stupid lie.
[2[ "This is one isolated example." Wonderful. Where are all the successful examples? Show us the FBI's Ten Most Wanted who were recently apprehended because of facial recognition by computer. Whoops! Can't do it. This is not just a foolish lie, it's an obviously foolish lie.
[3] "It's early days yet..." I thought you said that "sure enough, computers now do face recognition with remarkable accuracy." Were you lying then, or are you lying now?
[4] "You're too ignorant to comment because you don't have a PhD in artificial intelligence, so you have no right to discuss AI." I also don't have a PhD in dowsing. That doesn't mean I can't comment on whether there's a total lack of evidence that dowsing works. Show me the evidence for AI. Prove it. Put up or shut up. Show me a program that can read a novel and then tersely and sensibly summarize it. Show me a computer app that can find what I want on google when I tell it what in general I'm looking for. Show me a computer program I can tell "Mary saw the puppy in the window and wanted it," and then ask, "Which did Mary want -- the puppy or the window?" and get an answer that makes sense.
You can't. There is no such app. Google search is still so desperately stupid that when I searched for a spaghetti western called Ein Einkommen Kurtz Zuruck, the contextual Google ads changed to offers to help get me an EIN -- Employer Identification Number. Brilliant. Yeah, great AI there, google.
There is no evidence that AI is anything but a reality distortion field in the minds of dupes and gullible chumps. AI (so-called) has been a colossal failure from day one, and I'm not the only one who realizes it:
http://www.skeptic.com/the_magazine/featured_articles/v12n02_AI_gone_awry.html
And now let's debunk the futile rebuttals to this article before they're made, since they're just as tiresome as the lies told in a failed effort to prop up the rotting corpse called AI and portray it as live and kicking:
(A) "This article doesn't debunk AI!" Yes it does. Moreover, it does it in the leading AI researchers' own words. Learn to read. When Marvin Minsky says "The field of AI is brain dead," you can't spin that. Stop insulting our intelligence.
(B) "The guy who wrote this article is just a cynic." Who cares? He cites a great deal of evidence that AI _just_ *doesn't work*. Show us the evidence that it does, or shut up.
(C) "This guy is just nitpicking." That's a lie. He dissects AI and shows that each of its sub-fields has failed spectacularly to live up to even the more rudimentary of its promises.
"Artificial Intelligence" is so far from the ground-reality of computation that it ought to be dismissed like the term "phlogiston." -- Bruce Sterling, September 2007