Who coined the term “AGI”?

In the last few years I’ve been asked increasingly often if I invented the term “AGI” – the answer is “not quite!”

I am indeed the one responsible for spreading the term around the world… and I did sort of commission its creation! But I didn’t actually coin the phrase.

(Yeah, terminology is ultimately a pretty boring issue.  But in practice, it makes a lot more difference than it “should” ….  So it’s mildly interesting to me to look at how it originates and spreads.)

The fairly undramatic story is as follows. In 2002 or so, Cassio Pennachin and I were editing a book on approaches to powerful AI, with broad capabilities at the human level and beyond, and we were struggling for a title. The provisional title was “Real AI,” but I knew that was too controversial. So I emailed a bunch of friends asking for better suggestions. Shane Legg, an AI researcher who had worked for me previously, came up with “Artificial General Intelligence.” I didn’t love it tremendously, but I fairly soon concluded it was better than any of the alternative suggestions. So Cassio and I used it for the book title (the book “Artificial General Intelligence” was eventually published by Springer in 2005), and I began using the term more broadly.

A few years later, someone brought to my attention that a Maryland researcher named Mark Gubrud had used the term in a 1997 article on the future of technology and associated risks. (Coincidentally, Gubrud works not too far from my own location in Rockville, though we never met until he came to the Second AGI Conference in 2010. This is typical of DC: lots and lots of interesting stuff going on, but not so much communication within the science/engineering community as one finds in some other places.) If you know of some yet earlier use, let me know!

At the time, Ray Kurzweil used the term “strong AI” for (perhaps not exactly, but pretty closely) what I now refer to as AGI. That’s okay, but the term seemed to Cassio and me to have two problems.

  • First of all, the antonym of “strong AI” is “weak AI”, and calling existing AI systems  “weak” seems needlessly insulting to the systems’ creators – as well as to the AI systems themselves, since e.g. Deep Blue is a rather strong chess player!  Kurzweil worked around this issue by using “strong AI” and “narrow AI” as antonyms, but this struck us as a suboptimal usage.  I believe Ray still likes the term “strong AI”, but now also refers to “AGI” sometimes too.
  • Secondly, “strong AI” has a specific meaning in the philosophy of AI – it names the position, coined (and argued against) by John Searle, that an appropriately programmed computer would literally have a mind, rather than merely simulating one.

The strong points of the term “AGI” are the obvious connection with the recognized term “AI”, and the connection with the “g factor” of general intelligence, well known in the psychology field (it’s what IQ tests are supposed to measure).  There are also some weak points though.

“Artificial” isn’t really appropriate, since AGI isn’t just about building systems to be our tools or “artifices.” Though I suppose it is about using artifice to build general intelligences!

“General” is problematic, because no real-world intelligence will ever be totally general.  Every real-world system will have some limitations, and be better at solving some kinds of problems than others.  (But still, of course, some systems can have more generality to their intelligence than others — and in one of my papers presented at AGI-10, I gave a formal definition of “generality of intelligence” that distinguishes it from “degree of intelligence”.)

“Intelligence,” finally, is a rather poorly defined term, and except in highly abstract mathematical contexts fairly divorced from real-world systems, nobody has a totally clear idea what it means. Shane Legg and Marcus Hutter wrote a fun paper that collects 70+ different definitions of intelligence from various branches of the science and engineering literature.

But in spite of these shortcomings, I like the term “AGI” well enough.   It seems to be catching on reasonably well, both in the scientific community and in the futurist media.   Long live AGI!