FAQ about AGI, Singularity, Etc.

Q: When do you think AGI will be created at the current rate?

A: The “current rate” is hard to define, given the reality of exponential acceleration and the difficulty of estimating the exponent governing the acceleration of AGI development. My best guess is that, if there’s no massive and well-done funding infusion, we’ll get it sometime between 2020 and 2035. (By “AGI” I mean human-level AGI, though not necessarily a precise emulation of human intelligence.)
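
To make the sensitivity to that exponent concrete, here is a minimal back-of-the-envelope sketch in Python; every quantity in it (the capability gap, the doubling times) is a hypothetical placeholder for illustration, not an estimate drawn from this FAQ.

    import math

    # Toy sensitivity check (all numbers hypothetical): if some capability
    # metric doubles every T years, how long until it grows by a given factor?
    def years_to_close_gap(gap_factor: float, doubling_time_years: float) -> float:
        """Years for a quantity doubling every doubling_time_years to grow by gap_factor."""
        return doubling_time_years * math.log2(gap_factor)

    GAP = 1000.0  # hypothetical remaining capability gap, as a multiplicative factor
    for t_double in (1.0, 2.0, 3.0):  # hypothetical doubling times, in years
        yrs = years_to_close_gap(GAP, t_double)
        print(f"doubling time {t_double:.0f} yr -> gap closed in ~{yrs:.0f} years")
        # prints roughly 10, 20 and 30 years respectively

Under these made-up numbers, tripling the assumed doubling time triples the projected timeline; modest disagreement about the exponent translates into decades of disagreement about the date.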

Q: Do you think this could be sped up? When is the soonest we can expect it?

A: I suspect it could be sped up, with success achieved within 3-5 years given a massive funding infusion orchestrated very intelligently.

Q: How much money would you need to create AGI as soon as possible? When could you do it by?

A: With a few million USD, I believe that within 3-5 years I could lead an OpenCog team to create a working prototype of an early-stage AGI robot toddler, one sufficiently impressive to make garnering dramatic additional funding relatively unproblematic. But this money would need to be devoted purely to AGI, without other strings attached. On the other hand, with hundreds of millions of dollars one could create an AGI Manhattan Project of sorts, which could potentially get us all the way to adult-human-level AGI in the same time frame, IF it were run very effectively.

Q: If a wealthy corporation, a powerful government agency or an individual billionaire decided they wanted to create AGI, how long do you think it would take them? Do you think they could do this without people knowing and thus keep the benefits of AGI to themselves? Maybe even change the world to suit them better without us knowing?

A: I think it’s very unlikely that a serious human-level AGI project could effectively be kept secret all the way to completion. My feeling is that achieving AGI rapidly would require too many resources to be done in secrecy; and if it’s done more slowly, information is sure to leak out along the way.

Q: Aren’t you worried about AGI destroying the world, killing off humanity, etc.?

A: I think those bad outcomes are possible. I don’t want them to happen. I see no reason to believe they’re extremely likely. There are also many other scary risks not directly related to AGI, such as nuclear warfare, bioweapons and nanotech; and AGI may play a role in mitigating them. We need to understand AGI a lot better to really grok the risks and opportunities it affords; and I don’t think we can understand AGI really well until we’ve built some pretty smart early-stage AGIs.

Q: Do you believe a Singularity will arrive? If so, when?

A: I believe a Singularity will probably arrive, though other outcomes, like a massively destructive world war or a bioengineered plague, seem possible. Technological stagnation also seems possible, though I rate it highly unlikely. Ray Kurzweil’s 2045 estimate of the Singularity date seems plausible, though I think it could happen as soon as 2020 or as late as, say, 2100, depending on various scientific and social factors. Note that we might achieve human-level AGI, radical healthspan extension and other cool stuff well before a Singularity, especially if we choose to throttle the rate of AGI development for a while in order to increase the odds of a beneficial Singularity.