FAQ about AGI, Singularity, Etc.

Q: When do you think AGI will be created at the current rate?

A: The “current rate” is hard to define, given the reality of exponential acceleration and the difficulty of estimating the exponent for the acceleration of AGI development. My best guess is that, absent a massive and well-directed funding infusion, we’ll get it sometime between 2020 and 2035. (By “AGI” I mean human-level AGI, though not necessarily a precise emulation of human intelligence.)

Q: Do you think this could be sped up / When is the soonest we can expect?

A: I suspect it could be sped up, and success achieved within 3-5 years from now, given a massive funding infusion orchestrated very intelligently.

Q: How much money would you need to create AGI as soon as possible? When could you do it by?

A: With a few million USD, I believe that within 3-5 years I could lead an OpenCog team to create a working prototype of an early-stage AGI robot toddler, one sufficiently impressive to make garnering dramatic additional funding relatively unproblematic. But this money would need to be devoted purely to AGI, without other strings attached. On the other hand, with hundreds of millions of dollars one could create an AGI Manhattan Project of sorts, which could potentially get us all the way to adult-human-level AGI in the same time frame, IF it were run very effectively.

Q: If a wealthy corporation, a powerful government agency or an individual billionaire decided they wanted to create AGI, how long do you think it would take them? Do you think they could do this without people knowing and thus keep the benefits of AGI to themselves? Maybe even change the world to suit them better without us knowing?

A: I think it’s very unlikely that a serious human-level AGI project could effectively be kept secret all the way to completion. My feeling is that achieving AGI rapidly would require too many resources for it to be done in secrecy; and if it’s done more slowly, information is sure to leak out along the way.

Q: Aren’t you worried about AGI destroying the world, killing off humanity, etc.?

A: I think those bad outcomes are possible. I don’t want them to happen. I see no reason to believe they’re extremely likely. There are also many other scary risks not directly related to AGI, such as nuclear warfare, bioweapons, nanotech, etc.; and AGI may play a role in mitigating them. We need to understand AGI a lot better to really grok the risks and opportunities AGI affords; and I don’t think we can understand AGI really well until we’ve built some pretty smart early-stage AGIs.

Q: Do you believe a Singularity will arrive? If so, when?

A: I believe a Singularity will probably arrive, though other outcomes, such as a massively destructive world war or a bioengineered plague, seem possible. Technological stagnation also seems possible, though I rate it highly unlikely. Ray Kurzweil’s 2045 estimate for the Singularity date seems plausible, though I think it could happen as soon as 2020 or as late as, say, 2100, depending on various scientific and social factors. Note that we might achieve human-level AGI, radical healthspan extension and other cool stuff well before a Singularity, especially if we choose to throttle the rate of AGI development for a while in order to increase the odds of a beneficial Singularity.

