
OpenAI — quick thoughts

People keep asking me for comments about OpenAI.   Rather than pasting the same stuff into dozens of emails, I’ll just put my reply here…

(New links regarding OpenAI are appearing online frequently, so I won’t try to link to the most interesting ones at the particular moment I’m writing this.  Use Google and investigate yourself if you wish 😉 )

Generally speaking, OpenAI is obviously a super-impressive initiative.   I mean, a BILLION freakin’ dollars for open-source AI, wow!!

So now we have an organization with a pile of money available and a mandate to support open-source AI, and a medium-term goal of AGI … and they seem fairly open-minded and flexible/adaptive about how to pursue their mandate, from what I can tell…

It seems their initial focus is on “typical 2015-style deep learning,” and that their board of advisors is initially strongly biased toward this particular flavor of AI.   So at first they are largely thinking about “big data / deep NN” type AI …   This should have some useful short-term consequences, such as the likely emergence of open-source computer vision tools that are truly competitive with commercial systems.

However, it is worth noting that they are planning on spending their billion $$ over a period of 10 yrs or more.

So: right now the OpenAI leadership is pumped about deep learning NNs, in part because of recent successes with such algorithms by big companies.   But their perspective on AI is obviously broader than that.   If some other project, say OpenCog, shows some exciting successes, they will surely notice, and I would guess they will be open to turning their staff in the direction of those successes, and potentially to funding external OSS teams that look exciting enough…

So, overall, from a general point of view, OpenAI is obviously a Very Good Thing.

Open source and AI Safety

Also, I do find it heartening that the tech-industry gurus behind OpenAI have come to the realization that open-sourcing advanced AI is the best approach to maximizing practical “AI Safety.”    I haven’t always agreed with Elon Musk’s pronouncements on AI safety in the past, but I can respect that he has been seriously thinking through the issues, and this time I think he has come to the right conclusion…

I note that Joel Pitt and I wrote an article a few years ago, articulating the argument for open-source as the best practical path to AI safety.   Also, I recently wrote an essay pointing out the weaknesses in Nick Bostrom’s arguments for a secretive, closed, heavily-regulated approach to AGI development.   It seems the OpenAI founders basically agree and are putting their money where their mouth is.

OpenAI and OpenCog and other small OSS AI initiatives

Now, what about OpenAI and OpenCog, the open-source AGI project I co-founded in 2008 and have been helping nurse along ever since?

Well, these are very different animals.   First, OpenCog is aimed specifically and squarely at artificial general intelligence, so its mandate is narrower than that of OpenAI.   Secondly, and most critically, as well as aiming to offer a platform to assist broadly with AGI development, OpenCog is centered on a specific cognitive architecture (which has been called CogPrime), created based on decades of thinking and prototyping regarding advanced AGI.

That is, OpenCog is focused on a particular design for a thinking machine, whereas OpenAI is something broader — an initiative aimed at doing all sorts of awesome AI R&D in the open source.

From a purely OpenCog-centric point of view, the value of OpenAI would appear to be mainly this: something with significant potential to smooth later phases of OpenCog development.

Right now OpenCog is, in my biased opinion, very, very promising but still early-stage: it’s not very easy to use and, while there are some interesting back-end AI functionalities, we don’t have any great demos.   But let’s suppose we get beyond this point, as we’re pushing hard to do during the next year, and turn OpenCog into a system that’s a pleasure to work with and does piles of transparently cool stuff.   If we get OpenCog to that stage, THEN it seems OpenAI would be a very plausible source of resources of multiple sorts for developing, applying and scaling up OpenCog…

And of course, what holds for OpenCog also would hold for other early-stage non-commercial AI projects.   OpenAI, with a financial war-chest that is huge from an R&D perspective (though not so huge compared to say, a military budget or the cost of building a computer chip factory), holds out a potential path for any academic or OSS AI project to transition from the stage of “exciting demonstrated results” to the stage of “slick, scalable and big-time.”

Just as commercial AI startups can currently get acquired by Google or Facebook or IBM etc., in the future non-commercial AI projects may get boosted by involvement from OpenAI or other similar big-time OSS AI organizations.   The beauty of this avenue is, of course, that OpenAI jumping on board some OSS project won’t destroy the ability of the project founders to continue to work on the project and communicate their work freely, unlike the acquisition of a startup by a megacorporation.

Looking back 20 years from now, the greatest value of the Linux OS may be seen to be its value as an EXEMPLAR for open-source development — showing the world that OSS can get real stuff done, and thus opening the door for AI and other advanced software, hardware and wetware technologies to develop in an OSS manner.

Anyway, those are some of my first thoughts on OpenAI; I’ll be curious to see how things develop, and may write something more once more stuff happens … interesting times, yadda yadda!! …

Comments

  1. Tim Tyler says:

    The JET article was quite an interesting take-down. I felt as though I was reading a lot of my own thoughts. Maybe you are taking those folk too seriously, though.

    Some of the OpenAI folk seem pretty pro-regulation too. For example, see the article “Machine intelligence, part 2 – Sam Altman”. Regulation, regulation, regulation.

  2. Peter Morgan says:

    Very nice article Ben, thank you.

  3. Steve Richfield says:

    From your POV my biggest concern would be that prospective investors would think you SO outclassed by a billion-dollar competitor that they would choose to put their money into OTHER things, creating a sort of AI Winter in the wake of OpenAI.

    They wouldn’t invest in OpenAI because there has been SO much money already committed that they couldn’t purchase much equity for a finite amount of money.

    Then, if OpenAI fails to produce a really good demo in the next couple of years that goes way beyond simple recognition, there could be an AI Winter, the likes of which followed the Perceptron.

    Of course I am doing OTHER things – MUCH more narrow, but an AI Winter could freeze out everyone – even including poor little Dr. Eliza.

    Steve


  1. NPO OpenAI: a big move aiming at open AGI development, like WBAI | Whole Brain Architecture Initiative says:

    […] Developing AGI in the open so that it becomes a shared asset of humanity aligns with the founding purpose of our Whole Brain Architecture Initiative, whose slogan is “Let’s build a brain together,” and we see it as a fundamentally welcome move. Incidentally, Ben Goertzel, president of the AGI Society, has also promptly pointed out once again the importance of open development on his blog, “OpenAI – quick thought.” […]
