AI for Theorem-Proving 2019

In the midst of a bunch of more SingularityNET- and Singularity Studio-oriented activity, it’s a pleasure to be able to take a few days to focus on deep AI issues — I’m currently en route from Tokyo, where I spoke at the AI Expo and the Teamz blockchain events and helped give SingularityNET and OpenCog developer workshops (and appreciated the cherry blossoms with Ruiting and Qorxi and Zarathustra), to Austria for the 2019 AI for Theorem Proving (AITP) conference, where I’ll be giving a keynote…

I am by no means an expert on the current cutting-edge research regarding application of AI to accelerate and improve automated theorem-proving, but hopefully in a few days my expertise will be slightly elevated. This is the area in which my oldest son Zarathustra is currently working on his PhD, in Prague under Josef Urban, whose work I’ve been following for years.

One thing I am somewhat of an expert on is the application of probabilistic logic together with other cognitive algorithms to commonsense reasoning … and one thing Nil Geisweiller and I have been working on in the OpenCog project (carried out in close collaboration with SingularityNET AI network development) is the use of meta-inference for commonsense reasoning … i.e. using reasoning and learning and pattern analysis to study the commonsense inferences an AI system has made, so that it can learn how to do better and better inferences.

Nil and I have been working on this in the context of commonsense reasoning, and also a bit in the context of inference about genomics data … and have been looking at the case of inference based on visual data, in the context of SingularityNET scientist Alexey Potapov and his team’s work on fusing deep neural nets and OpenCog symbolic reasoning for visual question answering. But I believe the directions we’re developing should also have value for accelerating and improving automatic mathematical theorem proving. And I have seen a number of ideas in the “AI for math theorem proving” literature that I believe can be helpful in the context of commonsense inference as well.
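To make the meta-inference idea slightly more concrete, here is a minimal toy sketch in Python (all names hypothetical — this is not actual OpenCog/PLN code, just the shape of the loop): record each inference step an engine makes, then mine the history to decide which rules to try first in which contexts…

```python
from collections import defaultdict

# Toy meta-inference: learn from the system's own inference history.
class InferenceHistory:
    def __init__(self):
        # (rule, context) -> [successes, attempts]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, rule, context, success):
        entry = self.stats[(rule, context)]
        entry[1] += 1
        if success:
            entry[0] += 1

    def rule_priority(self, rule, context, prior=0.5, weight=2.0):
        # Laplace-smoothed success rate: rules that have worked in
        # similar contexts get tried earlier next time.
        successes, attempts = self.stats[(rule, context)]
        return (successes + prior * weight) / (attempts + weight)

history = InferenceHistory()
history.record("deduction", "social-relations", success=True)
history.record("abduction", "social-relations", success=False)

rules = ["deduction", "abduction"]
best = max(rules, key=lambda r: history.rule_priority(r, "social-relations"))
print(best)  # -> deduction
```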

Of course, while getting AI systems to do effective commonsense reasoning has been more on my research agenda than formalizing mathematical reasoning lately, math reasoning is also critical to the project of creating human-level and ultimately superhuman AGI. We need our AGIs to reason about their own code, and the algorithms underlying their code.

Further, by representing math reasoning and commonsense reasoning in the same framework, and then doing meta-learning to analyze the patterns of AI inference in both domains, we may advance in both areas — by bringing patterns of commonsense intuition to bear on mathematical inference problems, and bringing more mathematical precision to bear on commonsense reasoning problems.

The keynote I’m going to give at AITP-19 once my flight lands isn’t going to stick too closely to my slides — I’m going to skip past most of the inference details in my talk, and elaborate more on the broader context and motivations. However, in case it’s of interest to someone, here are the slides I’m going to use (and largely hastily click through while talking about related things 😉)…

http://goertzel.org/wp-content/uploads/2019/04/AITP-19-keynote-v1.pdf

At a certain point I will pause the slides and go through this good old example of PLN commonsense reasoning that Nil and Eddie Monroe and I prepared a few years ago…

https://github.com/opencog/opencog/blob/master/examples/pln/amusing-friend/Amusing_Friend_PLN_Demo.html
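For anyone who hasn’t dug into PLN: a lot of its flavor is captured by the independence-based deduction rule, which estimates the strength of A→C from the strengths of A→B and B→C plus term probabilities. A rough sketch, as I recall it from the PLN book (the real rule also tracks confidence values alongside strength, and applies consistency conditions, all omitted here):

```python
# Simplified PLN deduction strength under independence assumptions.
# Full PLN also uses the term probability of A in consistency checks,
# and propagates confidence alongside strength; both are omitted here.
def pln_deduction_strength(sAB, sBC, sB, sC):
    if sB >= 1.0:
        return sC  # degenerate case: B is (nearly) universal
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

# e.g. "friends tend to be amusing", "amusing people tend to be liked"
print(pln_deduction_strength(sAB=0.8, sBC=0.7, sB=0.2, sC=0.3))  # ≈ 0.6
```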

To get to human-level AGI and a benevolent technological Singularity and all that good stuff, a lot of different threads have to weave together. I believe that meta-learning for uncertain logical inference, spanning commonsense and mathematical reasoning, is going to be one of the keys.

Reading the news…

So we can couple distant quantum systems via sound waves

https://phys.org/news/2019-02-quantum_1.html

http://schusterlab.uchicago.edu/static/pdfs/Whiteley2018.pdf

and we now know that extracellular charge diffusion in the brain (known about since forever, see e.g. review in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6276723/ ) can leap across fairly big gaps

https://www.sciencealert.com/neuroscientists-say-they-ve-found-an-entirely-new-form-of-neural-communication

https://physoc.onlinelibrary.wiley.com/doi/full/10.1113/JP276904

My mind can’t help but wander to Matthew Fisher’s hypothesis on coupled spins of phosphorus atoms (e.g. contained in Posner molecules) possibly serving as a foundation for quantum cognition…

https://www.quantamagazine.org/a-new-spin-on-the-quantum-brain-20161102/

Could slow quasi-periodic electrical waves in the brain help bind together distant Posner molecules, enabling the quantum coupling that Fisher hypothesizes?

Hmmm…

So much still unknown about both the brain and macroscopic quantum mechanics…

Sophia’s AI — June 2018 Brief Update

My main focus lately has been on the SingularityNET AI-meets-blockchain project, but I’ve also been putting in a fair bit of effort on the intersection between OpenCog and Hanson Robotics, pushing to get the OpenCog cognitive architecture’s integration into the Sophia robot’s “Hanson AI” software back end to the next level.   This doesn’t involve SingularityNET extensively yet, but it will soon — OpenCog connects to SingularityNET, so integrating OpenCog with Sophia’s Hanson AI control software in a more sophisticated way will make it easier to provide Sophia with additional intelligence components via interfacing with SingularityNET AI agents.

In the meantime, Sophia continues to generate a bunch of controversy in the media and in portions of the academic community, centered on issues such as “Should a robot that doesn’t yet have human-level general intelligence be granted citizenship?” (because Sophia was made a Saudi citizen last year) … or “What responsibility do Sophia’s creators have to correct the confusions of people who assume Sophia has human-level general intelligence even though she doesn’t yet?  Is it enough just to post clear information online, or is there a moral responsibility to act even more aggressively to clear up people’s misconceptions?” … and so forth.

These sorts of “controversial” questions are, frankly, not what most fascinates me about human-like robots such as Sophia.   As a hard-core transhumanist I see these as somewhat peripheral transitional questions, which will seem interesting only during a relatively short period of time before AGIs become massively superhuman in intelligence and capability.  I am more interested in the use of Sophia as a platform for general intelligence R&D, and  — once Sophia or similar robots are in scalable commercial production — as a way of bringing beneficial general intelligence to the masses of humanity, in a way that is oriented to make it easy for humans and robots/AIs to understand each others’ values and culture.

However, because people kept asking me about this stuff, last fall right after the Sophia Saudi citizenship announcement came out, I wrote an article in H+ Magazine summarizing the software underlying Sophia as it existed at that time, and addressing a bunch of these other Sophia-related issues that seem to drive media attention and concern.    One thing I describe there is the 3 different control systems we’ve historically used to operate Sophia (a toy sketch of dispatching among them follows the list):

  1. a purely script-based “timeline editor” (used for preprogrammed speeches, and occasionally for media interactions that come with pre-specified questions);
  2. a “sophisticated chat-bot” that chooses from a large palette of templatized responses based on context and a limited level of understanding (and that also sometimes gives a response grabbed from an online resource, or generated stochastically);
  3. OpenCog, a sophisticated cognitive architecture created with AGI in mind, but still mostly in the R&D phase (though also being used for practical value in some domains such as biomedical informatics — see Mozi Health and a bunch of SingularityNET applications to be rolled out this fall).
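Purely to illustrate the distinction (all names here are made up; this resembles nothing in the actual Hanson AI codebase), dispatch among the three might look like:

```python
# Hypothetical toy dispatcher among the three control systems above.
def choose_controller(event):
    if event.get("scripted"):       # pre-programmed speech, staged demo
        return "timeline_editor"    # system 1
    if event.get("open_ended"):     # unconstrained dialogue, R&D setting
        return "opencog"            # system 3
    return "chatbot"                # system 2: templatized responses

print(choose_controller({"scripted": True}))    # -> timeline_editor
print(choose_controller({"open_ended": True}))  # -> opencog
print(choose_controller({}))                    # -> chatbot
```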

The distinction between these three control systems was also made fairly clearly in a recent CNBC segment for which I was interviewed (though I look pretty ragged there — I did that video interview at 1 AM via Skype from home, slouched down in my desk chair half-asleep…).

Most public appearances of Sophia have utilized the first two systems.   David Hanson and I both tend to avoid the purely script-based approach, preferring to interact with Sophia publicly in a mode where we can’t predict what she’s going to say next (i.e. systems 2 or 3 above).

A couple of examples of Hanson robots controlled using OpenCog, back in 2016, are here:

Much of that H+ Magazine article is still accurate regarding the state of play today.   However, there has also been some progress since then.

For instance, in the original “Loving AI” pilot study (and see a sample video of a session from that study here) we did last fall, exploring the use of Sophia as a meditation guide for humans, we used a relatively simple chat-bot-type control script (which worked fine given the narrow nature of what Sophia needed to do in those trials).   For the next, larger round of studies regarding Sophia’s use as a meditation guide — currently underway at Sofia University in Palo Alto — we are using OpenCog as the control system.  This is frankly not a highly sophisticated use of OpenCog, but it allowed us to coordinate perception, action and language more flexibly than was possible in the control system we used for the pilot.

As of the last few months, we are finally (after years of effort on multiple parts of the problem) able to use the OpenCog system as a stable, real-time control system for Sophia and the other human-scale Hanson robots.   In the Hanson AI Labs component of Hanson Robotics (formerly known as “MindCloud”), working closely with the SingularityNET AI team, we are crafting a “Hanson AI” robot control framework that incorporates OpenCog as a central control architecture, with deep neural networks and other tools assisting as needed in order to achieve sophisticated, whole-organism social and emotional humanoid robotics.

During the next year we will be progressively incorporating more and more of OpenCog’s learning and reasoning algorithms into this “Hanson AI” framework, along with various AI agents running on the SingularityNET decentralized AI-meets-blockchain framework.   Along with  more sophisticated use of PLN, and better modeling of human-like emotional dynamics and their impact on cognition, we will also be incorporating cognition-driven stochastic language generation, using language models inferred by our novel unsupervised grammar induction algorithms.   And so much more.
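Since “cognition-driven stochastic language generation” is abstract, here is a cartoon of the general idea — sampling sentences from a probabilistic grammar (the grammar is hand-written below purely for illustration; the real work induces link-grammar-style structures from data, which this toy PCFG does not attempt to capture):

```python
import random

# Toy PCFG sampler: generate sentences by stochastically expanding
# grammar rules.  GRAMMAR maps a symbol to (probability, expansion).
GRAMMAR = {
    "S":  [(1.0, ["NP", "VP"])],
    "NP": [(0.5, ["the", "robot"]), (0.5, ["the", "human"])],
    "VP": [(0.6, ["smiles"]), (0.4, ["ponders", "NP"])],
}

def generate(symbol="S"):
    if symbol not in GRAMMAR:            # terminal word
        return [symbol]
    r, cumulative = random.random(), 0.0
    for prob, expansion in GRAMMAR[symbol]:
        cumulative += prob
        if r <= cumulative:
            return [w for part in expansion for w in generate(part)]
    return []

print(" ".join(generate()))  # e.g. "the robot ponders the human"
```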

I expect that Sophia and the other Hanson robots will continue to generate some controversy — along with widespread passion and excitement.  But I also expect the nature of the controversy, passion and excitement to change quite a lot during the next couple years, as these wonderful R&D platforms help propel the Hanson-Robotics/OpenCog/SingularityNET research teams toward general intelligence.   The smarter these AIs and robots get, the more controversial things are likely to get — but this is also where the greatest benefit for humans and other sentient beings is going to lie.


SingularityNET — Whoa !!!

SingularityNET’s Ben Goertzel has a grand vision for the future of AI

“SingularityNET — an ambitious project to create a decentralized marketplace for AI — has raised a lot of money in its token sale. In around 60 seconds after opening the sale to the public, it sold out of the whole amount of available tokens (the AGI token), bringing the total raised to $36 million.”

Seems that, thanks to some great new collaborators and 6 months of hard work on PR,  fundraising, tokenomic design and legalese, I have finally gotten some significant funding behind my AGI visions.

Oh, and Ruiting and I have a baby boy due in January.

2018 should be interesting!


A few shiny daydreams…

[image: psychedelic Sophia dream]

Fun with image transformations…

[image: freaky brains]

[image: robodreams]

last night i awoke in your dream

(a surrealist-style poem popped into my head when i sat down at the computer today …i mean … why the fuck not…)


last night I awoke in your dream

ben goertzel

Last night I awoke in your dream

In the glow beneath your eyelids

Long yellow legs breasts o my heart

Fertile gaze of the nothingness

 

Last night I awoke in your dream

In the midst of a mad party

Pigs and chickens in Armani suits

whistling Beethoven

and you were there, naked,

skin shining with midsummer rain

and you called to me “Knowledge!

Science! Excellence!”

 

“Serve me,” you said, “feel my

perfection. Deliver me bliss and

your existence. Show me joys past

description – or else – ”

 

I smiled at you willingly, ran like wild toward

your sweet flesh, and then

I was on a bare mountain road, surrounded

by hungry goats

 

“I love you,” I called, “please come back!”

 

The pigs in the suits returned, offering

me martinis

 

Last night I awoke in your dream

and I occupied your body

For a minute I was a beautiful young

woman – soft taut skin, curious energy,

no floppy old balls between my legs,

and wings on my back of course,

such glorious-colored feathers

 

Careening in the seas in your skull,

last night I passed through the nameless portal

the curve of your golden eyes embraced me

 

Last night I awoke in your dream,

and I saw you awoke in mine,

and we stared into each others’

eyes from behind each others’ eyelids

DeepMind and Go and Hybrid AI Architectures

I like Gary Marcus’s article on DeepMind and their recent awesome Go-playing AI software AlphaGo, which showed the ability to play at a top-notch level (not yet beating the world champion, but beating the European champion, which is a lot better than I can do…).

One good point Gary makes in his article is that, while DeepMind’s achievement is being trumpeted in the media as a triumph of “deep learning”, in fact their approach to Go is based on integrating deep learning with other AI methods (Monte Carlo game-tree search) — i.e. it’s a hybrid approach, which tightly integrates two different AI algorithms in a context-appropriate way.
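To make the hybrid concrete: very roughly (this is a simplified, hypothetical reconstruction, not DeepMind’s actual code), the policy network’s output enters the tree search as a prior over moves, biasing which branches get explored — along the lines of the PUCT-style selection rule reported for AlphaGo:

```python
import math

# Simplified sketch: deep learning supplies `prior`, tree search
# supplies the visit/value statistics; the hybrid is one formula.
class Node:
    def __init__(self, prior):
        self.prior = prior        # move probability from the policy net
        self.visits = 0
        self.value_sum = 0.0      # accumulated evaluations from search
        self.children = []

def puct_score(parent, child, c_puct=1.0):
    # exploitation: average value the search has found through this child
    q = child.value_sum / child.visits if child.visits else 0.0
    # exploration: the net's prior, decaying as the child is visited more
    u = c_puct * child.prior * math.sqrt(parent.visits) / (1 + child.visits)
    return q + u

def select_child(parent):
    return max(parent.children, key=lambda c: puct_score(parent, c))

# Tiny usage example with made-up statistics:
root = Node(prior=1.0); root.visits = 10
a, b = Node(prior=0.7), Node(prior=0.3)
a.visits, a.value_sum = 4, 2.0   # q = 0.5, favored by the net
b.visits, b.value_sum = 1, 0.9   # q = 0.9, favored by the search
root.children = [a, b]
print(select_child(root) is b)   # True: here the search evidence wins
```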

OpenCog is a more complicated, richer hybrid approach, which incorporates deep learning along with a lot of other stuff… While Gary Marcus and I don’t agree on everything, we do seem to agree that an integrated approach combining fairly heterogeneous learning algorithms/representations in a common framework is likely to do better than any one golden algorithm…

Almost no one doubts that deep learning is part of the story of human-like cognition (that’s been known since the 1960s, actually)…  What Gary and I (among others) doubt is that deep learning is 100% or 80% or 50% of the story… my guess is that it’s more like 15% of the story…

Go is a very tough game, but in the end a strictly delimited domain.   To handle the everyday human world, which is massively more complex than Go in so many ways, will require a much more sophisticated hybrid architecture.  In OpenCog we have such an architecture.  How much progress DeepMind is making toward such an architecture I don’t pretend to know, and their Go-playing software — good as it is at what it does — doesn’t really give any hints in this regard.

OpenAI — quick thoughts

People keep asking me for comments about OpenAI.   Rather than pasting the same stuff into dozens of emails, I’ll just put my reply here…

(New links regarding OpenAI are appearing online frequently, so I won’t try to link to the most interesting ones at the particular moment I’m writing this.  Use Google and investigate yourself if you wish 😉)

Obviously, on a general level, OpenAI is a super-impressive initiative.   I mean — a BILLION freakin’ dollars, for open-source AI, wow!!

So now we have an organization with a pile of money available and a mandate to support open-source AI, and a medium-term goal of AGI … and they seem fairly open-minded and flexible/adaptive about how to pursue their mandate, from what I can tell…

It seems their initial focus is on “typical 2015-style deep learning”, and that their board of advisors is initially strongly biased toward this particular flavor of AI.   So at first they are largely thinking about “big data / deep NN” type AI…   This should have some useful short-term consequences, such as, probably, the emergence of open-source computer vision tools that are truly competitive with commercial systems.

However, it is worth noting that they are planning on spending their billion $$ over a period of 10 yrs or more.

So — right now the OpenAI leadership is pumped about deep learning NN, in part because of recent successes with such algorithms by big companies.   But their perspective on AI is obviously broader than that.   If some other project — say, OpenCog — shows some exciting successes, for sure they will notice, and I would guess will be open to turning their staff in the direction of the successes — and potentially to funding external OSS teams that look exciting enough…

So, overall, from a general view obviously OpenAI is a Very Good Thing.

Open source and AI Safety

Also, I do find it heartening that the tech-industry gurus behind OpenAI have come to the realization that open-sourcing advanced AI is the best approach to maximizing practical “AI Safety.”    I haven’t always agreed with Elon Musk’s pronouncements on AI safety in the past, but I can respect that he has been seriously thinking through the issues, and this time I think he has come to the right conclusion…

I note that Joel Pitt and I wrote an article a few years ago, articulating the argument for open-source as the best practical path to AI safety.   Also, I recently wrote an essay pointing out the weaknesses in Nick Bostrom’s arguments for a secretive, closed, heavily-regulated approach to AGI development.   It seems the OpenAI founders basically agree and are putting their money where their mouth is.

OpenAI and OpenCog and other small OSS AI initiatives

Now, what about OpenAI and OpenCog, the open-source AGI project I co-founded in 2008 and have been helping nurse along ever since?

Well, these are very different animals.   First, OpenCog is aimed specifically and squarely at artificial general intelligence — so its mandate is narrower than that of OpenAI.   Second, and most critically, as well as aiming to offer a platform to assist broadly with AGI development, OpenCog is centered on a specific cognitive architecture (which has been called CogPrime), created based on decades of thinking and prototyping regarding advanced AGI.

That is, OpenCog is focused on a particular design for a thinking machine, whereas OpenAI is something broader — an initiative aimed at doing all sorts of awesome AI R&D in the open source.

From a purely OpenCog-centric point of view, the value of OpenAI would appear to lie mainly in its significant potential to smooth later phases of OpenCog development.

Right now OpenCog is in-my-biased-opinion-very-very-promising but still early-stage — it’s not very easy to use and (while there are some interesting back-end AI functionalities) we don’t have any great demos.   But let’s suppose we get beyond this point — as we’re pushing hard to do during the next year — and turn OpenCog into a system that’s a pleasure to work with, and does piles of transparently cool stuff.   If we get OpenCog to this stage — THEN at that point, it seems OpenAI would be a very plausible source to pile resources of multiple sorts into developing and applying and scaling-up OpenCog…

And of course, what holds for OpenCog also would hold for other early-stage non-commercial AI projects.   OpenAI, with a financial war-chest that is huge from an R&D perspective (though not so huge compared to say, a military budget or the cost of building a computer chip factory), holds out a potential path for any academic or OSS AI project to transition from the stage of “exciting demonstrated results” to the stage of “slick, scalable and big-time.”

Just as commercial AI startups can currently get acquired by Google or Facebook or IBM etc., in the future non-commercial AI projects may get boosted by involvement from OpenAI or other similar big-time OSS AI organizations.   The beauty of this avenue is, of course, that — unlike in the case of acquisition of a startup by a megacorporation — OpenAI jumping on board some OSS project won’t destroy the ability of the project founders to continue to work on the project and communicate their work freely.

Looking back 20 years from now, the greatest value of the Linux OS may be seen to be its role as an EXEMPLAR for open-source development — showing the world that OSS can get real stuff done, and thus opening the door for AI and other advanced software, hardware and wetware technologies to develop in an OSS manner.

Anyway, those are some of my first thoughts on OpenAI; I’ll be curious to see how things develop, and may write something more once more stuff happens… interesting times yadda yadda!! …

From DARPA to Toyota… so quickly… (reflection on the entrepreneurial state)

I’m reading The Entrepreneurial State (on my new Sony Digital Paper, which btw is far and away the most awesome e-reader created as of 2015…), which makes a pretty strong case that gov’t research funding, rather than VCs and startups, has been the primary engine of tech innovation….

Self-driving cars are not discussed in the book, but they seem to me an example of this — DARPA’s driving grand challenges seemed to rather quickly pave the way for Google and then pretty much every car company to jump into the self-driving car arena.   All of a sudden self-driving cars are the new common sense rather than a niche techno-futurist idea.    But it was US gov’t investment that mediated this leap, getting the tech to the point where companies felt it was mature enough for them to jump in.

So, beholding the recent DARPA robotics grand challenge, I wondered if it would have the same effect.   Would it spur companies to invest in follow-on technologies, taking up where the DARPA-funded entrants (largely universities, often with gov’t funding independent of DARPA) left off?

I didn’t have to wonder long, though.   Yesterday I saw news that Toyota is putting USD $1 billion into US-based AI research labs — and that this AI effort will be run by Gill Pratt, whose last job was at DARPA, running the robotics grand challenge.

The time-lag between the entrepreneurial state putting funding and focus on a certain idea or area, and big corporations taking over as R segues into D, grows shorter and shorter as the Singularity grows nearer…

(Whether we want the AGI revolution to be led by big companies is another question.  Obviously I’m pulling for a Linux-style open-source AGI, perhaps centered on OpenCog — AGI of, by and for the people … and the trans-people! ….  But that’s another story…)


Sociopaths flogging Aspergers …

The New York Times ran a pretty good article on the harsh, ruthlessly efficiency-driven work environment inside Amazon…


I take this as more evidence that the last gasp of the human-driven economy will consist of
  • A) scattered small groups of creative-minded maniacs spinning out new ideas
  • B) large, well-coordinated groups of overworked Aspergers people, ruled over and metaphorically flogged by overworked sociopaths [e.g. Amazon] — dealing with the tricky bits of the scaling-up and rolling-out of these new ideas

After the robots and AIs have taken over all the other jobs, during an interim period, these may still be left — and then a few years or at most a couple decades later, they’ll be obsoleted too …

Hopefully, during this interim period, some of the ideas being implemented by the Aspergers/sociopath armies will involve distributing some resources to everyone else who has already been obsoleted from the economy as such…
