AI for Theorem-Proving 2019

In the midst of a lot of activity oriented more toward SingularityNET and Singularity Studio, it’s a pleasure to be able to take a few days to focus on deep AI issues. I’m currently en route from Tokyo, where I spoke at the AI Expo and the Teamz blockchain events and helped give SingularityNET and OpenCog developer workshops (and appreciated the cherry blossoms with Ruiting and Qorxi and Zarathustra), to Austria for the 2019 AI for Theorem Proving conference, where I’ll be giving a keynote…

I am by no means an expert on the current cutting-edge research on applying AI to accelerate and improve automated theorem-proving, but hopefully in a few days my expertise will be slightly elevated. This is the area in which my oldest son Zarathustra is currently working on his PhD, in Prague under Josef Urban, whose work I’ve been following for years.

One thing I am somewhat of an expert on is the application of probabilistic logic together with other cognitive algorithms to commonsense reasoning … and one thing Nil Geisweiller and I have been working on in the OpenCog project (carried out in close collaboration with SingularityNET AI network development) is the use of meta-inference for commonsense reasoning … i.e. using reasoning, learning, and pattern analysis to study the commonsense inferences an AI system has made, so that it can learn to make better and better inferences.
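
To make the idea a bit more concrete, here is a minimal sketch of that meta-inference loop in Python. To be clear, this is not OpenCog/PLN code: the names (TraceStep, MetaInferenceGuide, the toy "Inheritance" context) are hypothetical, and the "pattern analysis" is reduced to simple success-rate statistics over recorded inference traces:

```python
# Conceptual sketch of meta-inference (not actual OpenCog code): record
# traces of rule applications, accumulate statistics over the successful
# ones, and use those statistics to bias future rule selection.
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class TraceStep:
    rule: str          # e.g. "deduction", "abduction"
    context: str       # coarse feature of the premises, e.g. link type
    succeeded: bool    # did this step lie on a successful inference path?

class MetaInferenceGuide:
    """Learns which inference rules tend to succeed in which contexts."""
    def __init__(self):
        self.success = defaultdict(Counter)  # context -> rule -> successes
        self.total = defaultdict(Counter)    # context -> rule -> attempts

    def record(self, step: TraceStep):
        self.total[step.context][step.rule] += 1
        if step.succeeded:
            self.success[step.context][step.rule] += 1

    def rule_score(self, context: str, rule: str) -> float:
        # Laplace-smoothed success rate, used as a prior on rule choice.
        s = self.success[context][rule]
        t = self.total[context][rule]
        return (s + 1) / (t + 2)

    def best_rule(self, context: str, candidate_rules):
        return max(candidate_rules, key=lambda r: self.rule_score(context, r))

# Usage: feed in traces from past inference episodes, then query.
guide = MetaInferenceGuide()
guide.record(TraceStep("deduction", "Inheritance", True))
guide.record(TraceStep("abduction", "Inheritance", False))
print(guide.best_rule("Inheritance", ["deduction", "abduction"]))
```

In the real setting the "pattern analysis" step is much richer than success-rate counting: one mines structural patterns across whole inference trees, which is itself a job for the system's learning and reasoning machinery.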

Nil and I have been working on this mainly for commonsense reasoning, and also a bit for inference about genomics data … and we have been looking at inference based on visual data as well, in connection with SingularityNET scientist Alexey Potapov and his team’s work on fusing deep neural nets with OpenCog symbolic reasoning for visual question answering. But I believe the directions we’re developing should also have value for accelerating and improving automated mathematical theorem-proving. And I have seen a number of ideas in the “AI for math theorem proving” literature that I believe can be helpful in the context of commonsense inference as well.

Of course, while getting AI systems to do effective commonsense reasoning has lately been higher on my research agenda than formalizing mathematical reasoning, math reasoning is also critical to the project of creating human-level and ultimately superhuman AGI. We need our AGIs to reason about their own code, and about the algorithms underlying that code.

Further, by representing math reasoning and commonsense reasoning in the same framework, and then doing meta-learning to analyze the patterns of AI inference in both domains, we may advance in both areas — by bringing patterns of commonsense intuition to bear on mathematical inference problems, and bringing more mathematical precision to bear on commonsense reasoning problems.

The keynote I’m going to give at AITP-19 once my flight lands isn’t going to stick too closely to my slides: I’m going to skip past most of the inference details and elaborate more on the broader context and motivations. However, in case it’s of interest to someone, here are the slides I’m going to use (and will largely, hastily click through while talking about related things 😉)…

https://goertzel.org/wp-content/uploads/2019/04/AITP-19-keynote-v1.pdf

At a certain point I will pause the slides and go through this good old example of PLN commonsense reasoning that Nil and Eddie Monroe and I prepared a few years ago…

https://github.com/opencog/opencog/blob/master/examples/pln/amusing-friend/Amusing_Friend_PLN_Demo.html
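
For anyone who doesn’t want to click through the demo, the flavor of what PLN chains together there can be suggested by its deduction rule. Below is a simplified Python sketch of the independence-based strength formula for PLN deduction — strength only; the actual demo also propagates confidence values, and the numbers here are toy values rather than anything taken from the demo:

```python
# Simplified illustration of PLN's independence-based deduction rule
# (strength component only). Given Inheritance A B <sAB> and
# Inheritance B C <sBC>, plus the term probabilities sB and sC,
# it estimates the strength of Inheritance A C.
def pln_deduction_strength(sAB: float, sBC: float, sB: float, sC: float) -> float:
    if sB >= 1.0:  # degenerate case: B covers everything, so A C inherits sC
        return sC
    return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

# Toy example with made-up strengths and term probabilities:
print(pln_deduction_strength(sAB=0.9, sBC=0.8, sB=0.2, sC=0.3))  # ~0.74
```

The demo chains many applications of rules like this one, alongside other PLN rules, to arrive at its commonsense conclusion.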

To get to human-level AGI and a benevolent technological Singularity and all that good stuff, a lot of different threads have to weave together. I believe that meta-learning for uncertain logical inference, spanning commonsense and mathematical reasoning, is going to be one of the keys.