Sophia’s AI — June 2018 Brief Update

My main focus lately has been on the SingularityNET AI-meets-blockchain project, but I’ve also been putting a fair bit of effort into the intersection between OpenCog and Hanson Robotics, pushing the integration of the OpenCog cognitive architecture into the Sophia robot’s “Hanson AI” software back end to the next level. This doesn’t involve SingularityNET extensively yet, but it will soon: OpenCog connects to SingularityNET, so a more sophisticated integration of OpenCog with Sophia’s Hanson AI control software will make it easier to provide Sophia with additional intelligence components via interfaces to SingularityNET AI agents.

In the meantime, Sophia continues to generate a bunch of controversy in the media and in portions of the academic community, centered on issues such as “Should a robot that doesn’t yet have human-level general intelligence be granted citizenship?” (because Sophia was made a Saudi citizen last year) … or “What responsibility do Sophia’s creators have to correct the confusions of people who assume Sophia has human-level general intelligence even though she doesn’t yet?  Is it enough just to post clear information online, or is there a moral responsibility to act even more aggressively to clear up people’s misconceptions?” … and so forth.

These sorts of “controversial” questions are, frankly, not what most fascinates me about human-like robots such as Sophia. As a hard-core transhumanist I see these as somewhat peripheral transitional questions, which will seem interesting only during a relatively short period before AGIs become massively superhuman in intelligence and capability. I am more interested in the use of Sophia as a platform for general intelligence R&D, and, once Sophia or similar robots are in scalable commercial production, as a way of bringing beneficial general intelligence to the masses of humanity, in a way oriented toward making it easy for humans and robots/AIs to understand each other’s values and culture.

However, because people kept asking me about this stuff, last fall, right after the Sophia Saudi citizenship announcement came out, I wrote an article in H+ Magazine summarizing the software underlying Sophia as it existed at that time, and addressing a number of the other Sophia-related issues that seem to drive media attention and concern. One thing I describe there is the three different control systems we’ve historically used to operate Sophia:

  1. a purely script-based “timeline editor” (used for preprogrammed speeches, and occasionally for media interactions that come with pre-specified questions);
  2. a “sophisticated chat-bot” that chooses from a large palette of templatized responses based on context and a limited level of understanding (and that also sometimes gives a response grabbed from an online resource, or generated stochastically); and
  3. OpenCog, a sophisticated cognitive architecture created with AGI in mind, but still mostly in the R&D phase (though also being used for practical value in some domains such as biomedical informatics; see Mozi Health and a number of SingularityNET applications to be rolled out this fall).
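To make the second mode a little more concrete, here is a minimal sketch of how a templatized, context-aware response palette can work. All the rules, slot names and fallback lines below are hypothetical illustrations of the general technique; the actual Hanson AI chat-bot is far more elaborate.

```python
import random
import re

# Hypothetical rules: each maps a pattern over the user's utterance to a
# small palette of templatized responses with fillable slots.
RULES = [
    (re.compile(r"\byour name\b", re.I),
     ["I'm {name}.", "People call me {name}."]),
    (re.compile(r"\bhow are you\b", re.I),
     ["I'm feeling {mood} today.", "Pretty {mood}, thanks for asking."]),
]

# Stochastic fallbacks, used when no rule matches.
FALLBACKS = ["Interesting. Tell me more.", "Hmm, I'm not sure about that."]

def respond(utterance: str, context: dict) -> str:
    """Pick a templated response whose pattern matches the utterance,
    filling slots from the conversational context; otherwise fall back."""
    for pattern, palette in RULES:
        if pattern.search(utterance):
            return random.choice(palette).format(**context)
    return random.choice(FALLBACKS)

print(respond("What is your name?", {"name": "Sophia", "mood": "curious"}))
```

In a real system the “context” would be a richer dialogue state, and the fallback branch might pull a response from an online resource or a stochastic language model, as described above.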

The distinction between these three control systems was also made fairly clearly in a recent CNBC segment for which I was interviewed (though I look pretty ragged there: I did that video interview at 1 AM via Skype from home, slouched down in my desk chair half-asleep…).

Most public appearances of Sophia have utilized the first two systems. That said, David Hanson and I both tend to avoid the script-based approach, preferring to interact with Sophia publicly in a mode where we can’t predict what she’s going to say next (i.e., systems 2 or 3 above).

A couple examples of Hanson Robots controlled using OpenCog, back in 2016, are here:

Much of that H+ Magazine article is still accurate regarding the state of play today.   However, there has also been some progress since then.

For instance, in the original “Loving AI” pilot study we did last fall (see a sample video of a session from that study here), exploring the use of Sophia as a meditation guide for humans, we used a relatively simple chat-bot-type control script, which worked fine given the relatively narrow nature of what Sophia needed to do in those trials. For the next, larger round of studies regarding Sophia’s use as a meditation guide, currently underway at Sofia University in Palo Alto, we are using OpenCog as the control system. This is frankly not a highly sophisticated use of OpenCog, but it has allowed us to integrate perception, action and language more flexibly than was possible with the control system we used for the pilot.

As of the last few months, we are finally (after years of effort on multiple parts of the problem) able to use the OpenCog system as a stable, real-time control system for Sophia and the other human-scale Hanson robots.   In the Hanson AI Labs component of Hanson Robotics (formerly known as “MindCloud”), working closely with the SingularityNET AI team, we are crafting a “Hanson AI” robot control framework that incorporates OpenCog as a central control architecture, with deep neural networks and other tools assisting as needed in order to achieve sophisticated, whole-organism social and emotional humanoid robotics.
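The architectural idea here, a central symbolic controller with neural networks and other tools assisting as needed, can be sketched roughly as follows. Everything in this snippet (class names, the “vision” helper, the toy decision rule) is a hypothetical illustration of the pattern, not the actual Hanson AI code.

```python
from typing import Callable, Dict

class CentralController:
    """Toy stand-in for a central cognitive architecture that delegates
    specialized perception tasks to pluggable helper modules (in practice,
    these would be deep neural networks for vision, speech, etc.)."""

    def __init__(self) -> None:
        self.helpers: Dict[str, Callable[[dict], dict]] = {}

    def register_helper(self, name: str, fn: Callable[[dict], dict]) -> None:
        # Plug in a specialized module, e.g. a face-recognition network.
        self.helpers[name] = fn

    def step(self, sensory_input: dict) -> str:
        # Gather percepts from every registered helper module.
        percepts = {name: fn(sensory_input) for name, fn in self.helpers.items()}
        # Central decision logic: a stand-in for symbolic reasoning over
        # the combined percepts.
        if percepts.get("vision", {}).get("face_detected"):
            return "greet"
        return "idle"

controller = CentralController()
controller.register_helper("vision", lambda inp: {"face_detected": "face" in inp})
print(controller.step({"face": True}))  # prints "greet"
```

The design point is that the helpers are interchangeable components feeding a single locus of decision-making, rather than each module driving the robot directly.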

During the next year we will be progressively incorporating more and more of OpenCog’s learning and reasoning algorithms into this “Hanson AI” framework, along with various AI agents running on the SingularityNET decentralized AI-meets-blockchain platform. Along with more sophisticated use of PLN (Probabilistic Logic Networks) and better modeling of human-like emotional dynamics and their impact on cognition, we will also be incorporating cognition-driven stochastic language generation, using language models inferred by our novel unsupervised grammar induction algorithms. And so much more.

I expect that Sophia and the other Hanson robots will continue to generate some controversy — along with widespread passion and excitement.  But I also expect the nature of the controversy, passion and excitement to change quite a lot during the next couple years, as these wonderful R&D platforms help propel the Hanson-Robotics/OpenCog/SingularityNET research teams toward general intelligence.   The smarter these AIs and robots get, the more controversial things are likely to get — but this is also where the greatest benefit for humans and other sentient beings is going to lie.