Flogging Poor Searle Again
Someone emailed me recently about Searle's Chinese Room argument,
http://en.wikipedia.org/wiki/Chinese_room
a workhorse theme in the philosophy of AI that normally bores me to tears.
But though the Chinese room bores me, part of my reply to the guy's question wound up interesting me slightly, so I thought I'd repeat it here.
I won't recapitulate the Chinese room argument here; if you don't know it please follow the above link to Wikipedia.
The issue I'll raise here ties in with the question of whether recent theoretical developments regarding "AI with massive amounts of processing power" have any relevance to pragmatic AI.
As an example of this sort of theoretical research, check out:
http://www.hutter1.net/
which describes, among other things, an AI system called AIXI that uses an infinite amount of computational resources and achieves a level of intelligence greater than or equal to that of any other possible AI system. There are also approximations to AIXI, such as AIXItl, that use merely insane, rather than infinite, amounts of computational resources.
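To convey the flavor of the AIXItl idea (and only the flavor: what follows is a toy of my own invention, not Hutter's actual algorithm), here's a sketch of the brute-force pattern involved: enumerate every program up to some length l in a tiny made-up instruction set, run each for at most t steps, and keep whichever best predicts an observed bit sequence.

# Toy sketch only: brute-force search over all programs of length <= l in a
# made-up four-instruction language, each run for at most t steps. This is
# NOT AIXItl itself, just an illustration of the bounded-resources flavor.
from itertools import product

OPS = "01in"  # emit 0, emit 1, invert last observed bit, copy last observed bit

def run(program, history, t):
    """Predict the next bit after `history`, executing at most t instructions."""
    out, steps = 0, 0
    for op in program:
        if steps >= t:
            break
        steps += 1
        if op == "0":
            out = 0
        elif op == "1":
            out = 1
        elif op == "i":
            out = (1 - history[-1]) if history else 0
        elif op == "n":
            out = history[-1] if history else 0
    return out

def best_program(history, l=4, t=10):
    """Score every program of length <= l by how well it would have predicted
    each prefix of `history`, and return the best one found."""
    best, best_score = None, -1
    for length in range(1, l + 1):
        for program in product(OPS, repeat=length):
            score = sum(run(program, history[:i], t) == history[i]
                        for i in range(len(history)))
            if score > best_score:
                best, best_score = program, score
    return best

history = [0, 1, 0, 1, 0, 1]
p = best_program(history)
print("best toy program:", "".join(p), "-> next-bit guess:", run(p, history, 10))

Even this trivial language visits hundreds of programs; scale up the instruction set and the bounds l and t, and you get the "insanely much resources" character of AIXItl-style search.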
My feeling is that one should think about, not just
Intelligence = complexity of goals that a system can achieve
but also
Efficient intelligence = sum, over the goals a system can achieve, of (complexity of the goal) / (space and time resources required to achieve the goal)
According to these definitions, AIXI has zero efficient intelligence, and AIXItl has extremely low efficient intelligence. The challenge of AI in the real world lies in achieving efficient intelligence, not just raw intelligence.
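In code, the measure I have in mind looks something like this minimal sketch (the goals, complexity scores and resource costs below are made-up illustrative numbers, nothing more):

# Minimal sketch of the "efficient intelligence" measure defined above.
def efficient_intelligence(goals):
    """Sum, over the goals a system achieves, of goal complexity divided by
    the space and time resources the system needs to achieve that goal."""
    return sum(g["complexity"] / g["resources"] for g in goals if g["achieved"])

system_a = [  # modest goals achieved cheaply
    {"complexity": 5.0, "resources": 10.0, "achieved": True},
    {"complexity": 20.0, "resources": 100.0, "achieved": True},
]
system_b = [  # a far harder goal achieved at astronomical resource cost
    {"complexity": 1000.0, "resources": 1e30, "achieved": True},
]

print(efficient_intelligence(system_a))  # 0.7
print(efficient_intelligence(system_b))  # 1e-27: impressive goal, negligible efficiency

A system like AIXItl sits at the system_b extreme: whatever it achieves, the resource denominator swamps it.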
Also, according to these definitions, the Bekenstein bound (which caps how much information can be stored in a region of given size and energy) places a limit on the maximal efficient intelligence of any system in the physical universe.
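For concreteness, here's a quick back-of-envelope computation of the bound, I <= 2*pi*R*E/(hbar*c*ln 2) bits with E = m*c^2; the brain-ish mass and radius below are my own illustrative choices, not anything essential to the argument:

# Back-of-envelope Bekenstein bound: maximum bits storable in a sphere of
# given mass and radius. The example values are illustrative assumptions.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def bekenstein_bits(mass_kg, radius_m):
    """Maximum number of bits that can be stored in a sphere of the given
    mass and radius, according to the Bekenstein bound."""
    energy = mass_kg * c ** 2
    return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

print(f"{bekenstein_bits(1.5, 0.1):.2e} bits")  # roughly brain-sized, brain-massed system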
Now, back to the Chinese room (hmm, writing this blog post is making me hungry ... after I'm done typing it I'm going to head out for some Kung Pao chicken!!)....
A key point is: The scenario Searle describes is likely not physically possible, due to the unrealistically large size of the rulebook.
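To see why, here's a crude, purely illustrative count, treating the rulebook as a lookup table keyed on entire conversation histories; the vocabulary size, turn length and number of turns are all assumptions of mine:

# Crude illustration: a lookup-table rulebook indexed by whole conversations
# grows exponentially in conversation length. All numbers are assumptions.
VOCAB = 3000        # assumed working Chinese vocabulary
TURN_LENGTH = 10    # assumed words per conversational turn
TURNS = 5           # assumed number of turns the rulebook must cover

entries = (VOCAB ** TURN_LENGTH) ** TURNS
print(f"~10^{len(str(entries)) - 1} rulebook entries")

Even with these tame numbers the table needs well over 10^173 entries, wildly beyond anything the Bekenstein bound permits for a room-sized system.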
And even if Searle's scenario somehow turns out to be physically possible (e.g., maybe the Bekenstein bound is wrong due to currently unknown physics), it certainly involves systems totally unlike any we have ever encountered. Our terms like "intelligence" and "understanding" and "mind" were not created for dealing with massive-computational-resources systems of this nature.
The structures that we associate with intelligence (will, focused awareness, etc.) in a human context all come out of the need to do intelligent processing within modest space and time constraints.
So when someone says they feel like the {Searle+rulebook} system isn't really understanding Chinese, what they really mean (I argue) is: it isn't understanding Chinese according to the methods we are used to, which are methods adapted to deal with modest space and time resources.
This ties in with the relationship between intensity-of-consciousness and degree-of-intelligence.
(Note that I write about intensity of consciousness rather than presence of consciousness. I tend toward panpsychism but I do accept that "while all animals are conscious, some animals are more conscious than others" (to pervert Orwell). I have elaborated on this perspective considerably in my 2006 book The Hidden Pattern.)
In real life, the two often seem to be tied together, because the cognitive structures that correlate with intensity of consciousness are useful ones for achieving intelligent behaviors.
However, Searle's scenario is pathological in the sense that it posits a system displaying a high degree of intelligence in a capability (understanding Chinese) that is NOT accompanied by any intensity-of-consciousness.
But I suggest that this pathology is due to the unrealistically large amount of computing resources that the rulebook requires.
I.e., it is the finitude of resources that causes intelligence and intensity-of-consciousness to be correlated. The fact that this correlation breaks down in a pathological, physically impossible case requiring absurdly large resources doesn't mean too much...
What it means is that "understanding", as we understand it, has to do with structures and dynamics of mind that arise due to having to manifest efficient intelligence, not just intelligence.
That is really the moral of the Chinese room.