Cognitive Neuroscience of Consciousness
At any given point in time, some research areas are stagnant, some are slowly waddling along, and some are zooming ahead. Unfortunately my own main field of AI is currently in the mostly-stagnant category, though I hope my work on Novamente will change all that within the next couple of years. On the other hand, molecular genetics (the domain of my work with my startup Biomind LLC) seems to be zooming ahead amazingly, and the same can be said for some aspects of neuroscience -- e.g. the subject of this post, the cognitive neuroscience of consciousness.
So, in this rather technical and academic blog entry (no details on my sex life or the psychotropic characteristics of Armenian swine toes today, sorry...), I'm going to talk about some interesting research in this field that I've been reading about lately....
Specifically, I'm going to briefly comment on a paper I just read, by a guy named Ned Block, called "Paradox and Cross-Purposes in Recent Work on Consciousness." The paper is in a book called "The Cognitive Neuroscience of Consciousness," which is a special issue of the journal COGNITION. Many of the other papers in the book are good too. This is one of two really good books I've recently read on this subject, the other being "Neural Correlates of Consciousness" edited by Thomas Metzinger (whose over-long tome Being No One, on the way the brain-mind constructs phenomenal selves, I also recommend).
One point raised repeatedly in the book is that the brain can often respond to stimuli in an unconscious yet useful way. Stimuli too weak to enter consciousness can nevertheless influence behavior, via priming and other mechanisms. For instance, if a person is shown a picture (the Müller-Lyer illusion) known to cause the human mind to mis-estimate line lengths, and is then asked to make a motor response based on the line lengths in the picture (say, pointing to the ends of the lines) VERY QUICKLY, they will respond based on the actual line lengths, unaffected by the illusion. But if they are given a little more time to respond, they will respond erroneously, falling prey to the illusion. The illusion happens somewhere between perception and cognition -- but this pathway is slow, and there are super-quick loops between perception and action that bypass cognition, with all its benefits and illusions.
Block, in his paper, raises the familiar point that the concept of "consciousness" is a bit of a mess, and he decomposes it into three subconcepts:
- phenomenality (which I've called "raw awareness")
- accessibility (that something is accessible throughout the brain/mind, not just in one localized region)
- reflectivity (that something can be used as content of another mental experience)
Block's view, roughly, is that:
- everything has some phenomenality ("the mind is aware of everything inside it", which to me is just a teeeeeensy step from the attractive panpsychist proposition "everything is aware")
- but only things that undergo a particular kind of neural/mental processing become reflective, and
- with reflectivity comes accessibility
Accessibility has to do with Baars' old-but-good notion of the "global workspace" -- the idea that reflective consciousness consists of representing knowledge in some kind of "workspace" where it can be freely manipulated in a variety of ways. This workspace appears not to be localized in any particular part of the brain, but rather to be a kind of coordinated activity among many different brain regions ... perhaps, in dynamical systems terms, some kind of "attractor."
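To make the workspace idea concrete, here is a toy sketch in Python. It is emphatically not Baars' actual model or anything from Block's paper -- the module names, activation values, and threshold are all invented for illustration. The point is just the dynamic: content competes, and whatever is intensely active enough "ignites" the workspace and gets broadcast system-wide, becoming globally accessible.

```python
class Module:
    """A specialized processor holding locally registered content.
    Names and activation values here are purely illustrative."""
    def __init__(self, name, activation):
        self.name = name
        self.activation = activation  # how intensely this content is active

    def propose(self):
        return {"source": self.name, "activation": self.activation}


class GlobalWorkspace:
    """Toy workspace: each cycle, the most intensely activated content
    wins the competition; if it clears a threshold it is broadcast to
    every module, i.e. it becomes globally accessible (and so available
    for reflective processing)."""
    def __init__(self, modules, threshold=0.5):
        self.modules = modules
        self.threshold = threshold

    def cycle(self):
        proposals = [m.propose() for m in self.modules]
        winner = max(proposals, key=lambda p: p["activation"])
        if winner["activation"] >= self.threshold:
            return winner   # broadcast: globally accessible
        return None         # too weak: stays local

# Example: intense visual content ignites the workspace; weak content doesn't.
ws = GlobalWorkspace([Module("visual", 0.9), Module("auditory", 0.3)])
ws.cycle()
```

Note that nothing here is localized in one module: "the workspace" is just a coordination pattern over all of them, which is roughly the attractor intuition.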
The experienced intensity of consciousness of something, Block proposes, has to do largely with the intensity of the phenomenality of the something, which may have to do with the amount of activation in the neural region where the "something" is taking place. But reflectivity requires something else besides just intensity (it requires the triggering of the global workspace attractor).
In terms of scientists' search for neural correlates of consciousness, Block reckons that what they're finding now are mainly neural correlates of intense phenomenality. For instance, when the ventral visual stream is highly active, this seems to indicate that some conscious perception is going on. But if reflectivity is a separate process over and above phenomenality, then finding neural correlates of phenomenality may not be any help in deducing the neural basis of reflectivity.
Block's ideas fit in pretty nicely with my hypothesis (see my essay Patterns of Awareness) that the phenomenality attached to a pattern has to do with the degree to which that pattern IS a pattern in the system that it's a pattern in. In this view, locally registered things can be patterns in the brain and ergo be phenomenal to an extent; but, expansion of something into the global workspace attractor is going to make it a lot more intense as a pattern, ergo more intensely phenomenal. Ergo in the human brain intense phenomenality and reflectivity seem to go along with each other -- since both are coupled to accessibility....
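One crude way to operationalize "the degree to which a pattern IS a pattern" is compressibility: pattern-rich data can be represented much more compactly than it is given, while noise cannot. The sketch below uses zlib compression as a stand-in metric -- this is my illustrative proxy, not the formalism from the Patterns of Awareness essay.

```python
import zlib

def pattern_intensity(data: bytes) -> float:
    """Crude compressibility proxy for pattern intensity: the fraction
    by which data shrinks under compression. Regular (pattern-rich)
    data compresses well and scores near 1; incompressible noise
    scores near 0."""
    if not data:
        return 0.0
    compressed = zlib.compress(data, 9)
    return max(0.0, 1.0 - len(compressed) / len(data))

# A highly repetitive byte string registers as an intense pattern;
# a non-repeating byte sequence registers as barely a pattern at all.
pattern_intensity(b"abab" * 250)    # close to 1.0
pattern_intensity(bytes(range(256)))  # close to 0.0
```

On the hypothesis above, broadcasting content into the global workspace would boost this kind of intensity measure, which is why intense phenomenality and reflectivity travel together.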
All this is still pretty far from a detailed understanding of how consciousness arises in human brains. But finally, it seems to me that neuroscientists are saying the right sorts of things and asking the right sorts of questions. The reason isn't that this generation of neuroscientists is wiser than the last, but rather that modern experimental tools (e.g. fMRI and others) have led to empirical data that make it impossible either to ignore the issue of consciousness, or to continue to hold to simplistic and traditional views.
No specific brain region or brain function or neurotransmitter or whatever will be found that causes raw awareness (Block's phenomenality). But the particular aspects associated with intense human awareness -- like global cognitive accessibility and reflectivity -- will in the next few years come to be clearly associated with particular brain structures and processes. As Block proposes, these will come to be viewed as ways of modulating and enhancing (rather than causing) basic phenomenal awareness. In AI terms, it will become clear how software systems can emulate these structures and processes -- which will help guide the AI community to creating reflective and highly intelligent AI systems, without directly addressing the philosophical issue of whether AIs can really experience phenomenality (a non-issue, in my view -- of course they can; every bloody particle does; but for me, as a panpsychist, the foundational philosophy of consciousness is a pretty boring and easy topic).
I don't find these ideas have much to add to the Novamente design -- I already took Baars' global workspace notions into account in the design of Webmind, Novamente's predecessor, way back in the dark ages when Java was slow and Dubya was just a nightmare and I still ate hamburgers. But they increase the plausibility of simple mappings between Novamente and the human mind/brain -- which is, as my uncle liked to say, significantly better than a kick in the ass.