|
The evolution of the Internet up till now can be divided, roughly speaking, into three phases:
Pre-Web. Direct, immediate interchange of small bits of text, via e-mail and Usenet. Indirect, delayed interchange of large amounts of text, visual images and computer programs, via ftp.
Web. Direct, immediate interchange of images, sounds, and large amounts of text. Online publishing of articles, books, art and music. Interchange of computer programs, via ftp, is still delayed, indirect, and architecture-dependent.
Network Computing. Direct, immediate interchange of animations and computer programs as well as large texts, images and sounds. Enabled by languages such as Java, the Internet becomes a real-time software resource. Intelligent agents traverse the web carrying out specialized intelligent operations for their owners.
The third phase, the Network Computing phase, is still in a relatively early stage of development, driven by the dissemination and development of the Java programming language. However, there is an emerging consensus across the computer industry as to what the ultimate outcome of this phase will be. For many applications, people will be able to run small software "applets" from Web pages, instead of running large, multipurpose programs based on their own computers' hard disk drives. The general-purpose search engines of the Web phase will evolve into more specialized and intelligent individualized Web exploration agents. In short, the Web will be transformed from a "global book" into a massively parallel, self-organizing software program of unprecedented size and complexity.
But, exciting as the Network Computing phase is, it should not be considered as the end-point of Web evolution. I believe it is important to look at least one step further. What comes after Network Computing, I propose, is the autopoietic, emergently structured and emergently intelligent Web -- or, to put it metaphorically, the World Wide Brain. The Network Computing environment is a community of programs, texts, images, sounds and intelligent agents, interacting and serving their own ends. The World Wide Brain is what happens when the diverse population making up the Active Web locks into a global attractor, displaying emergent memory and thought-oriented structures and dynamics not programmed into any particular part of the Web. Traditional ideas from psychology or computer science are of only peripheral use in understanding and engineering a World Wide Brain. As we shall see in the following chapters, however, ideas from complex systems science are considerably more relevant.
It may seem hasty to be talking about a fourth phase of Web evolution -- a World Wide Brain -- when the third phase, network computing, has only just begun, and even the second phase, the Web itself, has not yet reached maturity. But if any one quality characterizes the rise of the Web, it is rapidity. The Web took only a few years to dominate the Internet; and Java, piggybacking on the spread of the Web, has spread more quickly than any programming language in history. Thus, it seems reasonable to expect the fourth phase of Internet evolution to come upon us rather rapidly, certainly within a matter of years rather than decades.
The history of networks as a computing paradigm is at first glance simple. There were mainframes and terminals, then PC's and local-area networks ... and now large-scale, integrated network computing environments, providing the benefits of both mainframes and PC's and new benefits besides. This is a true story, but far from the whole story. This view underestimates the radical nature of the network computing paradigm. What network computing truly represents is a return to the cybernetic, self-organization-oriented origins of computer science. It goes a long way toward correcting the fundamental error committed in the 1940's and 1950's, when the world decided to go with a serial, von-Neumann style computer architecture, to the almost total exclusion of more parallel, distributed, brain-like architectures.
The move to network computing is not merely a matter of evolving engineering solutions, it is also a matter of changing visions of computational intelligence. Mainframes and PC's mesh naturally with the symbolic, logic-based approach to intelligence; network computing environments, on the other hand, mesh with a view of the mind as a network of intercommunicating, intercreating processes. The important point is that the latter view of intelligence is the correct one. From computing frameworks supporting simplistic and fundamentally inadequate models of intelligence, one is suddenly moving to a computing framework supporting the real structures and dynamics of mind.
For mind and brain are fundamentally network-based. The mind, viewed system-theoretically, is far more like a network computing system than like a mainframe-based or PC-based system. It is not based on a central system that services dumb peripheral client systems, nor is it based on a huge host of small, independent, barely communicating systems. Instead it is a large, heterogeneous collection of systems, some of which service smart peripheral systems, all of which are intensely involved in inter-communication. In short, by moving to a network computing framework, we are automatically supplying our computer systems with many elements of the structure and dynamics of mind.
This does not mean that network computer systems will necessarily be intelligent. But it suggests that they will inherently be more intelligent than their mainframe-based or PC-based counterparts. And it suggests that researchers and developers concerned with implementing AI systems will do far better if they work with the network computing environment in mind. The network computing environment, supplied with an appropriate operating system, can do half their job for them -- allowing them to focus on the other half, which is inter-networking intelligent agents in such a way as to give rise to the large-scale emergent structures of mind. Far more than any previous development in hardware, network computing gives us real hope that the dream of artificial intelligence might be turned into a reality.
The history of computing in the late 20th century is familiar, but bears repeating and re-interpretation from a cybernetic, network-centric point of view.
As everyone knows, over the past twenty years, we have seen the mainframes and terminals of early computing replaced with personal computers. With as much memory and processing power as early mainframes, and far better interfaces, PC's have opened up computing to small businesses, homes and schools. But now, in the late 90's, we are seeing a move away from PC's. The PC, many computer pundits say, is on its way out, to be replaced with smart terminals called "network computers" or NC's, which work by downloading their programs from central servers.
An NC lacks a floppy drive; word processing and spreadsheet files, like software, are stored centrally. Instead of everyone having their own copy of, say, a huge word processor like Microsoft Word, the word processor can reside on the central server, and individuals can download those parts of the word processor that they need. Routine computing is done by the NC; particularly intensive tasks can be taken over by the central computer.
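To make the idea concrete, here is a minimal Java sketch of what "downloading those parts of the word processor that you need" might look like from the NC's side. The server address and the SpellChecker module are entirely hypothetical; the point is only that the client fetches a single component over the network, on demand, rather than installing the whole application locally.

import java.net.URL;
import java.net.URLClassLoader;

// Sketch only: load one module of a larger application from a central server's
// code base, rather than from a local hard drive. The URL and class name are made up.
public class ComponentLoader {
    public static void main(String[] args) throws Exception {
        URL serverCodeBase = new URL("http://central-server.example.com/wordproc/");
        try (URLClassLoader loader = new URLClassLoader(new URL[] { serverCodeBase })) {
            // Download and load only the spell-checking module, on demand.
            Class<?> spellChecker = loader.loadClass("com.example.wordproc.SpellChecker");
            // (Assuming, for this sketch, that the module implements Runnable.)
            Runnable module = (Runnable) spellChecker.getDeclaredConstructor().newInstance();
            module.run();  // the routine computing itself happens locally, on the NC
        }
    }
}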
At the present time NC's are not a viable solution for home computer users, as the bandwidth of modem or even ISDN connections is not sufficient for rapid transmission of complex computer programs. For businesses, however, there are many advantages to the network computing approach. Internal bandwidth, as exploited by local-area networks, is often already high, and by providing employees with NC's instead of PC's, an employer saves on maintenance and gains in control.
And once cable modem or equivalently powerful technology becomes common, NC's will be equally viable for the home user. The individual PC, with its limited array of software, will quickly come to seem sterile and limiting. Why pay a lot of money for a new game which you may not even like, when you can simply download a game in half a minute, and pay for it on an as-you-play basis? When it is almost as fast to download software from the Net as it is to extract it from the hard drive of a PC, the advantages of PC's over NC's will quickly become negative. The entire network becomes a kind of virtual hard drive, from which programs or pieces thereof can be extracted on command.
These developments are widely perceived as ironic. After all, the move to PC's was greeted with relief by almost everyone -- everyone but IBM and the other large corporations who made their money from the mainframe market. What a relief it was to be able to get a print-out on your desk, instead of waiting two hours and then running down to the basement printer room. How wonderful to be able to individualize one's computing environment. How exciting to compute at home. What is ironic is that, in an NC context, these mainframes -- or newer machines of even greater "grunt power" -- are becoming fashionable again.
The catch is, of course, that the context is different. In a modern network computing environment, consisting of a central server and a large group of peripheral NC's, no one has to wait two hours for their print-out to come out in the basement. No one has to use a line printer interface, or even a text interface on a screen. High-bandwidth communications conspire with advances in hardware, operating system and graphical interface design to make the modern network computing environment something entirely new: something incorporating the advantages of the old mainframe approach and the advantages of PC's, along with other advantages that have no precedent.
The grunt is there, the raw processing power of the central server; and so is the efficiency of storing only one central copy of each program. But the convenience of PC's is there too, provided by the modern possibility of spinning off new copies of centrally-stored programs on the fly, and downloading them into super-smart terminals -- terminals so smart that they are no longer merely terminals but network computers.
From the point of view of the NC user, the network is like a PC with a giant hard drive. The fact that this hard drive happens to be shared by other people is barely even relevant. From the point of view of the central server itself, on the other hand, the network is almost like an old-style mainframe system, the only difference lying in the semantics of the input/output messages sent and received, and in the speed with which these messages have to be sent. Instead of just exchanging verbal or numerical messages with dumb terminals operated by humans, the central server is now exchanging active programs with less powerful but still quite capable satellite computers.
Finally, what is entirely new in modern network computing is the interconnection of various central servers into their own large-scale network. This occurs on the Internet, and it also occurs internally within large organizations. This is important because it means that no single server has to hold all the information potentially required by its client NC's. The server is responsible for feeding information to its NC's, but it may in some cases derive this information from elsewhere, and simply serve as a "middleman." In other words, a higher-level network of information is overlaid on a lower-level network of control. This opens up the scope of information and applications available to computer users, to a degree never before seen.
All this is exciting, tremendously exciting, but to computer industry insiders, it is becoming almost hackneyed before it has even come to pass. "The Network is the Computer" is a late 90's mantra and marketing slogan. And it is accurate; the network should be, and increasingly really is, the computer. What is now inside the PC -- memory, processing, information -- will in the network computing environment be stored all over the place. The overall computation process is distributed rather than centralized, even for basic operations like word processing or spreadsheeting.
What is not observed very often, however, is the relation between network computing and the mind. As it turns out, the mainframe approach to computing and the PC approach to computing embody two different incorrect theories of intelligence. The network computing approach embodies a more sophisticated approach to intelligence, which, although still simplistic compared to the structure of the human brain, is fundamentally correct according to principles of cybernetics and cognitive science. As we move toward a worldwide network computing environment, we are automatically moving toward computational intelligence, merely by virtue of the structure of our computer systems, of the logic by which our computer systems exchange and process information. In other words, not only is the network the computer; it is the mind as well. This leads to the new, improved slogan with which I have titled this chapter: The network is the computer is the mind.
What does this slogan mean? It stands for a series of statements, of increasing daring. At the most conservative end, one may say that the network computing environment provides an ideal context within which to program true artificial intelligence. At the most radical end, on the other hand, one might plausibly argue that the widespread adoption of network computing will lead automatically to the emergence of computational intelligence. The truth almost certainly lies somewhere between these two extremes. Network computing provides a certain degree of intelligence on its own, and it provides a natural substrate within which to implement programs that display yet greater degrees of intelligence. The network is the computer is the mind.
It's a pleasant, familiar story: mainframes to PC's to network computer systems. The whole thing has the inevitability of history about it. In fact, though, there is more to these developments than is commonly discussed. Network computing is, I believe, an inevitable occurrence -- but only in the abstract sense of "computing with self-organizing networks of intercommunicating processes." Networks are the best way to do artificial intelligence, and they are also the best way to solve a huge variety of other problems in computing. The particular form that network computing is assuming at the present time, however -- computing with networks of interconnected serial digital computers -- is merely a consequence of the evolutionary path that computer hardware has taken over the past half-century. This evolutionary path toward network computing has in fact been highly ironic. Of all the directions it could possibly have taken, computer technology has taken the one most antithetical to artificial intelligence and self-organizing network dynamics. Even so, however, the "network" archetype has proved irresistible, and is emerging in an unforeseen way, out of its arch-enemy, the standard serial, digital computer architecture.
During the last century, as we have seen, there have emerged two sharply divergent approaches to the problem of artificial intelligence: the cybernetic approach, based on emulating the brain and its complex processes of self-organization, and the symbolic approach, based on emulating the logical and linguistic operations of the conscious mind.
Neither of these approaches is perfect. It has become increasingly clear in recent years, however, that whereas the symbolic approach is essentially sterile, the cybernetic approach is fertile. The reason for this is quite simple: the mind/brain really is a large, specially-structured, self-organizing network of processes; it really is not a rule-based logical reasoning system.
We do, of course, carry out behaviors that can be described as logical reasoning -- but what we are doing in these cases, while it can be described in terms of rule-following to some degree of accuracy, is not really rule-following. Ignoring the self-organizing dynamics underlying apparent rule-following -- taking logical reasoning out of the context of intelligently perceived real-world situations, and out of the context of illogical, unconscious mental dynamics -- results in an endless variety of philosophical paradoxes and practical obstacles. The result is that symbolic AI systems are inflexible and uncreative -- not much more "intelligent," really, than a program like Mathematica which does difficult algebra and calculus problems by applying subtle mathematical methods.
The cybernetic approach to AI, for all its flaws, has a far clearer path to success. Building larger and larger, more and more intelligently structured self-organizing networks, we can gradually build up to more and more mind-like structures. Neural networks are one of the more popular implementations of the cybernetic approach, but not the only one. One can also think in terms of genetic algorithms, or more abstract systems of "software agents" -- the point is the emphasis on creative, complex network dynamics, rather than deterministic interactions between systems of logical, rational rules.
In the beginning, back in the 1940's and 50's, these two different approaches to AI were tied in with different approaches to computer hardware design: parallel, distributed analog design versus serial, digital design. Each theory of "computational mind" matched up naturally with a certain approach to "computational brain."
In the parallel, distributed, analog computer, many things happen at each point in time, at many different points in space. Memory is stored all over, and problem-solving is done all over. Memory is dynamic and is not fundamentally distinct from input/output and processing. Furthermore, the basic units of information are not all-or-nothing switches, but rather continuous signals, with values that range over some interval of real numbers. This is how things happen in the brain: the brain is an analog system, with billions of things occurring simultaneously, and with all its different processes occurring in an intricately interconnected way.
On the other hand, in the serial, digital computer, commonly known as the von Neumann architecture, there is a central processor which does one thing at each time, and there is a separate, inert memory to which the central processor refers. On a hardware level, the von Neumann design won for practical engineering reasons (and not for philosophical reasons: von Neumann himself was a champion of neural-net-like models of the mind/brain). By now it is ingrained in the hardware and software industries, just as thoroughly as, say, internal combustion engines are ingrained in the automobile industry. Most likely, every computer you have ever seen or heard of has been made according to the von Neumann methodology.
The cybernetic approach to artificial intelligence, however, has survived the dominance of the von Neumann architecture, moving to a methodology based primarily on serial digital simulations of parallel distributed analog systems. The fact that cybernetic AI has survived even in a totally hostile computing hardware environment is a tribute to the fundamental soundness of its underlying ideas. One doubts very much, on the other hand, whether symbolic AI would have ever become dominant or even significant in a computing environment dominated by neural net hardware. In such an environment, there would have been strong pressure to ground symbolic representation in underlying network dynamics. The whole project of computing with logic, symbolism and language in a formal, disembodied way, might never have gotten started.
Today, many of us feel that the choice of the von Neumann architecture may have been a mistake -- that computing would be far better off if we had settled on a more brain-like, cybernetics-inspired hardware model back in the 1940's. The initial engineering problems might have been greater, but they could have been overcome with moderate effort, and the billions of dollars spent on computer R&D in the past decades would have been spent on brainlike computers rather than on the relatively sterile, digital serial machines we have today. In practice, however, no alternate approach to computer hardware has yet come close to the success of the von Neumann design. All attempts to break the von Neumann hegemony have met with embarrassing defeat.
Numerous parallel-processing digital computers have been constructed, from the restricted and not very brainlike "vector processors" inside Cray supercomputers, to the more flexible and AI-oriented "massively parallel" Connection Machines manufactured by Thinking Machines, Inc. The Cray machines can do many things at each time step, but they all must be of the same nature. This approach is called SIMD, "single instruction, multiple data": it is efficient for scientific computation, and some simple neural network models, but not for sophisticated AI applications. The Thinking Machines computers, on the other hand, consist of truly independent processors, each of which can do its own thing at each time, using its own memory and exchanging information with other processors at its leisure. This is MIMD, "multiple instruction, multiple data"; it is far more reminiscent of brain structure. The brain, at each time, has billions of "instructions" and billions of "data sets"!
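The distinction can be made concrete with an ordinary program -- a rough software analogy only, since real SIMD and MIMD machines realize it in hardware. In the sketch below (written in Java purely for illustration), the first part applies a single instruction across a whole data set, while the second sets several independent threads loose, each running its own instructions on its own data.

import java.util.Arrays;

// Illustrative only: SIMD and MIMD are hardware designs; this just mimics their flavor in software.
public class SimdVsMimd {
    public static void main(String[] args) throws InterruptedException {
        // SIMD in spirit: one instruction ("multiply by two") marched across a whole data set.
        double[] data = { 1.0, 2.0, 3.0, 4.0 };
        double[] doubled = Arrays.stream(data).map(x -> x * 2.0).toArray();
        System.out.println(Arrays.toString(doubled));

        // MIMD in spirit: independent processors, each doing its own thing on its own data,
        // communicating only when and how they choose.
        Thread parser = new Thread(() -> System.out.println("parsing its own input"));
        Thread planner = new Thread(() -> System.out.println("searching its own plan space"));
        parser.start();
        planner.start();
        parser.join();
        planner.join();
    }
}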
These parallel digital machines are exciting, but, for a combination of technical and economic reasons, they have not proved as cost-effective as networks of von Neumann computers. They are used almost exclusively for academic, military and financial research, and even their value in these domains has been dubious. Thinking Machines Inc. has gone bankrupt, and is trying to re-invent itself as a software company; their flagship product, GlobalWorks, is a piece of low-level software that allows networks of Sun workstations to behave as if they were Connection Machines (Sun workstations are high-end engineering computers, running the Unix operating system and implementing, like all other standard contemporary machines, the serial von Neumann model).
With GlobalWorks, all the software tools developed for use with the Connection Machines can now be used in a network computing environment instead. There is a serious loss of efficiency here: instead of a network of processors hard-wired together inside a single machine, one is dealing with a network of processors wired together by long cables, communicating through complex software protocols. However, the economies of scale involved in manufacturing engineering workstations mean that it is actually more cost-effective to use the network approach rather than the parallel-machine approach, even though the latter is better from a pure engineering point of view.
Of even greater interest than the massively parallel digital Connection Machine is the analog, neural net based hardware being produced by several contemporary firms -- radical, non-binary computer hardware that is parallel and distributed in nature, mixing up multiple streams of memory, input/output and processing at every step of time. For instance, the Australian company Formulab Neuronetics, founded in the mid-80's by industrial psychologist Tony Richter, manufactures analog neural network hardware modeled fairly closely on brain structure. The Neuronetics design makes the Connection Machine seem positively conservative. Eschewing traditional computer engineering altogether, it is a hexagonal lattice of "neuronal cells," each one exchanging information with its neighbors. There are perceptual neurons, action neurons, and cognitive neurons, each with its own particular properties, and with a connection structure loosely modeled on brain structure. This technology has proved itself in a variety of process control applications, such as voice mail systems and internal automotive computers, but it has not yet made a splash in the mainstream computer industry. By relying on process control applications for their bread and butter, Neuronetics will hopefully avoid the fate of Thinking Machines. But the ultimate ambition of the company is the same: to build an ultra-high-end supercomputer that, by virtue of its size and its brainlike structure, will achieve unprecedented feats of intelligence.
As of now, this kind of neural net hardware is merely a specialty product. But I suspect that, as PC's fade into history, these analog machines will come to play a larger and larger role in the world. In the short run, we might see special-purpose analog hardware used in the central servers of computer networks, to help deal with the task of distributing information amongst various elements of a network computing environment. In the long run, one might see neurocomputers joining digital computers in the worldwide computer network, each contributing their own particular talents to the overall knowledge and processing pool.
The history of AI and computer hardware up till now, then, is a somewhat sad one, with an ironic and optimistic twist at the end. The dominant von Neumann architecture is patently ill-suited for artificial intelligence. Whether it is truly superior from the point of view of practical engineering is difficult to say, because of the vast amount of intelligence and resources that has been devoted to it, as compared to its competitors. But it has incredible momentum -- it has economies of scale on its side, and it has whole industries, with massive collective brainpower, devoted to making it work better and better. The result of this momentum is that alternate, more cybernetically sophisticated and AI-friendly visions of computing are systematically squelched. The Connection Machine was abandoned, and the Neuronetics hardware is being forced to earn its keep in process control. This is the sad part. As usual in engineering, science, politics, and other human endeavors, once a certain point of view has achieved dominance, it is terribly difficult for anything else to gain a foothold.
The ironic and possibly optimistic part, however, comes now and in the near future. Until now, brainlike parallel architectures have been squelched by serial von Neumann machines -- but the trend toward network computing is an unexpected and unintentional reversal of this pattern. Network computing is boldly cybernetic -- it is brainlike computer architecture emerging out of von Neumann computer architecture. It embodies a basic principle of Oriental martial arts: when your enemy swings at you, don't block him, but rather position yourself in such a way that his own force causes him to flip over.
The real lesson, on a philosophical level, may be that the structure of brain and intelligence is irresistible for computing. We took a turn away from it way back in the 1940's, rightly or wrongly, but now we are returning to it in a subtle and unforeseen way. The way to do artificial intelligence and other sophisticated computing tasks is with self-organizing networks of intercommunicating processes -- and so, having settled on computer hardware solutions that do not embody self-organization and intercommunication, we are impelled to link our computers together into networks that do.
Another way to look at these issues is to observe that, historically, the largest obstacle to progress in AI has always been scale. Put simply, our best computers are nowhere near as powerful as a chicken's brain, let alone a human brain. One is always implementing AI programs on computers that, in spite of special-purpose competencies, are overall far less computationally able than one really needs them to be. As a consequence, one is always presenting one's AI systems with problems that are far, far simpler than those confronting human beings in the course of ordinary life. When an AI project succeeds, there is always the question of whether the methods used will "scale up" to problems of more realistic scope. And when an AI project fails, there is always the question of whether it would have succeeded, if only implemented on a more realistic scale. In fact, one may argue on solid mathematical grounds that intelligent systems should be subject to "threshold effects," whereby processes that are inefficient in systems below a certain size threshold become vastly more efficient once the size threshold is passed.
Some rough numerical estimates may be useful. The brain has somewhere in the vicinity of 100 billion to 10 trillion neurons, each one of which is itself a complex dynamical system. There is as yet no consensus on how much of the internal dynamics of the neuron is psychologically relevant. Accurate, real-time models of the single neuron are somewhat computationally intensive, requiring about the computational power of a low-end Unix workstation. On the other hand, a standard "formal neural-network" model of the neuron as a logic gate, or simple nonlinear recursion, is far less intensive. A typical workstation can simulate a network of hundreds of formal neurons, evolving at a reasonable rate.
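A back-of-the-envelope calculation, using only the rough figures just quoted, makes the scale of the gap explicit (the 500-neurons-per-workstation figure below is simply a stand-in for "hundreds"):

// Rough estimate only: how many ordinary workstations would a brain-scale
// network of formal neurons require, using the figures quoted above?
public class BrainScaleEstimate {
    public static void main(String[] args) {
        double neuronsInBrain = 1.0e11;       // low end of the estimate above
        double neuronsPerWorkstation = 500;   // "hundreds of formal neurons" per machine
        System.out.printf("Workstations needed: about %.0e%n",
                          neuronsInBrain / neuronsPerWorkstation);
        // => on the order of two hundred million workstations, for the crudest neuron model
    }
}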
Clearly, whatever the cognitive status of the internal processes of the neuron, no single computer that exists today can come anywhere near to emulating the computational power of the human brain. One can imagine building a tremendous supercomputer that would approximate this goal. However, recent history teaches that such efforts are plagued with problems. A simple example will illustrate this point. Suppose one sets out, in 1995, to build a massively parallel AI machine by wiring together 100,000 top-of-the-line chips. Suppose the process of design, construction, testing and debugging takes three years. Then, given the current rate of improvement of computer chip technology (speed doubles around every eighteen months), by the time one has finished building one's machine in 1998, its computational power will be the equivalent of only 25,000 top-of-the-line chips. By 2001, the figure will be down to around 6,250.
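The arithmetic behind these figures is just steady exponential loss of ground against the state of the art, under the stated assumption of a doubling every eighteen months:

// Effective value of a fixed bank of 100,000 chips, measured in
// "current top-of-the-line chip" equivalents, assuming speed doubles every 18 months.
public class ObsolescenceEstimate {
    public static void main(String[] args) {
        double chips = 100000;
        double doublingPeriodYears = 1.5;
        for (int year : new int[] { 1995, 1998, 2001 }) {
            double equivalent = chips / Math.pow(2.0, (year - 1995) / doublingPeriodYears);
            System.out.printf("%d: equivalent to about %.0f current chips%n", year, equivalent);
        }
        // => 1995: 100,000; 1998: 25,000; 2001: 6,250
    }
}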
Instead of building a supercomputer that is guaranteed to be obsolete by the time it is constructed, it makes more sense to utilize an architecture which allows the continuous incorporation of technology improvements. One requires a highly flexible computer architecture, which allows continual upgrading of components, and relatively trouble-free incorporation of new components, which may be constructed according to entirely new designs. Such an architecture may seem too much to ask for, but the fact is that it already exists, at least in potential form. The Web has the potential to transform the world's collective computer power into a massive, distributed AI supercomputer.
Once one steps beyond the single-machine, single-program paradigm, and views the whole Web as a network of applets, able to be interconnected in various ways, it becomes clear that, in fact, the Web itself is an outstanding AI supercomputer. Each Web page, equipped with Java code or something similar, is potentially a "neuron" in a world-wide brain. Each link between one Web page and another is potentially a "synaptic link" between two neurons. The neuron-and-synapse metaphor need not be taken too literally; a more appropriate metaphor for the role of a Web page in the Web might be the neuronal group (Edelman, 1988). But the point is that Java, in principle, opens up the possibility for the Web to act as a dynamic, distributed cognitive system. The Web presents an unprecedentedly powerful environment for the construction of large-scale intelligent systems. As the Web expands, it will allow us to implement a more and more intelligent World Wide Brain, leading quite plausibly, in the not-too-distant future, to a global AI brain exceeding the human brain in raw computational power.
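As a toy illustration of the metaphor -- and nothing more than that; no claim is being made that a real World Wide Brain would be engineered this way -- one can picture each page as a unit holding an activation value, and each hyperlink as a weighted connection along which activation spreads. The pages, weights and update rule in the following Java sketch are all invented for the example:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Pages as neuron-like units, hyperlinks as weighted "synapses", and one step
// of a simple spreading-activation dynamic over the resulting network.
public class WebAsNetwork {
    static class Page {
        final String url;
        double activation;
        final Map<Page, Double> links = new HashMap<>();  // outgoing links and their weights
        Page(String url, double activation) { this.url = url; this.activation = activation; }
    }

    // Each page's new activation is a squashed, weighted sum of the activations
    // of the pages that link to it.
    static void step(List<Page> pages) {
        Map<Page, Double> incoming = new HashMap<>();
        for (Page p : pages)
            p.links.forEach((target, weight) ->
                incoming.merge(target, weight * p.activation, Double::sum));
        for (Page p : pages)
            p.activation = Math.tanh(incoming.getOrDefault(p, 0.0));
    }

    public static void main(String[] args) {
        Page a = new Page("http://example.com/a", 1.0);  // hypothetical pages
        Page b = new Page("http://example.com/b", 0.0);
        a.links.put(b, 0.8);                             // a "synaptic link" from a to b
        List<Page> web = new ArrayList<>(List.of(a, b));
        step(web);
        System.out.println(b.url + " now has activation " + b.activation);
    }
}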
What is the relation between computer system designs and approaches to artificial intelligence? We have already observed that all the mainstream computing designs of the last half-century, from mainframes to PC's to network computing, support symbolic AI over cybernetic AI: they are serial and digital rather than analog and parallel. But there is more to the story than this. On a more detailed level than the symbolic/cybernetic dichotomy, different computer architectures embody different visions of computational intelligence, different models of mind.
The vision of mind suggested by the mainframe computer is quite different from the vision of mind suggested by the PC, which is in turn quite different from the vision of mind suggested by the modern network computing system. All the different digital computing frameworks are "universal computers," so that one can, in principle, run any kind of AI algorithm on a mainframe, a PC or a network computing system. A computing system can be used to run AI algorithms embodying a philosophy of intelligence entirely different from the philosophy of intelligence embodied in its structure -- this is what happens when, for example, neural networks are simulated on PC's or mainframes. But when one does this, when one works with AI algorithms that are "conceptually mismatched" to their hardware, one pays a price. One loses efficiency, ease of experimentation, and conceptual integrity.
When one's algorithm conceptually matches one's hardware, one finds that "following the hardware" leads to all sorts of new twists and innovations. When there is a mismatch, on the other hand, the suggestions implicitly made by the hardware will more often than not be wrong, or at least will push one in directions alien to the ideas that one started out with. This may be seen in many of the recent innovations to neural network AI, derived from experimenting with neural nets via simulations on serial digital machines. The neural net learning algorithms that work best in serial digital simulations may not be the same ones that would work best in a more neural-net oriented, parallel analog hardware setting. In fact, many recent neural network techniques make no sense whatsoever as massively parallel, self-organizing analog algorithms: they are merely neural-net-inspired serial computing algorithms, consequences of the adaptation of neural networks to the unnatural environment of digital serial machines. This kind of development, spurred on directly by hardware/algorithm mismatch, is useful from a short-term perspective, but it does not add much to the long-term understanding of intelligence and its computer implementation. If experimentation were taking place on Neuronetics or similar technology, most of the recent neural net algorithms would not exist; instead there would be other new approaches, ones fitting in better with parallel analog computing and hence with the underlying vision of the neural network approach.
A good analogy to this hardware-software synergy in AI may be found in music. The hardware is the instrument and the software is the song played on the instrument. One can play any song on any instrument, but some songs go more naturally with some instruments. Playing a punk rock song on a guitar, one finds that the "hardware" naturally enhances the "software" -- playing around with the song on the guitar, one is led to different timings, slight modifications here and there, maybe even new, related melodies. On the other hand, playing the same song on a violin, one can hit all the notes perfectly accurately, but one may find that one has turned something inherently very simple into something complex and awkward to play -- "efficiency" has degraded. Also, one will find that the physically natural variations and improvisations on the melody no longer sound so musically natural -- they are no longer punk tunes, because the violin is not a punk instrument. Ease of experimentation has decreased.
Orchestration, the mapping of complex compositions onto sets of instruments, is essential in classical music, and it is even subtler in jazz, because many of the instruments must improvise around the melodies and rhythms of the song, and their improvisations must be simultaneously physically logical in terms of the structure of the instrument and musically logical in the context of the song as a whole. Designing a complex AI system on a complex system of computer hardware is a somewhat similar process, the difference being that one is orchestrating, not human beings producing sounds, but mechanical devices producing various types of information, and monitored and modified by human beings.
The mainframe computer, with its central control unit and its dumb peripherals, embodies a simple and powerful model of mind: mind as a monolithic cognitive center, carrying out thinking and reasoning processes, collecting raw data from unintelligent peripheral processes and providing raw data to unintelligent motor processes. This is the kind of AI program that is most naturally programmed on mainframes. Of course, one could always deviate from this implicit suggestion of the computer architecture, and program complex, sophisticated perceptual and motor processes, together with cognitive processes. But this would be an unmotivated move, an "artificial" gesture with regard to the system one is working within.
A network computing environment, on the other hand, embodies a vision of mind as a large, heterogeneous system of interacting agents, sending messages to each other and processing information in a variety of different ways. The modern network computing environment is not ideal for artificial intelligence -- it would be much better if our PC's and engineering workstations were powered by Neuronetics or similar hardware instead of serial digital hardware. But even so, there is reason to believe that the overall structure of mind/brain is more important than the microscopic details by which individual processes are carried out. The point is that network computing, unlike previously popular computing methodologies, embodies a vision of mind as a network of autonomous, interacting agents, which is precisely the right vision of mind to have.
Less crucial than the hardware/software synergy, but still important, is the software/operating-system synergy. In particular, in order to turn network computing into network AI, one needs an appropriate operating system (OS) -- a cybernetically wise, intelligence-friendly system for communicating between intelligent mind-network-embodying software, and the network computing environment.
The operating system is the low-level computer software that mediates communications between hardware and other computer software. Even in the realm of ordinary serial digital computers, there is a variety of operating systems; and two different OS's, running on the same system, can give the feeling of two different machines. For instance, two of the most popular OS's today are the Microsoft operating systems, DOS and Windows, and the Unix operating system, used on engineering workstations. These competing systems do essentially the same things, and are now invading each other's territory: Windows NT is a version of Windows that runs on engineering workstations, and Linux is a version of Unix that runs on PC's. But even so, they have drastically different "flavors."
What one needs, in order to exploit the potential of network computing -- for AI in particular and for other sophisticated applications as well -- is to have a kind of network operating system. Not an OS for any particular hardware, but an OS for networks of diverse, intercommunicating hardware. This is different from the conventional concept of an "operating system," but it is well within the common-language meaning of the phrase. What one needs is a system by which AI programs (and other programs) can operate the computer network as a network of agents, a network of processes, continually changing and communicating with each other -- rather than as a collection of von Neumann machines, each with particular properties to be taken into account, and sending messages of unrelated types that all need to be treated differently. One needs a network OS that leaves one blissfully blind to the details of managing different types of computers, and communicating different types of messages, leaving one in a mindlike world of abstract entities communicating abstract information and modifying themselves accordingly.
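What such a network OS might feel like to the programmer can be suggested with a small, purely hypothetical sketch: agents exchange abstract messages through a routing layer, and nothing in their code refers to the particular machines involved. None of the names below belong to any existing system; Java is used only because it is the natural lingua franca of the network computing world described here.

import java.util.HashMap;
import java.util.Map;

// A hypothetical agent-level view of a network OS. The routing layer here is an
// in-memory stand-in; a real system would deliver messages across machines, but
// the agents would not need to know or care.
public class AgentNetworkSketch {

    interface Agent {
        String name();
        void receive(String topic, Object content);  // an agent may modify its own state in response
    }

    static class AgentNetwork {
        private final Map<String, Agent> agents = new HashMap<>();
        void register(Agent agent) { agents.put(agent.name(), agent); }
        void send(String recipient, String topic, Object content) {
            Agent target = agents.get(recipient);
            if (target != null) target.receive(topic, content);  // delivery details hidden from senders
        }
    }

    public static void main(String[] args) {
        AgentNetwork net = new AgentNetwork();
        net.register(new Agent() {
            public String name() { return "indexer"; }
            public void receive(String topic, Object content) {
                System.out.println("indexer got " + topic + ": " + content);
            }
        });
        net.send("indexer", "new-page", "http://example.com/some-page");  // hypothetical message
    }
}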
Java goes a fair way toward this goal of an agent-oriented network OS. Java allows one to write network-aware computer programs that will run on any modern computer, and it streamlines the processes of communication and interaction to an unprecedented extent. However, it falls a fair way short of providing the tools required to truly bring process-network-based AI and network computing together. Much creative engineering on the part of Java and other software developers is needed, if the potential embodied in "The network is the computer is the mind" is to be brought to fruition.