Computers
get smaller and smaller each year.
Today’s laptops – heck, today’s *Gameboys*
-- are vastly more powerful than the mainframes of the past. But it’s often been said that there’s a
limit to miniaturization. Because when
things get small enough, the ordinary rules of macroscopic physics don’t hold –
the weird and wily laws of quantum theory take over.

In the quantum microworld, indeterminacy is the rule – things are vague and bleary, and don’t obey the normal rules of logic. An object doesn’t necessarily have a definite position, or momentum, or energy -- it may have a number of different positions or momentums or energies at once, all to different extents. Quantities like the direction of spin of an electron can be totally, purely random. Observation is not a neutral activity but has an intrinsic physical effect. “Nonlocality” holds sway: observing a particle in one part of the universe can have, in some sense, an instantaneous effect on a particle far across the universe -- even if the two places are so far apart that no message could possibly get from one to another in less than a billion years, not even a message traveling at the universal speed limit, the speed of light.

Shrinking computers to quantum size sounds pretty dangerous. Who wants these strange and scary effects screwing up the behavior of their processor chip, or spontaneously mutating the files on their hard drive?

But it turns out that quantum
effects can actually be *good* for
computation, if they’re properly managed.
More and more physicists are turning their attention to *quantum computing* – the design and
construction of computers specifically intended to take advantage of quantum
microworld weirdness. And as it turns
out, there are even ways to make *macroscopic*
quantum computers; extreme miniaturization is convenient but not necessary for
quantum computing.

No one has yet constructed a
working quantum computer that does anything terribly interesting. But there have been some valuable early
experiments. And enough of the theory
has been worked out for us to know that there will be some significant benefits
in this kind of computing. One can
create quantum-computing-powered cryptographic devices that transmit messages using
codes that are uncrackable in a very strong sense. And some kinds of computational tasks can be solved a lot faster,
on the average, by taking clever advantage of quantum weirdness. Some people think that this sort of quantum
computational acceleration is essential to the operation of the human
brain. There’s no real evidence for
this – but even if it’s false, one may still be able to use quantum computing
to build a *better* sort of brain … and
all sorts of other wondrous things.

The philosopher Charles S. Peirce, writing toward the end of the 1800’s, made the radical proposal that all atoms contain a chance element which makes them "swerve" a little bit. His reasoning was philosophical: he just didn’t believe the universe could be absolutely deterministic. He associated this mysterious chance element with the “absolute freedom,” spontaneity or consciousness of the atom. Even atoms, he declared, had a little bit of consciousness, a little bit of freedom, a little bit of life.

Quantum physics has proved Peirce right on both these points: all atoms do swerve a little; and this swerving is, at least in the view of some distinguished scientists, intimately connected with consciousness.

In more modern language, what quantum physics tells us is that, in the microworld of particles and atoms, an event does not become definite until someone observes it. An unobserved quantum system remains in an uncertain state, a superposition of many different possibilities. Observation causes "collapse" into a definite condition, which is chosen at random from among the possibilities provided.
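In programmer’s terms, the collapse rule is just weighted random choice. Here is a toy sketch in Python (the state names and amplitudes are invented for illustration; real quantum states use complex amplitudes, and this cartoon ignores interference entirely):

```python
import random

def measure(amplitudes):
    """Collapse a superposition: pick one outcome at random, with
    probability given by the squared amplitude (the Born rule)."""
    outcomes = list(amplitudes)
    weights = [abs(a) ** 2 for a in amplitudes.values()]
    return random.choices(outcomes, weights=weights, k=1)[0]

# A state that is 'mostly here, a little there':
state = {"here": 0.8, "there": 0.6}   # probabilities 0.64 and 0.36

counts = {"here": 0, "there": 0}
for _ in range(10_000):
    counts[measure(state)] += 1
# counts["here"] lands near 6400, counts["there"] near 3600
```

Before measurement the state is both possibilities at once; each call to `measure` forces a single definite answer.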


Consider the classic double-slit experiment. A particle passes through one of two slits in a barrier and leaves a mark on a plate on the other side of the barrier. If one observes each particle as it passes through the barrier, the marks on the plate will be consistent with one's observations: "Fifteen hundred through the top slit, six hundred and ninety through the bottom slit," or whatever. But if one does not observe the particles passing through the barrier, then something very strange happens. There are marks on the plate where there shouldn't be any -- marks that could not have been made by particles passing through either slit. Instead of passing through one slit like a good particle should, each particle acts as if it were a wave in some mysterious medium, squeezing through both slits at once and interfering with itself on the far side. The key point is whether the particle was looked at or not.

In fact, as the great physicist
John Archibald Wheeler – the inventor of the term “black hole” and a leading
developer of Einsteinian gravitation theory -- pointed out, this even works if
the choice is *delayed*. One then arrives at the phenomenon of the
"quantum eraser." In other
words, suppose one has a machine that records which slit each particle passed
through. If after a few hours one destroys
the machine's records without having looked at them, and only afterwards looks
at the plate, then the result is the same as if the information had never existed;
the plate shows that the particles behaved like waves. But in the same
scenario, if one looks at the machine's information before one erases it, the
picture on the plate is quite different: it is consistent with whatever the
machine said.

Somehow looking at the particle,
measuring it, forces it to make a choice between one of the two alternatives,
one of the two slits. This choice is a random one: even knowing all there is to
know about the physical system, there is no way to predict the path that each
individual particle will take, in the eventuality that it is observed. The reduction from indeterminacy to
definiteness occurs at the *point of
observation*.

Richard Feynman, one of the physics stars of the century, once said: “He who tries to understand quantum theory vanishes into a black hole, never to be seen again.” Niels Bohr, one of the founding fathers of quantum physics, said in 1927: “Anyone who is not shocked by quantum theory does not understand it." The reason for these declarations of bafflement is clear. If the dynamical equations of quantum theory are taken literally, nothing is ever in a definite state; everything is always suspended in a superposition of various possibilities. And yet that's not what we see in the world around us -- neither in the physics lab nor in everyday life. So when does the superposed world become the actual world? When it is recorded by a machine? When it is recorded by a person? What about an intelligent machine ... or an observant chimpanzee, dog, mouse, or ant?

An added, related peculiarity is provided by the phenomenon of nonlocality. The classic example of this involves two particles that are at one point joined together, but are then shot off in different directions until they’re far distant. One supposes that the particles are, for instance, electrons, each of which has a property called “spin” that takes one of two values: either Up or Down. One may know, since the two particles were initially joined together, that one of the particles must have spin Up and the other spin Down. But which is Up, which is Down? This is random. There’s no way to predict this.

Now, when you observe one of the particles, it automatically tells you something about the other particle – no matter how far away the other one is. If the first particle is observed to have spin Up, then the other particle is known to have spin Down, even if it’s 10 quadrillion light years away. But the funny thing is that, because of the critical role of observation in quantum measurement, this act of observation in some sense causes a physical change. By observing the one particle to have spin Up, the state of the other, far-distant particle is then in a way caused to have spin Down. Its state is caused to collapse from uncertainty into definiteness.
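The perfect anti-correlation itself is easy to mimic classically, as the toy Python sketch below shows. (A caveat in the interests of honesty: a shared coin flip reproduces this one feature of entanglement, but not its full statistics. When spins are measured along several different angles, the quantum correlations violate Bell's inequality, which no classical shared-randomness model can do. The function name is mine.)

```python
import random

def entangled_pair():
    """Toy model of two spin-entangled electrons: measuring either one
    gives a random result, but the two results are always opposite."""
    first = random.choice(["Up", "Down"])
    second = "Down" if first == "Up" else "Up"
    return first, second

for _ in range(1_000):
    here, far_away = entangled_pair()
    assert {here, far_away} == {"Up", "Down"}   # never both Up, never both Down
```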

When Einstein, Podolsky and Rosen discovered this phenomenon in 1935, they thought they had proved quantum theory false. It seemed to contradict Einstein’s special relativity theory, which says that no information can travel faster than light speed. But there’s no contradiction, because it’s not classical information that’s traveling instantaneously, it’s just bizarre counterintuitive quantum collapse-into-definiteness. Although Einstein was himself a pioneer of quantum theory, he never liked its indeterminacy: he famously said, “God doesn’t play dice.” But it turns out that the universe at a very basic level does play dice, and this dice-playing is not only empirically verifiable, but useful as the foundation for a new generation of computing technology.

The role of observation in quantum theory has caused
some theorists to associate quantum theory with consciousness. This idea goes back to the 60’s and quantum
pioneer Eugene Wigner, and has more recently been adopted by a veritable army
of New Age thinkers. But the problem
with this would-be "quantum theory of consciousness" is that – so
far, at any rate -- it fails to connect in a sufficiently thorough way with the
*biology* and *psychology* of consciousness.

It’s true that in quantum physics,
consciousness often appears as *the agent
which forces a choice*. And
correspondingly, in psychology, consciousness is often
associated with choice or decision. In
many cases we only become conscious of something when appropriate unconscious
processes judge that some sort of complex decision in its regard is
required. But even so, one cannot
plausibly define human consciousness as the collapse of the quantum wave
function. A good theory of consciousness
must have something to say about the psychology of attention, about the neural
processes underlying the subjectively perceived world, and above all about the
experience of being a conscious person.
So far the idea of quantum observation as the essence of consciousness
fails this test by a long shot.

Quantum consciousness advocates have long sought to establish a crucial role for quantum phenomena in the human brain. This is not as outlandish as it may seem; while quantum theory is generally considered a theory of the very small, it is not firmly restricted to the microworld. There are some well-known macroscopic systems that display elementary-particle-style quantum weirdness; for example, SQUIDs, Superconducting Quantum Interference Devices, which are little supercooled rings used in some medical devices. A SQUID is around the size of a wedding ring, but its electromagnetic properties are just as indeterminate, and just as able to participate in peculiar nonlocality phenomena, as the spin of an electron.

Is the brain a macroscopic quantum device, like a SQUID? It’s not impossible. If it’s true, then all of contemporary cognitive neuroscience potentially goes out the window. Neuroscientists now commonly think in terms of “neural networks.” The brain cells called neurons pass electricity among each other along connections called synapses, and this constant to-and-fro of electricity seems to give rise to the dynamics of mind. Thoughts and feelings are considered as patterns of electrical flow and synaptic conductance. But if the brain is a macroscopic quantum system, all this electrodynamics may be epiphenomenal – the high-level consequence of low-level quantum-consciousness-based cognitive magic.

It’s a
wonderful story; at present, however, there’s basically no evidence that the
brain’s high-level dynamics display quantum properties. The Japanese physicists Jibu and Yasue, in
their book *Quantum Brain Dynamics and Consciousness*, put forth a seductive hypothesis
regarding quantum effects in water megamolecules floating in between
neurons. But the jury’s still out; in
fact, from the point of view of the scientific mainstream, there’s not even
enough cause yet to convene the jury in the first place. We don’t currently have enough evidence to
verify or refute such theories.

It’s quite possible that there are quantum effects in the brain, but without the dramatic consequences that some theorists associate with them. Perhaps the quantum effects aid and augment the neural network dynamics traditionally studied; perhaps they’re just one among many factors that go into our experience of consciousness. Or perhaps the skeptics are right, and the brain is no more a quantum system than an automobile – which is also made out of tiny particles obeying the laws of quantum physics.

Just how powerful *are* quantum computers?

They’re tremendously powerful, it
turns out – at least in theory; no useful quantum computer has yet been
constructed. But the precise nature of
their power took a while to be understood.
For a while, some theorists thought that quantum computers might be able
to compute “uncomputable” things – things that ordinary digital computers just
can’t compute. This turns out not to
be the case. But even so, there are many
things they can compute *faster* than
ordinary computers. And in computer science,
very frequently, speed makes all the difference.

What does “uncomputable” mean? Something is uncomputable if it can’t be
represented in terms of a finite-sized computer program. For instance, the number Pi = 3.14159265… is
*not* uncomputable. Even though it goes on forever, and never
repeats itself, there is a simple computer program that will generate it. True, this computer program can never
generate *all* of Pi, because to do so
it would have to run on literally *forever*
– it can only generate each new digit at a finite speed. But still, there is a program with the
property that, *if* you let it run
forever, then it *would* generate all
of Pi, and because of this the number Pi is not considered uncomputable. What’s fascinating about Pi is that even though
it goes on forever and doesn’t repeat itself, in a sense it only contains a
finite amount of information -- because it can be compactly represented by the
computer program that generates it.
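To make the point concrete, here is a short Python program that really does generate the digits of Pi, as many as you care to wait for. (This particular sketch uses Machin's 1706 arctangent formula, pi/4 = 4·arctan(1/5) − arctan(1/239), carried out in integer arithmetic; the function names are mine.)

```python
def pi_digits(n):
    """Return the first n decimal digits of Pi, computed with Machin's
    formula pi/4 = 4*arctan(1/5) - arctan(1/239) in integer arithmetic."""
    scale = 10 ** (n + 10)                    # ten guard digits against rounding

    def arctan_inv(x):
        """arctan(1/x) as a scaled-up integer, via the alternating series."""
        total = term = scale // x
        k, sign = 3, -1
        while term:
            term = scale // x ** k
            total += sign * (term // k)
            k, sign = k + 2, -sign
        return total

    pi = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return [int(d) for d in str(pi)[:n]]

print(pi_digits(15))   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9]
```

A few dozen lines of finite text, yet left running with ever larger `n` it pins down an infinite, never-repeating decimal: that is exactly why Pi counts as computable.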

But it turns out that, unlike Pi,
almost all of the numbers on the number line *are* uncomputable. They have
infinite information; they can’t be stored in, or produced by, any digital
computer program. Although they’re the
majority case, these are strange numbers, because it’s impossible to ever give
an example of one of them. For, if one
could give an example of such a number, by the very act of giving the example
one would be giving a finite exact description of that number – but how can one
give a finite description of a number that has infinite information and hence
by definition has no finite exact description?

Mathematicians have proved that
these uncomputable numbers “exist” in an indirect way, by showing that the set
of all numbers on the number line is a *bigger
kind of infinity* than the set of all computers. Because of the strangeness of these ideas, some mathematicians
think that the idea of a continuous number line should be discarded, and that
mathematics should concern itself only with computable numbers, numbers that
have some finite description that can be embodied in a computer program.

Uncomputable numbers have a lot to do with randomness. The digits of Pi are random in one sense, yet non-random in another, because you can predict them if you know the formula. The digits of an uncomputable number are random in a much stronger sense: there’s no formula or computer program that will allow you to predict exactly what they’re going to be.

Because of this relationship with randomness, some scientists thought that quantum theory and uncomputability might be connected. They thought that quantum randomness might somehow allow quantum computers to compute these mysterious, seemingly ineffable uncomputable numbers. This notion was particularly appealing because of the conceptual resonance between uncomputability and consciousness. Uncomputable numbers exist, but you can never grasp onto them – they’re elusive in the same vague sense that consciousness is. Perhaps quantum randomness, uncomputability and consciousness are facets of the same underlying mystery?

But, while it may be that in some
sense these things all reflect the same mystery, it is *not* true that quantum computers can compute uncomputable
numbers. In the mid-1980’s, physicist
David Deutsch proved mathematically that a quantum computer can’t compute
anything special, beyond what an ordinary computer can do. The mysterious uncomputable numbers, that
some mathematicians don’t believe in, are also uncomputable for quantum
computers. Deutsch was not the first to
study quantum computing – there was earlier work by legendary physicist Richard
Feynman, Paul Benioff of Argonne National Laboratory, and Charles H. Bennett of IBM
Research, among others. But Deutsch’s
paper set the field of quantum computing on a significantly more solid footing,
mathematically and conceptually.

But if quantum computers can’t
compute anything new beyond what ordinary digital computers can, then what good
are they? Well, one other thing
Deutsch discovered was that, in some cases, quantum computers can solve problems
much faster than ordinary computers.
They’ll come up with the same answer, but their indeterminate nature
allows them, in a sense, to explore multiple pathways to an answer at once,
hence arriving at the right answer faster.
The trick is that they can’t be *guaranteed* to get to the right answer faster.
In the worst case scenario, they’ll take as long as an ordinary
computer. But *on average*, they’ll be vastly faster.

To understand the power of this, think about contemporary systems for encrypting information, for secure transmission over the Internet. The encryption algorithms in use today are based on factorization – to crack one of them, you’d need to be able to divide a very large number into its component factors. But factoring large numbers is an intractable problem for ordinary computers today. On the other hand, it’s known in theory how, with a quantum computer, one can factor large numbers rapidly, on average. When these devices are built, we’ll have to find different ways of creating codes … and fortunately, quantum computing also provides some of these.
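The asymmetry is easy to see in code. Below is the obvious classical method, trial division, in Python; it is fine for toy numbers like the ones here, and hopeless for the hundreds-of-digits numbers used in real cryptosystems, since its running time grows exponentially in the number of digits. (The function name is mine.)

```python
def factor(n):
    """Trial division, the naive classical factoring method: try every
    candidate divisor up to the square root of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d    # found a factor pair
        d += 1
    return n, 1                 # no divisor found: n is prime

print(factor(391))   # (17, 23)
```

Multiplying 17 by 23 is instant; undoing that multiplication is the hard direction, and that one-way quality is the whole basis of factoring-based codes.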

Ordinary computers are based on bits – elementary pieces of information, which always take one of the two values 0 or 1. All traditional computer programs internally represent information as long sequences of bits. A word processing file, a computer program itself, the Windows operating system – all are represented as sequences of 0’s and 1’s, where each 0 or 1 is represented physically by the absence or presence of electrical charge at a certain position in computer memory, the absence or presence of magnetic charge at a certain position on a hard drive, etc. Quantum computers are based instead on what are called “qubits.” A qubit may be most simply considered as the spin state of a particle like an electron. An electron can have spin Up or spin Down, or, it can have a superposition of Up and Down spin – spin, say, half Up and half Down; or three quarters Up and one quarter Down. A qubit contains more information than a bit – but in a strange sense, not in the straightforward sense in which two bits contain more information than one.
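A rough Python sketch of the difference (a deliberately simplified model, with names of my own choosing: real qubit amplitudes are complex numbers carrying relative phases, which this ignores):

```python
import math

class Qubit:
    """One qubit as a pair of amplitudes for spin Up and spin Down.
    Squared amplitudes are probabilities, so they must sum to 1."""
    def __init__(self, up, down):
        norm = math.sqrt(up ** 2 + down ** 2)   # enforce normalization
        self.up, self.down = up / norm, down / norm

    def probabilities(self):
        """Chance of seeing Up vs. Down if the qubit is measured."""
        return self.up ** 2, self.down ** 2

half_and_half = Qubit(1, 1)            # 'half Up and half Down'
print(half_and_half.probabilities())   # approximately (0.5, 0.5)

lopsided = Qubit(math.sqrt(3), 1)      # 'three quarters Up, one quarter Down'
print(lopsided.probabilities())        # approximately (0.75, 0.25)
```

A classical bit is one of two points; a qubit is a whole continuum of possible blends of Up and Down – yet a measurement still yields only a single bit of answer, which is the strange sense in which the extra information is and isn’t there.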

Now, quantum theory is not the
ultimate theory of the universe. The
big thorn in the side of modern physics is the apparent irreconcilability of
quantum physics with Einstein’s general-relativistic theory of
gravitation. Quantum theory talks about
indeterminate waves; general relativity talks about curved spacetime; and no
one knows how to translate between the two languages with complete
accuracy. Mathematician Roger Penrose
and a few others have speculated that the ultimate unified theory of quantum
gravitation will yield a new kind of computing – quantum gravity computing –
that *will* allow the computation of
traditionally uncomputable numbers. Of
course this can’t be ruled out. No one
knows what a quantum gravity bit would look like, nor how such bits would
interact. However, the vast majority of physicists and computer
scientists are rightly skeptical of Penrose’s conjecture… it doesn’t take a
terribly sensitive nose to detect a scent of wishful thinking here.

**Quantum Computing Today**

What transformed quantum
computing from a futuristic intellectual curiosity into a burgeoning research
field blessed with considerable commercial interest was a paper by Peter Shor
in 1994. Shor demonstrated something
simple but fantastic: a method for using quantum computers to factor large
numbers. Factoring even small numbers
like 391 (which equals, you guessed it, 23*17) is annoying for most humans, and
factoring very large numbers is extremely slow even for sophisticated
computers. Shor showed how quantum
computers can, in principle, be used to factor huge numbers much more quickly
than any ordinary computer. This
aroused a huge amount of interest because all the secret code systems in common
use today are based on factoring.
Whoever can factor large numbers rapidly can crack the cryptosystems
used by governments and corporations worldwide. Shor’s design has not yet been implemented in practice – there
are plenty of engineering difficulties – but there is little doubt that it or
something like it will be, in time.
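The flavor of the method is worth a closer look. The genuinely quantum part of Shor's algorithm finds the *period* of repeated multiplication mod N, exponentially faster than any known classical method; the rest is classical number theory simple enough to fit in a few lines of Python. The sketch below shows that classical tail end for the standard toy case of factoring 15 (the period itself is just handed in, since producing it is the part that needs the quantum computer; the function name is mine):

```python
from math import gcd

def factors_from_period(n, a, r):
    """The classical tail of Shor's algorithm: given the period r of the
    sequence a, a**2, a**3, ... (mod n) -- the part a quantum computer
    finds quickly -- two gcd computations usually reveal n's factors."""
    half = pow(a, r // 2, n)            # a**(r/2) mod n
    return gcd(half - 1, n), gcd(half + 1, n)

# Toy case: n = 15, base a = 7. The powers of 7 mod 15 run
# 7, 4, 13, 1, 7, 4, 13, 1, ... so the period is r = 4, and:
print(factors_from_period(15, 7, 4))   # (3, 5)
```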

Fortunately,
although quantum computing will one day kill contemporary
cryptographic
methods, quantum theory also has the capability to create new methods of
sending secret codes, which are far more profoundly secure than any nonquantum
techniques. This does not exactly
involve quantum *computing*, but it
uses similar properties of qubits. The
laws of quantum theory allow one to create a completely secure communication channel
between two individuals, relying on the strange nonlocality properties of
quantum information to ensure eavesdropping can be detected. Any disturbance in the connection will
result in a “collapse of the wavefunction” and hence be easily noticed. This is being used now for experiments in
“key exchange,” a phase of conventional cryptography where two parties exchange
the key needed to decode secret messages to be sent to one another via normal
nonquantum means. The current
experiments are fairly crude: IBM researchers have used polarized photons to
distribute cryptographic keys over a distance of 30 centimeters, at a rate of
10 bits per second. Primitive, but it’s
a start; most importantly, it demonstrates that the uncanny mathematics of
quantum cryptography actually works out in practice.
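The bookkeeping half of such a key exchange is easy to simulate classically. The Python sketch below is loosely modeled on the well-known BB84 protocol: the sender encodes random bits in randomly chosen polarization bases, the receiver measures in randomly chosen bases, and the two then publicly compare *bases only*, keeping the bits where the bases matched. All the actual quantum physics, including the eavesdropper-detection step of sacrificing and comparing a sample of the key, is left out; the names are mine.

```python
import random

def sift_key(n):
    """Toy BB84-style 'sifting': keep only the bits where sender and
    receiver happened to choose the same measurement basis."""
    sender_bits = [random.randint(0, 1) for _ in range(n)]
    sender_bases = [random.choice("+x") for _ in range(n)]
    receiver_bases = [random.choice("+x") for _ in range(n)]
    return [bit for bit, s, r in zip(sender_bits, sender_bases, receiver_bases)
            if s == r]   # mismatched-basis measurements are discarded

shared_key = sift_key(1_000)
# about half the bases match, leaving roughly 500 shared secret bits
```

The security does not live in this sifting arithmetic, of course, but in the physics: any eavesdropper measuring the photons in transit disturbs them detectably.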

But cryptography is just the beginning. Factoring is not the only problem that seems to be particularly amenable to rapid quantum computation. For instance, on an ordinary computer, to search for an item in an unordered list of N items takes N operations at worst, and N/2 operations on average. L.K. Grover, also from Bell Labs, showed how a quantum computer can do it in about the square root of N operations. For example, if one has to find an object in a list of a million objects, an ordinary computer will take 500,000 operations on average. A quantum computer using Grover’s approach can do it in around 785 steps. This is a particularly striking example because of the simplicity of the task involved. An ordinary computer, given the task of finding an item in an unsorted list, basically can’t do any better than simply enumerating the whole list until it finds the desired object. On the other hand, a quantum computer uses the mysteries of quantum reality to find an object in the list without checking every position in the list in any classical sense. Simple operations like list lookup are the bread and butter of all complex computer programs. Being able to do these sorts of operations supernaturally fast – as quantum computing will allow -- will revolutionize computer science.
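As a sanity check on those numbers: Grover's method needs about (π/4)·√N iterations, which for a million items comes to roughly 785. The arithmetic in Python (function names mine):

```python
import math

def classical_average_steps(n):
    """Unsorted search on an ordinary computer: n/2 checks on average."""
    return n // 2

def grover_steps(n):
    """Grover's quantum search: about (pi/4) * sqrt(n) iterations."""
    return round((math.pi / 4) * math.sqrt(n))

print(classical_average_steps(1_000_000))   # 500000
print(grover_steps(1_000_000))              # 785
```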

Fleshing out the details of quantum
computing casts new light on the speculative notion of the quantum brain. While it may be that the brain uses quantum
effects in some way, it seems unlikely that it uses quantum computing tricks
analogous to those in Shor’s or Grover’s algorithms. So far as we can tell, human minds are *not* tremendously efficient at operations like searching large lists
to find elements in them. Cognitive
psychologists have studied human performance on such tasks extensively and in
fact we’re pretty darned clunky and error-prone at such things. Now, it could be that at a deep subconscious
level we carry out these operations with quantum superefficiency, which is somehow
lost in all the more conscious-level phenomena psychologists have studied. But again, the scent of wishful thinking
grows stronger….


What stands
in the way of the glorious vision of a superefficient quantum computer on every
desk? There are the usual engineering
difficulties associated with any new technology. But there’s also a major scientific puzzle: the phenomenon of *decoherence*.

Decoherence refers to the tendency of qubits to get “mixed up” with the environment. After they sit around for a while, qubits will get into a superposed state somewhere between Up and Down spin that is not entirely due to the quantum computation in question, but is affected by all sorts of other things in the surrounding universe. This is the natural tendency of quantum systems: they couple with each other, nonlocally and mysteriously. This coupling is largely responsible for the formation of macroscopic structures like you and me. But to make quantum computing work with precision, it needs to be carefully controlled.

Solving the decoherence problem
isn’t easy, but some success has been found using Nuclear Magnetic Resonance
(NMR) technology, which is developing rapidly due to its use in MRI medical
imaging hardware. For instance, in 1998
Raymond Laflamme’s team at Los Alamos and MIT used NMR to construct a coherent
qubit spread across three nuclear spins in each molecule of a liquid solution
of alanine and trichloroethylene molecules.
To avoid inducing decoherence, the group didn’t measure the spins
directly, but instead *compared* the
spins. They then actively corrected
errors found in the qubit’s coherence.
This kind of *quantum error
correction* is a major area of research.
It may be years or it may be decades before the decoherence problem is
fully solved, no one can tell for sure.
But the problem definitely seems approachable – there are no deep
theoretical obstacles in the way of solving it, just a lot of fascinating
physics.
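The flavor of correcting errors by comparison, rather than by reading values out directly, can be suggested with a classical toy: a three-fold repetition code with majority voting. (This is only an analogy of my own; genuine quantum error correction uses syndrome measurements to compare qubits without ever learning their values, which a classical copy-and-vote scheme cannot literally do.)

```python
import random

def encode(bit):
    """Spread one logical bit across three physical carriers."""
    return [bit, bit, bit]

def correct(block):
    """Majority vote: repairs any single flipped carrier without
    needing to ask what the original bit 'really' was."""
    return 1 if sum(block) >= 2 else 0

block = encode(1)
block[random.randrange(3)] ^= 1   # noise flips one carrier at random
print(correct(block))             # 1 -- the logical bit survives
```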

In examples like factoring and
searching, one is coaxing a quantum computer to behave with the inexorable
precision of a standard digital computer.
Another approach, which I discussed in a paper I published in 1999, is *evolutionary* quantum computing. Evolutionary quantum computing tries to
infuse quantum computing with some of the creative imprecision of living
systems (which, at the molecular level, are complex quantum systems in their
own right.)

Evolutionary quantum computing is an extension to the quantum domain of a technique in conventional computer science called “evolutionary programming,” in which one creates a computer program solving a given problem by simulating natural selection. One creates a population of candidate programs, and evaluates how well each of them does at solving the problem. One takes the better programs from the population and combines them with each other, in the manner of the DNA crossover operations by which, in sexually reproducing species, a mother and a father combine to yield a child. The population of programs evolves over time, by the dynamic of “survival of the fittest programs,” where fitness is defined by efficacy at solving the problem at hand.
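Here is a minimal evolutionary program in Python. Everything about it is a toy of my own devising – the "problem" is just matching a target bit string, standing in for real fitness evaluation – but the select/crossover/mutate loop is the genuine article:

```python
import random

TARGET = [1] * 20          # stand-in for 'a perfect solution to the problem'

def fitness(program):
    """How well a candidate 'program' solves the problem at hand."""
    return sum(g == t for g, t in zip(program, TARGET))

def crossover(mom, dad):
    """Combine two parents at a random cut point, DNA-style."""
    cut = random.randrange(1, len(mom))
    return mom[:cut] + dad[cut:]

def mutate(program, rate=0.02):
    """Occasionally flip a bit, for variety."""
    return [g ^ 1 if random.random() < rate else g for g in program]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]             # survival of the fittest programs
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

print(fitness(population[0]))   # typically climbs to the maximum of 20
```

Notice that the loop only ever looks at fitness scores, never inside the candidates themselves – the very property that makes the approach interesting in the quantum setting.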

The beauty of evolutionary
computing in the quantum domain is that it provides a partial way of getting
around the decoherence problem. In
theory, one can evolve a bunch of quantum “programs” solving a target problem *without ever looking inside the programs to
see how they work*. All one has to
observe is how well they solve the target problem. Observing a quantum system is one thing that causes it to
decohere, because when you observe it, its wave function gets smeared up with
yours – the famous observer/observed interaction. The ability to create quantum computer programs without observing
them along the way – because, to create programs in an evolutionary manner, one
only has to observe their behavior --
may be very powerful, as quantum computing progresses.

If quantum theory does play a role in the brain, it is probably more in the evolutionary-quantum-computing style than in the manner of quantum algorithms for factoring and searching. Nobelist Gerald Edelman and others have argued that the brain is an evolving system, with different “maps” of neural connectivity competing with each other to serve various cognitive, perceptive and active functions. If neural subnetworks in the brain are really molecular quantum computing systems, then the brain may be an evolving quantum computer. Who knows? At the present time, evolutionary quantum computing is still in the domain of theory -- just like Shor’s codebreaking quantum computers, and all other quantum computers except extremely simple examples. There’s definitely a long way to go.

Quantum
computing is *hard* at this point, as
well as speculative, counterintuitive and downright wacky. But the laws of physics tell us it’s
possible, and all the preliminary experiments done so far support the ample
theoretical work.

The gap between wild-eyed vision
and successful commercial product gets smaller and smaller each year – and so
it’s not too surprising that, in spite of its primitive state of development,
quantum computing is already entering the business world, and not only in the
domain of big corporate research labs.
At least one quantum computing startup – the New York firm Magiqtech –
has already sprung into life.
Magiqtech is positioning itself to be a key player in the long-term
explosion of quantum computing devices – but they also expect to have commercial
products within 3-5 years. We can
expect to see more and more such ventures emerge, as today’s “miniature”
computing components come to seem increasingly bulky, sluggish and
old-fashioned – and downright *boring*
in their tedious adherence to classical physical laws….
