This Week's Finds in Mathematical Physics (Week 235)
Posted by John Baez on Mon Jul 17, 2006

Also available at http://math.ucr.edu/home/baez/week235.html

July 15, 2006
This Week's Finds in Mathematical Physics (Week 235)
John Baez

After leaving the Perimeter Institute near the end of June,
I went home to Riverside and then took off for a summer in Shanghai.
That's where I am now. I'm having a great time - you can read
about it in my online diary!

Today I'll talk about classical and quantum computation, then
quantum gravity, and finally the categorification of quantum
mechanics.

My interest in quantum computation was revived when Scott
Aaronson invited me to this building near the Perimeter
Institute:

1) Institute for Quantum Computing (IQC), http://www.iqc.ca/

Raymond Laflamme gave me a fun tour of the labs, especially the
setup where he's using nuclear magnetic resonance to control
the spins of three carbon-13 nuclei in some organic molecule in
liquid form. Each molecule is its own little quantum computer.
I forget the name of the stuff - maybe someone can remind me.

One of the banes of quantum computation is "decoherence", in
which the quantum state of the computer interacts with its
environment and becomes correlated with it, or "entangled",
in a way that appears to "collapse its wavefunction" and ruin
the calculation.
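To see concretely what "becomes correlated with its environment" does to a quantum state, here's a toy numerical sketch (my own illustration, not anything from the IQC labs): a qubit in an even superposition interacts with a one-qubit "environment", and tracing out the environment wipes out the off-diagonal "coherence" terms of the qubit's density matrix.

```python
import numpy as np

# A qubit in the superposition (|0> + |1>)/sqrt(2) interacts with a
# one-qubit environment via a CNOT-like coupling: the environment
# "records" the qubit's state. Tracing out the environment then kills
# the off-diagonal terms of the qubit's density matrix -- decoherence.

# Basis ordering |qubit, env>: |00>, |01>, |10>, |11>
plus = np.array([1, 0, 1, 0]) / np.sqrt(2)    # qubit in |+>, environment in |0>

cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)  # environment copies the qubit bit

psi = cnot @ plus                             # entangled: (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (q, e, q', e')
rho_qubit = np.trace(rho, axis1=1, axis2=3)   # partial trace over the environment

print(rho_qubit)     # diag(1/2, 1/2): the coherences are gone
```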

In general it's good to keep things cool if you don't want
things to get messed up. But surprisingly, Laflamme said that
keeping this liquid warmer *reduces* the rate of decoherence:
when it's warm, with molecules zipping around, nuclei from
different molecules don't stay near each other long enough to
affect each other much!

So, each molecule acts like an isolated system, doing its own
computation as they zap it with carefully timed pulses of
microwaves and the three nuclei interact. About a quadrillion of
these molecules are doing their thing in parallel, mixed
in with a bunch more of the same molecules made using carbon-12,
which serve as an inert "shield" between the active ones.

Laflamme also showed me some beams of spin-entangled photons which
they can use as keys for quantum cryptography. Nobody can peek
at these photons without affecting them! It's a great scheme.
If you don't know it, try this simple explanation:

2) Artur Ekert, Cracking codes, part II, Plus Magazine,
http://pass.maths.org.uk/issue35/features/ekert/index.html

There are already two companies - idQuantique and MagiQ - selling
quantum key distribution systems that send entangled photons
down optical fibers. But the folks at the IQC are planning to
send them right through the air!

Eventually they want to send them from satellites down to the
Earth. But as a warmup, they'll send beams of entangled photons
from an intermediate building to the Institute of Quantum Computing
and the Perimeter Institute! Then they can share secrets with
nobody able to spy on them unnoticed. They should do something to
dramatize this capability. Unfortunately they don't actually *have*
any secrets. So, they might need to make some up.

The really cool part, though, is that Scott helped me see that
at least in principle, quantum computers could keep from drifting
off course without the computation getting ruined by quantum
entanglement with the environment. I had long been worried about
this.

You see, to make any physical system keep acting "digital" for a
long time, one needs a method to keep its time evolution from
drifting off course. It's easiest to think about this issue
for an old-fashioned, purely classical digital computer. It's
already an interesting problem.

What does it mean for a physical system to act "digital"? Well,
we like to idealize our computers as having a finite set of states;
with each tick of the clock it jumps from one state to another in
a deterministic way. That's how we imagine a digital computer.

But if our computer is actually a machine following the laws of
classical mechanics, its space of states is actually continuous -
and time evolution is continuous too! Physicists call the space
of states of a classical system its "phase space", and they describe
time evolution by a "flow" on this phase space: states move
continuously around as time passes, following Hamilton's equations.

So, what we like to idealize as a single state of our classical
computer is actually a big bunch of states: a blob in phase space,
or "macrostate" in physics jargon.

For example, in our idealized description, we might say a wire
represents either a 0 or 1 depending on whether current is
flowing through it or not. But in reality, there's a blob of
states where only a little current is flowing through, and
another blob of states where a lot is flowing through. All
the former states count as the "0" macrostate in our idealized
description; all the latter count as the "1" macrostate.

Unfortunately, there are also states right on the brink, where a
medium amount of current is flowing through! If our machine gets
into one of these states, it won't act like the perfect digital
computer it's trying to mimic. This is bad!

So, you should imagine the phase space of our computer as having
a finite set of blobs in it - macrostates where it's doing something
good - separated by a no-man's land of states where it's not
doing anything good. For a simple 2-bit computer, you can imagine
4 blobs like this:

 --------------------------
|??????????????????????????|
|??? ----- ?????? ----- ???|
|???|     |??????|     |???|
|???| 0 0 |??????| 0 1 |???|
|???|     |??????|     |???|
|??? ----- ?????? ----- ???|
|??????????????????????????|
|??? ----- ?????? ----- ???|
|???|     |??????|     |???|
|???| 1 0 |??????| 1 1 |???|
|???|     |??????|     |???|
|??? ----- ?????? ----- ???|
|??????????????????????????|
 --------------------------

though in reality the phase space won't be 2-dimensional, but
instead much higher-dimensional.
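The idealized readout can be sketched in a few lines (the thresholds here are made up purely for illustration): the continuous current on a wire gets binned into the "0" and "1" macrostates, with a forbidden no-man's land in between.

```python
# Idealized readout of a "digital" wire: a continuous current is
# binned into macrostates 0 and 1, with a no-man's land in between.
# The threshold values are invented for illustration.

def read_bit(current_mA, low=0.2, high=0.8):
    """Map a continuous current to a macrostate: 0, 1, or '???'."""
    if current_mA < low:
        return 0
    if current_mA > high:
        return 1
    return "???"          # on the brink: not acting digital

print([read_bit(i) for i in (0.05, 0.5, 0.95)])   # [0, '???', 1]
```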

Now, as time evolves for one tick of our computer's clock, we'd
like these nice macrostates to flow into each other. Unfortunately,
as they evolve, they sort of spread out. Their volume doesn't change -
this was shown by Liouville back in the 1800s:

3) Wikipedia, Liouville's theorem (Hamiltonian),
http://en.wikipedia.org/wiki/Liouville's_theorem_(Hamiltonian)

But, they get stretched in some directions and squashed in others.
So, it seems hard for each one to get mapped completely into another,
without their edges falling into the dangerous no-man's-land
labelled "???".

We want to keep our macrostates from getting hopelessly smeared
out. It's a bit like herding a bunch of sheep that are drifting
apart, getting them back into a tightly packed flock. Unfortunately,
Liouville's theorem says you can't really "squeeze down" a flock of
states! Volume in phase space is conserved....
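Here's a tiny numerical illustration of Liouville's theorem (a toy of my own, with a linear map standing in for one tick of Hamiltonian evolution): a shear stretches a square blob of states into a long thin parallelogram, but its area comes out exactly the same.

```python
import numpy as np

# Liouville's theorem: Hamiltonian flow preserves phase-space volume,
# though it may stretch a blob in one direction and squash it in
# another. A linear shear map illustrates this: its Jacobian has
# determinant 1, so areas are exactly conserved.

shear = np.array([[1.0, 2.0],
                  [0.0, 1.0]])           # (q, p) -> (q + 2p, p)

# Corners of a unit-square "macrostate" in the (q, p) plane:
blob = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
sheared = blob @ shear.T                 # the blob after one "clock tick"

def area(corners):
    """Shoelace formula for the area of a polygon."""
    x, y = corners[:, 0], corners[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

print(area(blob), area(sheared))         # both 1.0: same area, new shape
```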

So, the trick is to squeeze our flock of states in some directions
while letting them spread out in other, irrelevant directions.

The relevant directions say whether some bit in memory is a zero or
one - or more generally, anything that affects our computation. The
irrelevant ones say how the molecules in our computer are wiggling
around... or the molecules of air *around* the computer - or anything
that doesn't affect our computation.

So, for our computer to keep acting digital, it should pump out
*heat*!

Here's a simpler example. Take a ball bearing and drop it into
a wine glass. Regardless of its initial position and velocity -
within reason - the ball winds up motionless at the bottom of
the glass. Lots of different states seem to be converging to
one state!

But this isn't really true. In fact, information about the ball's
position and velocity has been converted into *heat*: irrelevant
information about the motion of atoms.

In short: for a fundamentally analogue physical system to keep
acting digital, it must dispose of irrelevant information, which
amounts to pumping out waste heat.

In fact, Rolf Landauer showed back in 1961 that getting rid of
one bit of information requires putting out this much energy
in the form of heat:

kT ln(2)

where T is the temperature and k is Boltzmann's constant. That's
not much - about 3 x 10^{-21} joules at room temperature! But,
it's theoretically important.
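Plugging in the numbers is a one-liner:

```python
import math

# Landauer's bound: erasing one bit of information releases at least
# kT ln(2) of heat into the environment.
k_B = 1.380649e-23        # Boltzmann's constant, J/K
T = 300.0                 # roughly room temperature, K

E = k_B * T * math.log(2)
print(E)                  # about 2.9e-21 J, the ~3e-21 J quoted above
```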

What had me worried was how this would work for quantum computation.
A bunch of things are different, but some should be the same. When
we pump information - i.e., waste heat - from the computer into
the environment, we inevitably correlate its state with that of the
environment.

In quantum mechanics, correlations often take the form of
"entanglement". And this is a dangerous thing. For example,
if our quantum computer is in a superposition of lots of states where
it's doing interesting things, and we peek at it to see *which*, we
get entangled with it, and its state seems to "collapse" down to one
specific possibility. We say it "decoheres".

Won't the entanglement caused by pumping out waste heat screw up
the coherence needed for quantum computation to work its wonders?

I finally realized the answer was: maybe not. Yes, the quantum state
of the computer gets entangled with that of the environment - but
maybe if one is clever, only the *irrelevant* aspects of its state
will get entangled: aspects that don't affect the computation. After
all, it's this irrelevant information that one is trying to pump out,
not the relevant information.

So, maybe it can work. I need to catch up on what people have
written about this, even though most of it speaks the language of
"error correction" rather than thermodynamics. Here are some things,
including material Scott Aaronson recommended to me.

Gentle introductions:

4) Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and
Quantum Information, Cambridge University Press, Cambridge, 2000.

5) John Preskill, Quantum computation - lecture notes, references
etc. at http://www.theory.caltech.edu/people/preskill/ph229/

6) John Preskill, Fault-tolerant quantum computation, to appear
in "Introduction to Quantum Computation", eds. H.-K. Lo, S. Popescu,
and T. P. Spiller. Also available as quant-ph/9712048.

Chapter 7 of Preskill's lecture notes is about error correction.

This is a nice early paper on getting quantum computers to work
despite some inaccuracy and decoherence:

7) Peter Shor, Fault-tolerant quantum computation, 37th Symposium
on Foundations of Computing, IEEE Computer Society Press, 1996,
pp. 56-65. Also available as quant-ph/9605011.

This more recent paper shows that in a certain model, quantum
computation can be made robust against errors that occur at less
than some constant rate:

8) Dorit Aharonov and Michael Ben-Or, Fault-tolerant quantum
computation with constant error rate, available as quant-ph/9906129.

Here's a paper that assumes a more general model:

9) Barbara M. Terhal and Guido Burkard, Fault-tolerant quantum
computation for local non-Markovian noise, Phys. Rev. A 71, 012336
(2005). Also available as quant-ph/0402104.

Rolf Landauer was a physicist at IBM, and he discovered the result
mentioned above - the "thermodynamic cost of forgetting" - in a study
of Maxwell's demon. This is a fascinating and controversial subject,
and you can learn more about it in this book of reprints:

10) H. S. Leff and Andrew F. Rex, editors, Maxwell's Demon: Entropy,
Information and Computing, Institute of Physics Publishing, 1990.

I think Landauer's original paper is in here. He figured out why
you can't get free energy from heat by using a little demon to
watch the molecules and open a door to let the hot ones into a
little box. The reason is that it takes energy for the demon to
forget what it's seen!

Finally, on a somewhat different note, if you just want a great
read on the interface between physics and computation, you've got
to try this:

11) Scott Aaronson, NP-complete problems and physical reality,
ACM SIGACT News, March 2005. Also available as quant-ph/0502072.

If quantum mechanics were slightly nonlinear, could quantum
computers solve NP problems in polynomial time? Read and learn
about the state of the art on puzzles like these.

At the Perimeter Institute I also had some great discussions with
Laurent Freidel and his student Aristide Baratin. They have a
new spin foam model that reproduces ordinary quantum field
theory - in other words, particle physics in flat spacetime.
It's not interesting as a model of quantum gravity - it doesn't
include gravity! Instead, it serves as a convenient target for
spin foam models that *do* include gravity: it should be the limit
of any such model as the gravitational constant approaches zero.

Their paper should hit the arXiv soon:

12) Aristide Baratin and Laurent Freidel, Hidden quantum gravity
in 4d Feynman diagrams: emergence of spin foams.

It's the sequel to this paper for 3d spacetime:

13) Aristide Baratin and Laurent Freidel, Hidden quantum gravity
in 3d Feynman diagrams. Available as gr-qc/0604016.

Freidel, Kowalski-Glikman and Starodubtsev have also just come out
a paper carrying out some of the exciting project I mentioned in
"week208":

14) Laurent Freidel, J. Kowalski-Glikman and Artem Starodubtsev,
Particles as Wilson lines in the gravitational field, available
as gr-qc/0607014.

Their work is based on the MacDowell-Mansouri formulation of gravity.
This is a gauge theory with gauge group SO(4,1) - the symmetry group
of deSitter spacetime. DeSitter spacetime is a lot like Minkowski
spacetime, but it has constant curvature instead of being flat.
It's really just a hyperboloid in 5 dimensions:

{(w,x,y,z,t) : w^2 + x^2 + y^2 + z^2 - t^2 = k^2}

for some constant k. It describes an exponentially expanding
universe, a lot like ours today. It's the most symmetrical
solution of Einstein's equation with a positive cosmological
constant. The cosmological constant is inversely proportional
to the square of k.

When you let the cosmological constant approach zero, which is the
same as letting k -> infinity, deSitter spacetime flattens out to
Minkowski spacetime, and the group SO(4,1) contracts to the symmetry
group of Minkowski spacetime: the Poincare group.
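You can see a glimpse of this contraction numerically (my own toy calculation): a rotation by angle a/k in the (w,x) plane of the hyperboloid, applied to the point (k,0,0,0,0), moves x by almost exactly a while barely changing w. As k grows, the "rotation" becomes a plain translation by a.

```python
import math

# As k -> infinity, a rotation by a/k in the (w, x) plane of the
# hyperboloid w^2 + x^2 + y^2 + z^2 - t^2 = k^2 acts on the point
# (k, 0, 0, 0, 0) more and more like a translation by a in the x
# direction -- a numerical glimpse of SO(4,1) contracting to the
# Poincare group.

a = 1.0                                  # desired translation distance
for k in (10.0, 1000.0, 100000.0):
    theta = a / k
    w, x = k * math.cos(theta), k * math.sin(theta)
    print(k, x, w - k)                   # x -> a, while w barely moves
```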

So, MacDowell-Mansouri gravity is similar to the formulation of
gravity as gauge theory with the Poincare group as gauge group.
I explained that pretty carefully back in "week176".

But, there's one way SO(4,1) is better than the Poincare group.
It's a "simple" Lie group, so it has an inner product on its Lie
algebra that's invariant under conjugation. This lets us write down
the BF Lagrangian:

tr(B ^ F)

where tr is defined using the inner product, F is the curvature of
an SO(4,1) connection A, and B is an so(4,1)-valued 2-form. Spin
foam models of BF theory work really well:

15) John Baez, An introduction to spin foam models of BF theory and
quantum gravity, in Geometry and Quantum Physics, eds. Helmut
Gausterer and Harald Grosse, Lecture Notes in Physics,
Springer-Verlag, Berlin, 2000, pp. 25-93. Also available as
gr-qc/9905087.

So, the MacDowell-Mansouri approach is a natural for spin foam
models. It's not that MacDowell-Mansouri gravity *is* a BF theory -
but its Lagrangian is the BF Lagrangian plus extra terms. So,
we can think of it as a perturbed version of BF theory.

There's also one way SO(4,1) is worse than the Poincare group.
It's a simple Lie group - so it doesn't have a god-given
"translation" subgroup the way the Poincare group does. The
Poincare gauge theory formulation of general relativity requires
that we treat translations differently from boosts and rotations.
We can't do this in an SO(4,1) gauge theory unless we break the
symmetry down to a smaller group: the Lorentz group, SO(3,1).

So, to get MacDowell-Mansouri gravity from SO(4,1) BF theory,
we need to add extra terms to the Lagrangian that break the
symmetry group down to SO(3,1). This isn't bad, just a bit sneaky.
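For concreteness, here's a schematic form of the symmetry-broken Lagrangian, roughly following Freidel and Starodubtsev's work on this theory. The coefficient and index conventions are my guesses, so consult their papers for the real thing:

```latex
% SO(4,1) BF theory plus a term that breaks the symmetry to SO(3,1):
% the epsilon tensor singles out the fifth internal direction.
S \;=\; \int B^{IJ} \wedge F_{IJ}
   \;-\; \frac{\alpha}{4} \int \epsilon_{IJKL5}\, B^{IJ} \wedge B^{KL}
```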

The new paper by Freidel, Kowalski-Glikman and Starodubtsev is
mainly about the SO(4,1) BF theory rather than full-fledged
MacDowell-Mansouri gravity. They show that if you cut out
curves in spacetime and couple them to the A field in the right
way, they act like point particles. In particular, they have a
mass and spin, and they move along geodesics when their spin is
zero. Spinning particles do something a bit fancier, but it's
the right thing.

This generalizes some results for 3d gravity that I explained
in detail back in "week232". It's nice to see it working in
4 dimensions too.

Back then I also explained something else about 4d BF theory:
if you cut out *surfaces* in spacetime and couple them to the
*B* field, they act like 1-dimensional extended objects, which
one might call *strings*. I don't think they're the wiggling
stretchy strings that string theorists like; I think their
equation of motion is different. But I should actually check!
It's stupid; I should have checked this a long time ago.

Ahem. Anyway, it's really neat how particles couple to the
A field and "strings" couple to the B field in BF theory.

This is vaguely reminiscent of how the A and B field form two
parts of a "2-connection" - a gadget that lets you define
parallel transport along curves and surfaces. You can read
about 2-connections here:

16) John Baez and Urs Schreiber, Higher gauge theory, to
appear in the volume honoring Ross Street's 60th birthday,
available as math.DG/0511710.

The cool thing is that a pair consisting of an A field and a
B field gives well-behaved parallel transport for curves and
surfaces only if they satisfy an equation... which is *implied*
by the basic equation of BF theory!

The above paper is a summary of results without proofs.
Before one can talk about 2-connections, one needs to understand
2-bundles, which are a "categorified" sort of bundle where
the fiber is not a smooth manifold but a smooth category.
My student Toby Bartels recently finished writing an excellent
thesis that defines 2-bundles and relates them to "gerbes" -
another popular approach to higher gauge theory, based on
categorifying the concept of "sheaf" instead of "bundle":

17) Toby Bartels, Higher Gauge Theory I: 2-bundles, available
as math.CT/0410328.

The detailed study of 2-connections will show up in the next
installment - a paper I'm writing with Urs Schreiber.

You can also see transparencies of some talks about this
stuff:

18) John Baez, Alissa Crans and Danny Stevenson, Chicago
lectures on higher gauge theory, available at
http://math.ucr.edu/home/baez/namboodiri/

19) John Baez, Higher gauge theory, 2006 Barrett lectures,
available at http://math.ucr.edu/home/baez/barrett/

It'll be lots of fun if higher gauge theory and the work
relating MacDowell-Mansouri gravity to BF theory fit together
and develop in some nontrivial direction. But the funny thing
is, I don't know how they fit together yet.

Here's why. In gauge theory, there's a famous way to get a number
from a connection A and a loop. First you take the "holonomy" of A
around the loop, and then you take the trace (in some representation
of your gauge group) to get a number. This number is called a "Wilson
loop".

This is an obvious way to define an *action* for a particle coupled
to a connection A - at least if the particle moves around a loop.
For example, it's this action that let us compute knot invariants
from BF theory: you use the BF action for your fields, you use the
Wilson loop as an action for your particle, and you compute the
amplitude for your particle to trace out some knot in spacetime.

One might guess from the title "Particles as Wilson lines in the
gravitational field" that this is the action Freidel and company use.
But it's not!

Instead, they use a different action, which involves extra fields on
the particle's worldline, describing its position and momentum.
I explained a close relative of this action back in "week232",
when I was coupling particles to 3d gravity.

The same funny difference shows up when we couple strings to the
B field. In higher gauge theory you can define holonomies and
Wilson loops using the A field, but you can also define "2-holonomies"
and "Wilson surfaces" using both the A and B fields. The 2-holonomy
describes how a string changes as it moves along a surface, just as
the holonomy describes how a particle changes as it moves along a curve.
If you have a closed surface you can take a "trace" of the 2-holonomy
and get a number, which deserves to be called a "Wilson surface".

This is an obvious way to define an action for a string coupled to
the A and B fields - at least if it traces out a closed surface.
But, it's not the one Perez and I use! Why not? Because we were
trying to do something analogous to what people did for particles
in 3d gravity.

So, there's some relation between this "particles and strings
coupled to 4d BF theory" business and the mathematics of higher
gauge theory, but it's not the obvious one you might have guessed
at first.

Mysteries breed mysteries. For more musings on these topics,
try my talk at the Perimeter Institute:

20) John Baez, Higher-dimensional algebra: a language for quantum
spacetime, available at http://math.ucr.edu/home/baez/quantum_spacetime/

-----------------------------------------------------------------------
Previous issues of "This Week's Finds" and other expository articles on
mathematics and physics, as well as some of my research papers, can be
obtained at

http://math.ucr.edu/home/baez/

For a table of contents of all the issues of This Week's Finds, try

http://math.ucr.edu/home/baez/twfcontents.html

A simple jumping-off point to the old issues is available at

http://math.ucr.edu/home/baez/twfshort.html

If you just want the latest issue, go to

http://math.ucr.edu/home/baez/this.week.html