John Baez science forum Guru Wannabe
Joined: 01 May 2005
Posts: 220

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: This Week's Finds in Mathematical Physics (Week 211)



Also available at http://math.ucr.edu/home/baez/week211.html
March 6, 2005
This Week's Finds in Mathematical Physics (Week 211)
John Baez
The last time I wrote an issue of this column, the Huygens probe was
bringing back cool photos of Titan. Now the European "Mars Express"
probe is bringing back cool photos of Mars!
1) Mars Express website, http://www.esa.int/SPECIALS/Mars_Express/index.html
There are some tantalizing pictures of what might be a "frozen sea" -
water ice covered with dust - near the equator in the Elysium Planitia region:
2) Mars Express sees signs of a "frozen sea",
http://www.esa.int/SPECIALS/Mars_Express/SEMCHPYEM4E_0.html
Ice has already been found at the Martian poles - it's easily visible there,
and Mars Express is getting some amazing closeups of it now. Here's a
view of some ice on sand at the north pole:
3) Glacial, volcanic and fluvial activity on Mars: latest images,
http://www.esa.int/SPECIALS/Mars_Express/SEMLF6D3M5E_1.html
What's new is the possibility of large amounts of water in warmer parts of
the planet.
Now for some math. It's always great when two subjects you're interested in
turn out to be bits of the same big picture. That's why I've been really
excited lately about Bott periodicity and the "superBrauer group".
I wrote about Bott periodicity in "week105", and about the Brauer group
in "week209", but I should remind you about them before putting them together.
Bott periodicity is all about how math and physics in (n+8)-dimensional space
resemble math and physics in n-dimensional space. It's a weird and wonderful
pattern that you'd never guess without doing some calculations. It shows up
in many guises, which turn out to all be related. The simplest one to verify
is the pattern of Clifford algebras.
You're probably used to the complex numbers, where you throw in just *one*
square root of -1, called i. And maybe you've heard of the quaternions, where
you throw in *two* square roots of -1, called i and j, and demand that they
anticommute:
ij = -ji
This implies that k = ij is another square root of -1. Try it and see!
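If you want to try it concretely, the quaternion units can be modeled as 2x2 complex matrices (a standard representation; the code below is my own sketch, not part of the article):

```python
import numpy as np

# A standard 2x2 complex matrix representation of the quaternion
# units (my own illustrative choice of matrices).
one = np.eye(2, dtype=complex)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, 1], [-1, 0]], dtype=complex)
k = i @ j                          # define k := ij

# i and j are square roots of -1 that anticommute...
assert np.allclose(i @ i, -one)
assert np.allclose(j @ j, -one)
assert np.allclose(i @ j, -(j @ i))

# ...and then k = ij is automatically another square root of -1:
assert np.allclose(k @ k, -one)
```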
In the late 1800s, Clifford realized there's no need to stop here. He
invented what we now call the "Clifford algebras" by starting with the
real numbers and throwing in n square roots of -1, all of which anticommute
with each other. The result is closely related to rotations in n+1
dimensions, as I explained in "week82".
I'm not sure who first worked out all the Clifford algebras - perhaps it was
Cartan - but the interesting fact is that they follow a periodic pattern.
If we use C_n to stand for the Clifford algebra generated by n anticommuting
square roots of -1, they go like this:
C_0 = R
C_1 = C
C_2 = H
C_3 = H + H
C_4 = H(2)
C_5 = C(4)
C_6 = R(8)
C_7 = R(8) + R(8)
where
R(n) means n x n real matrices,
C(n) means n x n complex matrices, and
H(n) means n x n quaternionic matrices.
All these become algebras with the usual addition and multiplication of
matrices. Finally, if A is an algebra, A + A consists of pairs of guys
in A, with pairwise addition and multiplication.
What happens next? Well, from then on things sort of "repeat" with period 8:
C_{n+8} consists of 16 x 16 matrices whose entries lie in C_n!
So, you can remember all the Clifford algebras with the help of this
eight-hour clock:

                     0
                     R
             7               1
           R+R               C
         6   R             H   2
             C            H+H
             5               3
                     H
                     4
To use this clock, you have to remember to use matrices of the right size to
get C_n to have dimension 2^n. So, when I write "R + R" next to the "7" on
the clock, I don't mean C_7 is really R + R. To get C_7, you have to take
R + R and beef it up until it becomes an algebra of dimension 2^7 = 128. You
do this by taking R(8) + R(8), since this has dimension 8 x 8 + 8 x 8 = 128.
Similarly, to get C_{10}, you note that 10 is 2 modulo 8, so you look at
"2" on the clock and see "H" next to it, meaning the quaternions. But to get
C_{10}, you have to take H and beef it up until it becomes an algebra of
dimension 2^{10} = 1024. You do this by taking H(16), since this has
dimension 4 x 16 x 16 = 1024.
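Here's a little sketch (the function name and table encoding are mine) that mechanizes the clock-plus-beefing-up recipe: look up n mod 8, then pick the matrix size so the total dimension comes out to 2^n:

```python
from math import isqrt

# The 8-hour clock: (division ring, its real dimension, doubled?).
CLOCK = {0: ('R', 1, False), 1: ('C', 2, False), 2: ('H', 4, False),
         3: ('H', 4, True),  4: ('H', 4, False), 5: ('C', 2, False),
         6: ('R', 1, False), 7: ('R', 1, True)}

def clifford(n):
    """Name the Clifford algebra C_n by beefing up the clock entry."""
    ring, d, doubled = CLOCK[n % 8]
    total = 2 ** n                   # dim C_n = 2^n
    if doubled:
        total //= 2                  # each summand carries half
    size = isqrt(total // d)         # solve d * size^2 = total
    name = f"{ring}({size})" if size > 1 else ring
    return name + " + " + name if doubled else name

print(clifford(7))    # R(8) + R(8)
print(clifford(10))   # H(16)
```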
This "beefing up" process is actually quite interesting. For any associative
algebra A, the algebra A(n) consisting of n x n matrices with entries in A
is a lot like A itself. The reason is that they have equivalent categories
of representations!
To see what I mean by this, remember that a "representation" of an algebra
is a way for its elements to act as linear transformations of some vector
space. For example, R(n) acts as linear transformations of R^n by matrix
multiplication, so we say R(n) has a representation on R^n. More generally,
for any algebra A, the algebra A(n) has a representation on A^n.
More generally still, if we have any representation of A on a vector space V,
we get a representation of A(n) on V^n. It's less obvious, but true, that
*every* representation of A(n) comes from a representation of A this way.
In short, just as n x n matrices with entries in A form an algebra A(n) that's a
beefed-up version of A itself, every representation of A(n) is a beefed-up
version of some representation of A.
Even better, the same sort of thing is true for maps between representations
of A(n). This is what we mean by saying that A(n) and A have equivalent
categories of representations. If you just look at the categories of
representations of these two algebras as abstract categories, there's no
way to tell them apart! We say two algebras are "Morita equivalent" when
this happens.
It's fun to study Morita equivalence classes of algebras - say, algebras over
the real numbers. The tensor product of algebras gives us a way
to multiply these classes. If we just consider the invertible classes, we get
a *group*. This is called the "Brauer group" of the real numbers.
The Brauer group of the real numbers is just Z/2, consisting of the classes
[R] and [H]. These correspond to the top and bottom of the Clifford clock!
Part of the reason is that
H tensor H = R(4)
so when we take Morita equivalence classes we get
[H] x [H] = [R]
But, you may wonder where the complex numbers went! Alas, the Morita
equivalence class [C] isn't invertible, so it doesn't live in the Brauer
group. In fact, we have this little multiplication table for tensor products
of algebras:
tensor |  R     C      H
-------+--------------------
   R   |  R     C      H
   C   |  C     C+C    C(2)
   H   |  H     C(2)   R(4)
Anyone with an algebraic bone in their body should spend an afternoon
figuring out how this works! But I won't explain it now.
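One entry at least is easy to check by machine: H tensor H acts on H = R^4 by v -> a v conj(b), and these operators span all of R(4). Here's a numerical sketch (my own code; the helpers qmul, left, right_conj are names I made up):

```python
import numpy as np

# Quaternion multiplication in coordinates (1, i, j, k).
def qmul(a, b):
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

E = np.eye(4)                       # basis 1, i, j, k of H = R^4

def left(a):                        # matrix of v -> a v
    return np.column_stack([qmul(a, e) for e in E])

def right_conj(b):                  # matrix of v -> v * conj(b)
    bc = b * np.array([1, -1, -1, -1])
    return np.column_stack([qmul(e, bc) for e in E])

# The 16 operators L_a R_{conj(b)} realize H tensor H acting on R^4;
# they span the full 16-dimensional space of 4x4 real matrices.
ops = [left(a) @ right_conj(b) for a in E for b in E]
M = np.array([op.ravel() for op in ops])
print(np.linalg.matrix_rank(M))     # 16, so H tensor H = R(4)
```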
Instead, I'll just note that the complex numbers are very aggressive and
infectious - tensor anything with a C in it and you get more C's. That's
because they're a field in their own right - and that's why they don't
live in the Brauer group of the real numbers.
They do, however, live in the *superBrauer* group of the real numbers,
which is Z/8 - the Clifford clock itself!
But before I explain that, I want to show you what the categories of
representations of the Clifford algebras look like:
                                0
                       real vector spaces
              7                                  1
   split real vector spaces            complex vector spaces
      6  real vector spaces         quaternionic vector spaces  2
         complex vector spaces    split quaternionic vector spaces
              5                                  3
                     quaternionic vector spaces
                                4
You can read this information off the 8-hour Clifford clock I showed you
before, at least if you know some stuff:
A real vector space is just something like R^n
A complex vector space is just something like C^n
A quaternionic vector space is just something like H^n
and a "split" vector space is a vector space that's been written as the direct
sum of two subspaces.
Take C_3, for example - the Clifford algebra generated by 3 anticommuting
square roots of -1. The Clifford clock tells us this is H + H. And if you
think about it, a representation of this is just a pair of representations of
H. So, it's two quaternionic vector spaces - or if you prefer, a "split"
quaternionic vector space.
Or take C_7. The Clifford clock says this is R + R... or at least Morita
equivalent to R + R: it's actually R(8) + R(8), but that's just a beefed-up
version of R + R, with an equivalent category of representations. So, the
category of representations of C_7 is *equivalent* to the category of split
real vector spaces.
And so on. Note that when we loop all the way around the clock, our
Clifford algebra becomes 16 x 16 matrices of what it was before, but this
is Morita equivalent to what it was. So, we have a truly period-8 clock
of categories!
But here's the really cool part: there are also arrows going clockwise and
counterclockwise around this clock! Arrows between categories are called
"functors".
Each Clifford algebra is contained in the next one, since they're built
by throwing in more and more square roots of -1. So, if we have a
representation of C_n, it gives us a representation of C_{n-1}. Ditto
for maps between representations. So, we get a functor from the category
of representations of C_n to the category of representations of C_{n-1}.
This is called a "forgetful functor", since it "forgets" that we have
representations of C_n and just thinks of them as representations of C_{n-1}.
So, we have forgetful functors cycling around counterclockwise!
Even better, all these forgetful functors have "left adjoints" going
back the other way. I talked about left adjoints in "week77",
so I won't say much about them now. I'll just give an example.
Here's a forgetful functor:

                          forget complex structure
  complex vector spaces -----------------------------> real vector spaces

which is one of the counterclockwise arrows on the Clifford clock.
This functor takes a complex vector space and forgets your ability to multiply
vectors by i, thus getting a real vector space. When you do this to C^n,
you get R^{2n}.
This functor has a left adjoint:

                                 complexify
  complex vector spaces <----------------------------- real vector spaces
where you take a real vector space and "complexify" it by tensoring it with
the complex numbers. When you do this to R^n, you get C^n.
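On objects, the two functors are easy to sketch in code (an illustration of mine, just tracking dimensions; the function names are made up):

```python
import numpy as np

# "forget" views C^n as R^{2n} by splitting real and imaginary
# parts; "complexify" tensors R^n with C over R.

def forget(v):                      # C^n -> R^{2n}
    return np.concatenate([v.real, v.imag])

def complexify(w):                  # R^n -> C^n
    return w.astype(complex)

v = np.array([1 + 2j, 3 - 1j])      # a vector in C^2
assert forget(v).shape == (4,)      # lands in R^4 = R^{2n}
assert complexify(np.ones(3)).dtype == complex   # now a C^3 vector
# forgetting after complexifying doubles the real dimension:
assert forget(complexify(np.ones(3))).shape == (6,)
```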
So, we get a beautiful version of the Clifford clock with forgetful functors
cycling around counterclockwise and their left adjoints cycling around
clockwise! When I realized this, I drew a big picture of it in my math
notebook  I always carry around a notebook for precisely this sort of thing.
Unfortunately, it's a bit hard to draw this chart in ASCII, so I won't
include it here.
Instead, I'll draw something easier. For this, note the following mystical
fact. The Clifford clock is symmetrical under reflection around the
3 o'clock / 7 o'clock axis:
                                0
                       real vector spaces
              7                                  1
   split real vector spaces            complex vector spaces
                   \
                     \
                       \
                         \
      6  real vector spaces \       quaternionic vector spaces  2
                              \
                                \
                                  \
                                    \
         complex vector spaces    split quaternionic vector spaces
              5                                  3
                     quaternionic vector spaces
                                4
It seems bizarre at first that it's symmetrical along *this* axis instead
of the more obvious 0 o'clock / 4 o'clock axis. But there's a good reason,
which I already mentioned: the Clifford algebra C_n is related to rotations in
n+1 dimensions.
I would be very happy if you had enough patience to listen to a full
explanation of this fact, along with everything else I want to say. But
I bet you don't... so I'll hasten on to the really cool stuff.
First of all, using this symmetry we can fold the Clifford clock in half...
and the forgetful functors on one side perfectly match their left adjoints
on the other side!
So, we can save space by drawing this "folded" Clifford clock:
            split real vector spaces
                  |         ^
 forget splitting |         |  double
                  v         |
              real vector spaces
                  |         ^
       complexify |         |  forget complex structure
                  v         |
            complex vector spaces
                  |         ^
    quaternionify |         |  forget quaternionic structure
                  v         |
         quaternionic vector spaces
                  |         ^
           double |         |  forget splitting
                  v         |
       split quaternionic vector spaces
The forgetful functors march up the right-hand side, and their
left adjoints march back down the left!
The arrows going between 7 o'clock and 0 o'clock look a bit weird:
            split real vector spaces
                  |         ^
 forget splitting |         |  double
                  v         |
              real vector spaces
Why is "forget splitting" on the left, where the left adjoints belong, when
it's obviously an example of a forgetful functor?
One answer is that this is just how it works. Another answer is that it
happens when we wrap all the way around the clock  it's like how going from
midnight to 1 am counts as going forwards in time even though the number is
getting smaller. A third answer is that the whole situation is so symmetrical
that the functors I've been calling "left adjoints" are also "right adjoints"
of their partners! So, we can change our mind about which one is
"forgetful", without getting in trouble.
But enough of that: I really want to explain how this stuff is related
to the superBrauer group, and then tie it all in to the *topology* of Bott
periodicity. We'll see how far I get before giving up in exhaustion....
What's a superBrauer group? It's just like a Brauer group, but where we
use superalgebras instead of algebras! A "superalgebra" is just physics
jargon for a Z/2-graded algebra - that is, an algebra A that's a direct
sum of an "even" or "bosonic" part A_0 and an "odd" or "fermionic" part A_1:
A = A_0 + A_1
such that multiplying a guy in A_i and a guy in A_j gives a guy in A_{i+j},
where we add the subscripts mod 2.
The tensor product of superalgebras is defined differently than for algebras.
If A and B are ordinary algebras, when we form their tensor product, we
decree that everybody in A commutes with everyone in B. For superalgebras
we decree that everybody in A "supercommutes" with everyone in B  meaning
that
ab = ba
if either a or b is even (bosonic), while
ab = -ba
if a and b are both odd (fermionic).
Apart from these modifications, the superBrauer group works almost like the
Brauer group. We start with superalgebras over our favorite field - here
let's use the real numbers. We say two superalgebras are "Morita equivalent"
if they have equivalent categories of representations. We can multiply
these Morita equivalence classes by taking tensor products, and if we just
keep the invertible classes we get a group: the superBrauer group.
As I've hinted already, the superBrauer group of the real numbers is Z/8 -
just the Clifford algebra clock in disguise!
Here's why:
The Clifford algebras all become superalgebras if we decree that all the
square roots of -1 that we throw in are "odd" elements. And if we do this,
we get something great:
C_n tensor C_m = C_{n+m}
The point is that all the square roots of -1 we threw in to get C_n
*anticommute* with those we threw in to get C_m.
Taking Morita equivalence classes, this means
[C_n] [C_m] = [C_{n+m}]
but we already know that
[C_{n+8}] = [C_n]
so we get the group Z/8. It's not obvious that this is *all* the superBrauer
group, but it actually is - that's the hard part.
Now let's think about what we've got. We've got the superBrauer group,
Z/8, which looks like an 8-hour clock. But before that, we had the categories
of representations of Clifford algebras, which formed an 8-hour clock with
functors cycling around in both directions.
In fact these are two sides of the same coin - or clock, actually. The
superBrauer group consists of Morita equivalence classes of Clifford
algebras, where Morita equivalence means "having equivalent categories
of representations". But, our previous clock just shows their categories
of representations!
This suggests that the functors cycling around in both directions are secretly
an aspect of the superBrauer group. And indeed they are! The functors going
clockwise are just "tensoring with C_1", since you can tensor a representation
of C_n with C_1 and get a representation of C_{n+1}. And the functors going
counterclockwise are "tensoring with C_{-1}"... or C_7 if you insist, since
C_{-1} doesn't strictly make sense, but 7 equals -1 mod 8, so it does the
same job.
Hmm, I think I'm tired out. I didn't even get to the topology yet! Maybe
that'll be good as a separate little story someday. If you can't wait,
just read this:
4) John Milnor, Morse Theory, Princeton U. Press, Princeton, New Jersey, 1963.
You'll see here that a representation of C_n is just the same as a vector
space with n different anticommuting ways to "rotate vectors by 90 degrees",
and that this is the same as a real inner product space equipped with a map
from the n-sphere into its rotation group, with the property that the north
pole of the n-sphere gets mapped to the identity, and each great circle
through the north pole gives some action of the circle as rotations. Using
this, and stuff about Clifford algebras, and some Morse theory, Milnor gives a
beautiful proof that
Omega^8(SO(infinity)) ~ SO(infinity)
or in English: the 8-fold loop space of the infinite-dimensional rotation
group is homotopy equivalent to the infinite-dimensional rotation group!
The thing I really like, though, is that Milnor relates the forgetful functors
I was talking about to the process of "looping" the rotation group. That's
what these maps from spheres into the rotation group are all about... but I
want to really explain it all someday!
I learned about the superBrauer group here:
5) V. S. Varadarajan, Supersymmetry for Mathematicians: An Introduction,
American Mathematical Society, Providence, Rhode Island, 2004.
though the material here on this topic is actually a summary of some
lectures by Deligne in another book I own:
6) P. Deligne, P. Etingof, D.S. Freed, L. Jeffrey, D. Kazhdan, J. Morgan,
D.R. Morrison and E. Witten, Quantum Fields and Strings: A Course For
Mathematicians 2 vols., American Mathematical Society, Providence, 1999.
Notes also available at http://www.math.ias.edu/QFT/
Varadarajan's book doesn't go as far, but it's much easier to read, so I
recommend it as a way to get started on "super" stuff.

Previous issues of "This Week's Finds" and other expository articles on
mathematics and physics, as well as some of my research papers, can be
obtained at
http://math.ucr.edu/home/baez/
For a table of contents of all the issues of This Week's Finds, try
http://math.ucr.edu/home/baez/twf.html
A simple jumpingoff point to the old issues is available at
http://math.ucr.edu/home/baez/twfshort.html
If you just want the latest issue, go to
http://math.ucr.edu/home/baez/this.week.html 



Herman Rubin science forum Guru
Joined: 25 Mar 2005
Posts: 730

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: Hellinger Distance



In article <cv0152$11p$1@dizzy.math.ohio-state.edu>,
mdshafri <mdshafri@yahoo.com> wrote:
Quote:  How do we set the range of the d value to represent the measurement?
We could set d = 0 for no change between two discrete distributions C
and H. What happens if the values of C are far bigger than those of H?
Please, anybody, advise me!!
On 19 Apr 1997 13:46:35 -0500, Herman Rubin wrote:
In article <335237AD.59E2@dcs.rhbnc.ac.uk>,
Peter Burge <peteb@dcs.rhbnc.ac.uk> wrote:
Please could someone give me a reference for a measure
known as the Hellinger Distance between two
discrete distributions C and H. In Latex,
d = \sum_{i=0}^{K} (\sqrt{C_{i}} - \sqrt{H_{i}})^{2}
This expression may be lacking additional terms.
It is not lacking any terms. For general measures, although
it is not likely to be of much use unless they are finite,
it is
d = \int (sqrt(dF) - sqrt(dG))^2.
That this is welldefined can be seen by using as a base
measure H = F + G. It is the supremum of the discrete
versions obtained by using finite partitions.
The real introduction of this into mathematics was by
Kakutani, who gives credit to Hellinger in a footnote.

Assuming that C and H are finite measures, the Hellinger
distance between C and H is welldefined, and bounded
by the sum of their measures on the whole space.
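In code, the discrete version of this distance reads as follows (a sketch of my own; for probability distributions both total masses are 1, so d is bounded by 2):

```python
import numpy as np

# Squared Hellinger distance between discrete measures C and H:
# d = sum_i (sqrt(C_i) - sqrt(H_i))^2.
def hellinger(C, H):
    C, H = np.asarray(C, float), np.asarray(H, float)
    return float(np.sum((np.sqrt(C) - np.sqrt(H)) ** 2))

C = [0.2, 0.3, 0.5]
H = [0.1, 0.4, 0.5]
d = hellinger(C, H)
# d = 0 iff the distributions coincide, and expanding the square
# shows d <= sum(C) + sum(H), the sum of the total masses:
assert hellinger(C, C) == 0
assert 0 <= d <= sum(C) + sum(H)
```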

This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
hrubin@stat.purdue.edu Phone: (765) 494-6054 FAX: (765) 494-0558



alain verghote science forum Guru Wannabe
Joined: 29 Apr 2005
Posts: 293

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: fractional iteration of functions



Dear Friends,
I've read with great interest all your recent articles about
" fractional iteration of functions".
I believe there is a missing point: ABEL FUNCTIONS.
1°) I'll take the example given by NPC, g^[2](x) = x - x^2,
and start from a 'nearby' function f(x) = x^2 + x + 1/4, of
which the ABEL function phi(x) = ln(ln(1/2x))/ln(2) is known,
so you can get any fractional iterate of f(x).
Example: m(x) = exp((ln(2)ln(12x))*sqrt(2)) + 1/2
verifies m(m(x)) = x^2 + x + 1/4 = f(x).
A fruitful track is, when needed, approximating Abel functions
by invertible functions.
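The Abel-function recipe can be tried on the textbook case f(x) = x^2, whose Abel function is explicit (this is my own illustrative example, chosen because its formulas are unambiguous, not Alain's):

```python
import math

# For f(x) = x^2 on x > 1, phi(x) = log2(ln x) satisfies the Abel
# equation phi(f(x)) = phi(x) + 1, so the r-th (even fractional)
# iterate is f^[r](x) = phi^{-1}(phi(x) + r).

def f(x):
    return x * x

def phi(x):
    return math.log2(math.log(x))

def phi_inv(y):
    return math.exp(2.0 ** y)

def iterate(x, r):                  # fractional iterate f^[r](x)
    return phi_inv(phi(x) + r)

x = 1.7
half = iterate(x, 0.5)              # the half-iterate is x**sqrt(2)
assert abs(half - x ** math.sqrt(2)) < 1e-9
assert abs(iterate(half, 0.5) - f(x)) < 1e-9   # m(m(x)) = f(x)
```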
2°) d/dx phi(x) also has nice properties:
a) as phi(f(x)) = phi(x) + 1, f'(x)*phi'(f(x)) = phi'(x),
we may solve equations like g(x) + 1/(x*ln(x)) = 2x*g(x^2)...
b) the r-th iterate of f(x): f^[r](x) = exp(r/phi'(x) D) o x,
with D <=> d/dx.
Sincerely, Alain.



Eugene Stefanovich science forum Guru
Joined: 24 Mar 2005
Posts: 519

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: How real are the "Virtual" partticles?



Arnold Neumaier wrote:
Quote:  Could you be more specific please. Suppose I would like to find
the time dependence of the expectation value of position (or momentum)
of the electron in the system of two interacting particles
(electron + proton). This quantity is definitely measurable.
How would you do this calculation in QED?
Nobody can do it exactly.
I can write you a formula (see eq. (12.40) in the book)
r(t) = <0| d a exp(iH_d t) R exp(-iH_d t) a^* d^* |0>
where |0> is the no-particle vacuum, a^*, a and d^*, d are particle
operators for electrons and protons, respectively.
H_d = H_0 + a^*d^*ad + ...
is the dressed particle Hamiltonian, and R is the (NewtonWigner)
position operator for the electron.
This expression is formally exact at low energies where the
bremsstrahlung photons, pair creation, etc. can be neglected.
It is not exact since to do anything with it you need to work
at a finite order; at infinite order you run into convergence
problems. That's why I wrote that nobody can do it exactly.
Once one uses approximations, one has to compare which method produced
the most accurate results with a given effort. This decides on the
merit of the approximation, not the fact that you have a divergent
infinite series and cut it off somewhere. You could also instead cut off
the energy scale at a fixed cutoff energy, and also get a
theory that is as formally exact as yours.

I think there is a big difference between two types of approximations:
1) momentum cutoff and 2) use of limited perturbation order.
The momentum cutoff is an artificial and explicitly inaccurate way
of torturing the theory. Cutting off integrals is needed to compensate
for difficulties in the ill-defined Hamiltonian. This kills the relativistic
invariance and other good properties. Use of the dressed particle
Hamiltonian allows you to get rid of the momentum cutoff and completely
eliminate this source of errors.
Limited perturbation order is an unavoidable feature of calculations.
No matter whether you use QED or RQD, you are stuck with perturbation
theory, simply because we don't know how to do better (yet).
But experience
shows that this approximation is not that bad. All the phenomenal results
of QED were obtained by using just a few low-order terms. So, for
all practical purposes, this seems to be working just fine. As far as I
know, there is no hard proof that the perturbation series diverges.
So, there is no reason to panic, at least not until we reach the
137th perturbation order in our calculations.
Eugene Stefanovich 



Arnold Neumaier science forum Guru
Joined: 24 Mar 2005
Posts: 379

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: How real are the "Virtual" partticles?



Eugene Stefanovich wrote:
Quote: 
Suppose that you are right and all these fancy methods can indeed
describe the time evolution. Then I would like to ask a question.
Why does the straightforward quantum formula exp(-iHt) not work in
QFT? Why do we need to go to such lengths to invent substitutes for
this simple and transparent operator?

Because H is _defined_ by a limiting process involving renormalization.
Arnold Neumaier 



Eugene Stefanovich science forum Guru
Joined: 24 Mar 2005
Posts: 519

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: How real are the "Virtual" partticles?



Arnold Neumaier wrote:
Quote: 
I put enough effort into these discussions with you and will
leave you now alone. You don't need to justify your position
further. I knew it already from last year and don't share
your assessment; and I have nothing new to say.

Let me then summarize the main points of our discussion
(the way I see it).
My major original claim was that in the current QED theory
(in the limit of infinite cutoff) there is
no Hamilton operator H that can be simultaneously used
in both the perturbative expansion for the S-matrix and in the
formula for the time evolution operator exp(-iHt). The
latter formula remains ill-defined because
a) existing H have infinite coefficients
b) existing H have trilinear terms that lead to the "instability"
of electrons and the vacuum.
These problems can be fixed by the unitary dressing transformation of
the QED Hamiltonian. The dressed particle Hamiltonian allows one to
perform calculations without regularization and renormalization.
It can be used equally well for scattering, time evolution,
bound states, etc. The vacuum and oneparticle states are stable.
You said that my assessment of the QED problems is wrong and
presented the following arguments:
1) The Glazek-Wilson Hamiltonian has finite coefficients in the
limit of infinite cutoff.
I agreed with that and conceded that my claim a) wasn't correct.
The statement b) is still valid in the Glazek-Wilson approach.
2) The "closed path" formalism allows one to calculate the time evolution.
I maintain that this approach requires a well-defined Hamiltonian,
and, even if used with the Glazek-Wilson Hamiltonian, it
still suffers from problem b).
3) The Dirac-Fock approach can be derived from the QED Hamiltonian
and does not have problems a) and b).
I maintain that there is no consistent derivation of the Dirac-Fock
Hamiltonian from first principles. This is a heuristic semi-empirical
approach.
4) The dressing is equivalent to the wave function renormalization
which takes care of the two problems a) and b) in QED.
I don't see how multiplication by a factor (wave function
renormalization) can be equivalent to a unitary transformation
(dressing). Your statement remains unexplained.
Eugene Stefanovich. 



Larne Pekowsky science forum beginner
Joined: 24 Mar 2005
Posts: 3

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: A classical mechanics aperitif



Quote:  There is a natural bracket on vector fields which is the
commutator. There is another natural bracket on Hamiltonians (or
better thought as just arbitrary functions on the symplectic space,
or 'observables') which is the Poisson bracket. It is not hard to
check that these two brackets are really the same.

Ok, I checked this and I think I've got it: if H_1 and H_2 are
functions, their exterior derivatives are (d H_1) and (d H_2), and the
corresponding vector fields are (d H_1)' and (d H_2)' then
(d {H_1, H_2})' = [(d H_1)', (d H_2)']
By the way, is there a name for the (d _)' operation?
Quote:  And of course, there is also Noethers theorem: If you have a symmetry
given by a vector field V on your symplectic space that preserves w
(that is the Lie derivative of w in direction V vanishes) inserting
it
in the first slot of w gives you a closed (check!) oneform.

Ok, for the one dimensional case
w(V,.) = V^p dq - V^q dp
d(V^p dq - V^q dp) = (\partial_p V^p) dp /\ dq
                       - (\partial_q V^q) dq /\ dp
                   = (\partial_p V^p + \partial_q V^q) dp /\ dq
L_V (w) is a little more involved, especially since I've never taken
the Lie derivative of a form before! But following the recipe at
http://mathworld.wolfram.com/LieDerivative.html I get the same thing:
L_V (w) = (\partial_p V^p + \partial_q V^q) dp /\ dq
So yes, if this vanishes then so does the exterior derivative, and the
form is closed. And then, as you point out, w(V,.) is locally the d
of some function (or globally if our space is simply connected?)
Quote:  This function is the charge that generates the symmetry

Your use of the word "charge" here is... suggestive. Not to get too
far afield, but is this related to things like electric charge and
U(1) symmetry in gauge theory?
Quote:  First, the connection between exponentiation and integration here
seems very deep and more than a little mysterious. Anyone have
any insights on this?
Hmm, I would say, this comes just from the way you solve an ODE with
constant coefficients.

Awww is that it? I was hoping for something a bit more exotic. Maybe
it's just that exponentiating operators still seems exotic to me.
For example, when looking at the harmonic oscillator I got (ignoring
factors of 2 and m and k)
h = p \partial_q  q \partial_p
Integrating to get the flow
p = cos(t)
q = sin(t)
is straightforward, but I'm not sure how I would get this directly
from exp(h).
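For what it's worth, one way to get the flow directly from exp(h) is to note that h acts linearly on the coordinates (q, p), so it exponentiates as a 2x2 matrix (a sketch of mine, not from the thread):

```python
import numpy as np
from scipy.linalg import expm

# h = p d_q - q d_p generates dq/dt = p, dp/dt = -q, i.e. the
# linear system d/dt (q, p) = A (q, p) with the matrix A below.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

t = 0.7
flow = expm(t * A)                  # exponentiate the generator
# Since A^2 = -I, exp(tA) = cos(t) I + sin(t) A: the rotation flow.
expected = np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])
assert np.allclose(flow, expected)
```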
Quote:  Finally, I've heard it said that flows "preserve the symplectic
structure."
It just means (in fancy language) that starting from some H, the Lie
derivative of w w.r.t the vector field that comes from H vanishes.

Ah, and this is the inverse of what you said earlier: rather than
using a moment map to get from a vector field with L_v(w) = 0 to a
function, this says that starting from a function the vector field
obtained as the dual of dH will have L_v(w) = 0. Right?
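That last claim can be checked symbolically in coordinates (a sketch of my own using sympy): for the vector field coming from an arbitrary Hamiltonian H, namely V^q = dH/dp and V^p = -dH/dq, the coefficient \partial_p V^p + \partial_q V^q computed earlier in the thread vanishes identically, so L_V(w) = 0.

```python
from sympy import symbols, Function, diff, simplify

q, p = symbols('q p')
H = Function('H')(q, p)         # an arbitrary Hamiltonian H(q, p)

Vq = diff(H, p)                 # Hamiltonian vector field components
Vp = -diff(H, q)

# Mixed partials commute, so the coefficient of dp /\ dq in L_V(w)
# vanishes for every H: Hamiltonian flows preserve w.
assert simplify(diff(Vp, p) + diff(Vq, q)) == 0
```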

This account is a spamtrap. To send me real mail, use
larne@caneprince.com but change a prince into a toad. 



Larne Pekowsky science forum beginner
Joined: 24 Mar 2005
Posts: 3

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: A classical mechanics aperitif



In article <42402406.DCF70860@novgorod.net>, Alexey Popov
<avp@novgorod.net> wrote:
Quote:  A volume form can be derived from a given metric. But, in general,
a volume is not directly connected with a metric. A volume form is a
fixed non-degenerate n-form, where n is the dimension of our manifold.
A symplectic space has a volume form based on the symplectic structure.

Thanks for the clarification. I'd only ever seen the volume form
defined with a factor of det g - which made sense, since I would
expect the conventional notion of area and volume would depend on the
metric.

This account is a spamtrap. To send me real mail, use
larne@caneprince.com but change a prince into a toad. 



Arnold Neumaier science forum Guru
Joined: 24 Mar 2005
Posts: 379

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: a question about non  locality



Kevin Blake wrote:
Quote:  In Bjorken and Drell - QED part 1, I read a statement that one doesn't
use a square-rooted Hamiltonian (H = sqrt(m^2 c^4 + c^2 p^2)) in a wave
equation of the Schroedinger type
(ih dpsi/dt = H psi), because after expanding the root in a Taylor series
one gets all powers to infinity of the space derivatives. This makes
the theory nonlocal.
1. Now I don't understand how the (n+1)-th derivative is more nonlocal
than the n-th derivative - in the end, everything is taken in the limit
at the local point.

It is only the infinite sum that is nonlocal.
sum h^k/k! d^k f(x)/dx^k = f(x+h)
is the value at a point at some distance from x.
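This identity is easy to check numerically (a quick sketch of my own, using f = sin): any truncation uses only derivatives at x, hence local data, yet the full sum recovers the translated value f(x + h).

```python
import math

x, h = 0.3, 1.5
# derivatives of sin cycle with period 4: sin, cos, -sin, -cos, ...
derivs = [math.sin(x), math.cos(x), -math.sin(x), -math.cos(x)]

partial, total = [], 0.0
for k in range(30):
    total += h**k / math.factorial(k) * derivs[k % 4]
    partial.append(total)

# the full series reproduces the translation...
assert abs(partial[-1] - math.sin(x + h)) < 1e-12
# ...while a low-order truncation (finitely many derivatives,
# hence local) is still far from it:
assert abs(partial[2] - math.sin(x + h)) > 0.01
```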
Quote:  2.Then in the quantum theory based on Schroedinger equation there are
only second order derivatives over space but nevertheless one is left
at the end with a nonlocal theory (EPR type paradoxes).

These are two different notions of nonlocality.
Arnold Neumaier 



Susy science forum beginner
Joined: 24 Mar 2005
Posts: 6

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: a question about non  locality



[Moderator's note: please quote to provide context. ik]
Mmmm... I do not see in what sense f(x) in that example was nonlocal.
I mean, how do we determine whether a function, or a term in the Hamiltonian
or the Lagrangian, is local or not? What defines locality here?
I still have trouble understanding the nonlocal behaviour of
SQRT(m^2 + p^2), as many authors describe it in their books.
susy



Aaron Bergman science forum addict
Joined: 24 Mar 2005
Posts: 94

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: a question about non-locality



In article <slrnd45abn.dc8.robert@localhost.localdomain>,
"Robert C. Helling" <robert@hellingdell600.iuhb02.iubremen.de>
wrote:
Quote:  On Wed, 23 Mar 2005 20:52:01 +0000 (UTC), Kevin Blake <kvblake2003@yahoo.com>
wrote:
In Bjorken and Drell - QED part 1 - I read a statement that one doesn't
use a square-rooted Hamiltonian (H = sqrt(m^2 c^4 + c^2 p^2)) in a wave
equation of the Schroedinger type
(i hbar dpsi/dt = H psi), because after expanding the root in a Taylor series
one gets all powers of the space derivatives, up to infinity. This makes
the theory nonlocal.
1. Now I don't understand how the (n+1)-th derivative is more nonlocal
than the n-th derivative - in the end everything is taken in the limit at a
local point.
You are right: any finite number of derivatives is local. However,
a power series in derivatives can be nonlocal. The standard example is
the translation operator exp(a d_x) (where d_x is the derivative):
(exp(a d_x) f)(x) = f(x+a),
and that's genuinely nonlocal.

This is said a lot, and I hadn't really thought about it much, but
there's something funky here. The above formula only holds for analytic
functions. Now, at least in whatever way we can make sense of it,
analytic functions are going to be of measure zero in the path integral.
On the other hand, in the stationary phase approximation, you'll
probably get analytic functions, and things really will look nonlocal.
So what's going on here?
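Concretely, for a smooth but non-analytic function the translation series fails outright; a minimal sketch using the standard exp(-1/x) example:

```python
import math

# f is smooth everywhere but non-analytic at x = 0: every derivative
# of f vanishes there, so the translation series exp(a d_x) f at x = 0
# sums term-by-term to 0 -- yet f(a) is nonzero.
def f(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

a = 1.0
series_at_0 = 0.0           # all terms a**k/k! * f^(k)(0) vanish
print(series_at_0 == f(a))  # False: the series misses f(1) = exp(-1)
```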
Aaron 



Cl.Massé science forum Guru Wannabe
Joined: 24 Mar 2005
Posts: 149

Posted: Thu Mar 24, 2005 9:51 pm Post subject:
Re: a question about non-locality



"Kevin Blake" <kvblake2003@yahoo.com> a écrit dans le message de
news:b21e95fa.0503230651.4649df80@posting.google.com...
Quote:  In Bjorken and Drell - QED part 1 - I read a statement that one doesn't
use a square-rooted Hamiltonian (H = sqrt(m^2 c^4 + c^2 p^2)) in a wave
equation of the Schroedinger type
(i hbar dpsi/dt = H psi), because after expanding the root in a Taylor series
one gets all powers of the space derivatives, up to infinity. This makes
the theory nonlocal.

The Schroedinger equation isn't relativistic, so it isn't supposed to have
all the good properties. Asking which ones and why is really an exercise in
futility, or in mathematics.
Note that the *massive* Dirac equation doesn't use a square root either;
it is a simple factorization. Note again that the "squared" equation is
also a valid one. The only reason it isn't used is that it has fewer
independent variables. Note thirdly that the Maxwell equation is still
another factorization, one that uses another representation of the Poincaré
group, and this is the genuine reason why the Schrödinger equation isn't
amenable to such a treatment.
Quote:  1. Now I don't understand how the (n+1)-th derivative is more nonlocal
than the n-th derivative - in the end everything is taken in the limit at a
local point.

The Nth derivative, for N however large but finite, is local. That doesn't
imply that the operator stays local as N tends to infinity.
Quote:  2. Then in the quantum theory based on the Schroedinger equation there are
only second-order derivatives in space, but nevertheless one is left
in the end with a nonlocal theory (EPR-type paradoxes).

The equation by itself isn't nonlocal. Its interpretation is, since it
postulates projection onto an eigenstate at the time of a measurement.

~~~~ clmasse on free Fcountry
Liberty, Equality, Profitability. 



Guest

Posted: Sun Apr 10, 2005 7:42 am Post subject:
Re: McDowell-Mansouri gravity



John Baez wrote:
Quote:  I'm desperate, I'll talk to anyone with brains.

This has always been the motivating hope of anyone first posting to
usenet.
But you've been around a while. ;)
Quote:  I've been working through part of this paper with my students
Derek Wise and Jeffrey Morton. If you like this stuff about
McDowell-Mansouri gravity and topological quantum field theory,
I strongly recommend the later paper by Freidel and Starodubtsev:
http://arxiv.org/abs/hep-th/0501191

Thanks. I kicked myself for having not found this on my own when you
first posted this thread. It fleshes out their calculations quite a
bit, and has a lot of other cool stuff in it.
I'm really not sure yet whether I like the McDowell-Mansouri idea or not.
It does seem appealing to be able to work with just a connection,
rather than also a metric or frame. I also kind of like it
because of its relation to Yang-Mills - it seems to provide a
different path towards a unified view of gravity and the other gauge
fields. This is why I'm looking into it, anyway.
Quote:  I don't see how you're getting SO(10) into the game - i.e., I don't
see a *natural* way to get bivectors and trivectors in Cl_4
to correspond to bivectors generating SO(10). In fact, I don't
even get what you mean about 10 bivectors generating SO(10)!
In 10 dimensions the space of bivectors has dimension 10 choose 2 = 45,
so the dimension of the Lie algebra of SO(10) is 45. But you must
know that.

Eeek! This is what I get for not being careful enough. I meant to
type SO(5), not SO(10). Sorry! As my sad excuse I can only say there
were too many 10's floating around in my head when I typed that. (I
will consider myself zapped with a lightning bolt for that - but I'll
try to make up for it.)
Quote:  This Clifford algebra stuff is sort of independent of the key
McDowell-Mansouri idea, in my opinion - namely, combining the
frame field and SO(n) connection on an n-manifold into an SO(n+1)
connection. But, I guess you're right that it might blend well
with how the even part of Cl_{n+1} is Cl_n!

OK, I can do better than that now that I've had a chance to work on it.
See below.
Quote:  So he says. Alas, I've never understood more than tiny bits of this
model, and I don't see how they're supposed to fit together.

His presentation is abysmal. There's definitely good stuff in there
though. And I must admit I have nightmares in which I've spent twenty
years working on a beautiful model for the whole enchilada, but when I
finally understand how it all fits together it gives me the insight to
see I've only managed to replicate Tony's model, which I wasn't able to
get from just studying it the first time.
Quote:  The McDowell-Mansouri stuff was apparently pretty fashionable for
a while when people were studying diverse forms of supergravity,
back in the 80's. I guess this stuff has now been absorbed by
that behemoth they call string theory. But, I don't think most
people understand the geometry of it terribly well!

Well, it seems to me more of a neat trick than anything that's going to
have an elegant underlying geometric explanation. But differential
geometry is blessed and cursed with the ability to address everything
several different ways. So I guess it's a matter of picking the
geometric description you like.
It does seem like whenever I'm going through recent papers that have
interesting math there are always strings attached. Sometimes it's
just a section at the end where strings are mentioned and brought in
somehow. I suspect these authors are compelled to make a nod to the
behemoth in order to get their work published, which is very sad.
Quote:  Peter Woit gave me a crucial clue, in another post on this topic
over on sci.math.research. He writes:

I'll have to hang out over there a bit - forever in the hope of
finding people with brains. But I think my brains may be too small,
as I've always viewed proofs as means rather than ends. And
mathematical structure that's there for its own sake, rather than
giving a cool description of real-world physics, doesn't motivate me to
do the work it takes to grok it. Typical physicist that way, I guess.
Quote:  This is a Cartan connection. For a recent book that discusses
them in detail, see R.W. Sharpe, "Differential Geometry".
You can do the same thing with other pairs of Lie groups G and
subgroups H (here you're using G = O(n+1), H = O(n)), when
dim G - dim H = dim M. A G connection breaks up into an
H connection and an identification of Lie G / Lie H with the
tangent space to M at each point. The Lie H component of the
curvature of this Cartan G connection is the standard curvature; the
Lie G / Lie H component is the torsion.

That makes sense.
But, really, it just seems like a trick. You start with SO(5), break
those 10 Lie algebra generators (now connection 1-forms) into the 6 of
the SO(4) connection and the 4 for the frame, and cook up an action for
the original SO(5) connection that gives you GR after the split. It is
neat. But, here, let me show you what I just worked out as the
Clifford algebra approach, as you might be amused and/or like it
better.
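That 6 + 4 split of the 10 generators can be illustrated with a toy decomposition of an so(5) matrix (a sketch only, ignoring the bundle structure):

```python
import numpy as np

# so(5) = antisymmetric 5x5 matrices, with 10 independent generators.
# The Cartan-style split: a 4x4 antisymmetric block (6 generators, the
# SO(4) connection W) plus the last column (4 components, the frame e).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M - M.T            # a generic element of so(5)

W = A[:4, :4]          # so(4) part: still antisymmetric
e = A[:4, 4]           # frame part: a 4-vector

assert np.allclose(W, -W.T)
print(4 * 3 // 2 + 4 == 5 * 4 // 2)   # True: 6 + 4 = 10
```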
Let's just deal in Cl_4 (or Cl_(1,3), to be more precise) the whole time.
In differential geometry terms, a Clifford algebra fiber bundle.
Standard GR can be formulated as follows. The dynamic variables are
the frame, a Clifford vector valued 1form, e, and the connection, a
Clifford bivector valued 1form, W. Using the unit Clifford
fourvector element, g = gamma_0 gamma_1 gamma_2 gamma_3, (which
satisfies g g = 1) the action for gravity is
S = int < e e R(W) g + lambda e e e e g >
where < > means take the Clifford scalar part - or, if you like
using matrices, the trace. Lambda is the cosmological constant and R
is the Clifford bivector valued curvature 2-form,
R = d W + (1/2) W W
I think that's pretty cool in itself - no indices. :)
One thing to note: using Clifford algebra, the Hodge star
transformation, or duality, is just multiplication by g.
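These Clifford relations are easy to check numerically in the Dirac representation; a minimal sketch (note this uses the Lorentzian Cl(1,3), where the element g squares to -1, whereas g g = 1 holds in the Euclidean Cl_4):

```python
import numpy as np

# Gamma matrices in the Dirac representation; metric eta = diag(1,-1,-1,-1).
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

gamma = [np.block([[I2, Z2], [Z2, -I2]])]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sig]

# check the defining relations: gamma_a gamma_b + gamma_b gamma_a = 2 eta_ab
eta = np.diag([1.0, -1.0, -1.0, -1.0])
for a in range(4):
    for b in range(4):
        anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anti, 2 * eta[a, b] * np.eye(4))

# the unit four-vector element g = gamma_0 gamma_1 gamma_2 gamma_3
g = gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
print(np.allclose(g @ g, -np.eye(4)))  # True: g g = -1 in Cl(1,3)
```

The same computation with a Euclidean metric diag(1,1,1,1) gives g g = +1 instead.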
OK, now let's do something along the lines of McDowell-Mansouri. Taking
advantage of the fact that in a Clifford algebra it's OK to add and
multiply vectors, bivectors, and fourvectors, you can just write a
Clifford connection as
A = e + W
The curvature of this connection, a Clifford multivector valued
2-form, is
F = d A + (1/2) A A
= ( d e + (1/2) (e W + W e) ) + ( d W + (1/2) W W + (1/2) e e )
where the first term in ( ) is a Clifford vector (the torsion) and the
second term is a Clifford bivector: the curvature R plus a frame term.
The action we cook up to start with is
S = int < B (F - (1/2) B g) >
with B some arbitrary 2-form with Clifford vector and bivector parts.
The Clifford vector part of B is the Lagrange multiplier that makes the
vector part of F, the torsion, vanish. Varying the rest of B then
gives its bivector part as
B = - (R + (1/2) e e) g
and plugging this back into the action gives the effective action
S = int < (R + (1/2) e e) (R + (1/2) e e) g >
= int < R R g + e e R g + e e e e g >
which we also could have gotten just by starting with S = int < F F g > 



Guest

Posted: Mon Apr 11, 2005 5:29 am Post subject:
Re: McDowell-Mansouri gravity



Hmm, it looks like I got too long-winded and the last part of my post
got cut off - so, picking up after:
which we also could have gotten just by starting with S = int < F F g >



Nick Maclaren science forum Guru Wannabe
Joined: 26 Apr 2005
Posts: 127

Posted: Tue Apr 26, 2005 12:13 pm Post subject:
Re: Grav and Inertial Mass



In article <426D7292.94458A21@hate.spam.net>,
Uncle Al <UncleAl0@hate.spam.net> writes:
>
> Photons are unmassed bosons, electrons are massed fermions. How much
> of physics are you willing to trash to maintain your illusions? More
> read, less screed.
Oh, just the dogma - I am quite happy with the science.
It might surprise you to learn that not everything that is to be
known is already known, that there are known inconsistencies
between several aspects of basic physics, and that even the most
eminent physicists admit that some things that they believe to
be true may only be approximations.
It would surprise me to learn that you know exactly which of the
"known facts" will be preserved when physics moves on to the next
level and/or unifies quantum mechanics and general relativity.
Regards,
Nick Maclaren. 


