Arnold Neumaier science forum Guru
Joined: 24 Mar 2005
Posts: 379

Posted: Tue Apr 26, 2005 12:14 pm Post subject:
Re: Renormalization



Matthew Nobes wrote:
Quote:  On 2005-04-25, Arnold Neumaier <Arnold.Neumaier@univie.ac.at> wrote:
Matthew Nobes wrote:
No no no. It's not correct. You have to implement some form of
regulator, that's it. You can use dimreg, or a Pauli-Villars cutoff,
or a spacetime lattice, whatever you like.
But dimensional regularization is the most flexible one when working
with explicit formulas.
Sure, but that's not the point. You can use a different scheme if you
want, it'll just be harder. It may also be advantageous, I'm no expert,
but IIRC dimreg doesn't play very nicely with supersymmetry.

So far, supersymmetry has not produced any verifiable physics.
Thus I am entitled to ignore it until experimental verification
comes in.
Quote:  Similarly,
lattice QCD works, but only for a big computer,
This is untrue. One can (and I *do*) do perturbation theory with a
lattice cutoff. It's more difficult, but not impossible. And you only
need small computers

But you can't go to the limit, while you can in dimensional
regularization at fixed loop order. With lattice calculations,
you'd never come close in accuracy to the experimental value
of the Lamb shift for hydrogen.
Quote:  not for someone who wants insight...
I would disagree with that. The lattice approach has provided lots of
insight, starting with Wilson's work on confinement. It also provides a
tool to investigate other "deep" questions in field theory (for example,
work on the triviality of \phi^4 theory),

This is still undecided, in spite of the lattice results.
Probably it just means that lattice regularization is not
appropriate for renormalizable theories which are not
asymptotically free.
Quote:  to say nothing of the "insight"
one gets from actually using the theory to extract a number that can be
compared to experiments.

OK, so let me be more specific and say,
''not for someone who wants insight into analytical questions...''
Arnold Neumaier 



Aaron Denney science forum beginner
Joined: 26 Apr 2005
Posts: 15

Posted: Tue Apr 26, 2005 12:14 pm Post subject:
Re: Orbitals as Orbits: The Return of Bohr



On 2005-04-19, Oz <Oz@farmeroz.port995.com> wrote:
Quote:  Aaron Denney <wnoise@ofb.net> writes
On 2005-04-17, Oz <Oz@farmeroz.port995.com> wrote:
There is one question that needs answering in this scenario.
It's glibly stated that the orbiting electron would radiate energy away.
However what would be the frequency of such radiation?
What would the energy of such a photon be?
It radiates away in the continuum approximation, under the rules of
classical E&M. Under those rules, the E&M field isn't composed of
photons. The frequency of such radiation is just the frequency
of the orbit.
Eh? Hang on, that's not an argument for an electron as an extended
object.

Never said it was - "continuum approximation" wasn't meant to imply that,
just "ignoring quantum rules". We can (sweeping some infinities and
other ugliness under the rug) treat the electrons as classical point charges.
Quote:  I always assumed it was cyclotron radiation losses that were the energy
loss mechanism. Unfortunately I only ever just lightly dealt with this
during formal education 40 years ago, and am (to be honest) extremely
vague about the details other than an accelerating charge radiates.

But that's enough - orbiting is changing velocity, which is acceleration.
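That classical radiation is also why the classical atom fails so dramatically: a point electron circling at the Bohr radius and radiating per the Larmor formula spirals into the nucleus almost instantly. A rough sketch of the standard textbook estimate (the closed-form infall time t = a^3 / (4 r0^2 c), with CODATA-ish constant values):

```python
# Classical estimate: how long a point electron in a circular Bohr-radius
# orbit takes to spiral into the proton, radiating per the Larmor formula.
# Closed form for the infall time: t = a^3 / (4 * r0^2 * c),
# where r0 is the classical electron radius.

c  = 299_792_458.0        # speed of light, m/s
a  = 5.29177e-11          # Bohr radius, m
r0 = 2.81794e-15          # classical electron radius, m

t_infall = a**3 / (4 * r0**2 * c)
print(f"classical infall time: {t_infall:.2e} s")  # on the order of 1e-11 s
```

Tens of picoseconds, which is the usual way of seeing that classical E&M cannot support stable atoms.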
Quote:  However for some obscure gut feeling I would expect a rather high energy
photon to be expected from an electron orbiting an atom.

If we think about this situation in the classical case there is no such
thing as a photon. If we think about this in the quantum case, then
the energy of the photon is simply given by the change in energy between
the old and new states of the electron.
Quote:  If we're dealing
with a continuous case, all orbital radii are allowed.
Well, not really. It would be, for example, tricky to explain the
emission of EM radiation and 'dropping' into an orbit of *higher*
energy. Conservation of energy and all that. Take energy from the moon's
orbit and the orbital radius *increases*, after all.

Are allowed in the sense of "it is physically possible to set up an atom
in that situation." Of course, getting to a higher-energy orbit would
only be possible with the addition of energy.
Quote:  An orbit
arbitrarily close to the nucleus is allowed, with an arbitrarily low
energy value (and hence arbitrarily high allowed energy extraction from
a given orbit).
Is that right?
Doesn't it depend on where you started from?

Yes, if we take everything to be point particles. It doesn't depend on
where one starts in the classical case. (In the quantum case, one
can talk about allowed transitions for certain types of radiation. An
electric dipole transition can only happen if the quantum numbers
l and m change in certain ways. Slightly different rules apply for magnetic
dipole transitions and electric quadrupole transitions, and so forth.)
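The electric dipole rules in question are simple enough to state mechanically; a tiny sketch (the function name is illustrative, not from any library):

```python
# The E1 (electric dipole) selection rules mentioned above:
# a transition is allowed only if Delta l = +/-1 and Delta m in {-1, 0, +1}.

def e1_allowed(l1: int, m1: int, l2: int, m2: int) -> bool:
    """Electric dipole transitions need Delta l = +/-1, Delta m in {-1,0,+1}."""
    return abs(l1 - l2) == 1 and abs(m1 - m2) <= 1

print(e1_allowed(1, 0, 0, 0))  # 2p -> 1s: True
print(e1_allowed(0, 0, 0, 0))  # 2s -> 1s: False (2s decays by slower routes)
```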

Aaron Denney
>< 



georgesZ science forum beginner
Joined: 25 Mar 2005
Posts: 8

Posted: Thu Apr 28, 2005 3:30 pm Post subject:
Re: Schur result on linear dimension



Dear Robin,
thank you very much for this beautiful and elementary proof of Schur's
result.
Robin Chapman <rjc@ivorynospamtower.freeserve.co.uk> wrote in
Quote:  zellerg@wanadoo.fr (georgesZ) writes:
Can someone give here the proof of a Schur result:
"for n>=1, the maximal linear dimension of a commutative
subalgebra A of the matrix algebra M_n(K) (K a commutative field) is
[n^2/4]+1".
There's a 1998 note in the Monthly about this:
A Simple Proof of a Theorem of Schur
M. Mirzakhani
The American Mathematical Monthly, Vol. 105, No. 3 (Mar. 1998),
pp. 260-262.
Available on JSTOR at
http://links.jstor.org/sici?sici=00029890%28199803%29105%3A3%3C260%3AASPOAT%3E2.0.CO%3B2R

Thanks also to Anton Deitmar for the reference:
N. Jacobson,
Schur's theorems on commutative matrices,
Bull. Amer. Math. Soc. 50 (1944), 431-436.
And also to Manuel Ojanguren for the reference:
R. C. Cowsik, A short note on the Schur-Jacobson theorem,
Proc. Amer. Math. Soc. 118 (1993), pp. 675-676.
Best regards,
Georges 



Guest

Posted: Fri Apr 29, 2005 12:30 am Post subject:
Re: McDowell-Mansouri gravity



"With so much stuff happening, the probability of something very
improbable occuring is very high."
It turns out that in my previous posts to this thread a trailing . got
put on a line by itself... which killed the post, since it's routed
through unix mail.
Here's the meat of the post on the Clifford algebra version of
McDowellMonsouri gravity that I had tried to send earlier, and a
question for help at the end:
Lets just deal in Cl_4 (or CL_(1,3) to be more precise) the whole time.
In differential geometry terms, a Clifford fiber bundle. Standard GR
can be formulated as follows. The dynamic variables are the frame, a
Clifford vector valued 1-form, e, and the connection, a Clifford
bivector valued 1-form, W. Using the unit Clifford four-vector
element, g = gamma_0 gamma_1 gamma_2 gamma_3 (which satisfies g g =
-1), the action for gravity is
S = int < e e R(W) g + lambda e e e e g >
Where the < > means take the Clifford scalar part, or, if you like
using matrices, the trace. Lambda is the cosmological constant (times
some factor) and R is the Clifford bivector valued curvature 2form,
R = d W + (1/2) W W
I think that's pretty cool in itself - no indices. :)
One thing to note is, using Clifford algebra, the Hodge star
transformation, or duality, is just multiplication by g.
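The claim that the unit four-vector element squares to -1 can be checked directly in a matrix representation; a quick numerical sketch, assuming the standard Dirac gamma matrices for signature (+,-,-,-):

```python
import numpy as np

# Check numerically that g = gamma_0 gamma_1 gamma_2 gamma_3 squares to -1,
# using the standard Dirac representation for signature (+,-,-,-).
z  = np.zeros((2, 2), dtype=complex)
I2 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, z], [z, -I2]])
g1, g2, g3 = (np.block([[z, s], [-s, z]]) for s in (s1, s2, s3))

g = g0 @ g1 @ g2 @ g3
print(np.allclose(g @ g, -np.eye(4)))  # True: g g = -1
```

(The same -1 comes out for signature (3,1) as well, since the product of the four signature signs is -1 either way.)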
OK, now let's do something along the lines of McDowell-Mansouri. Taking
advantage of the fact that in a Clifford algebra it's OK to add and
multiply vectors, bivectors, and four-vectors, you can just write a
Clifford connection as
A = e + W
The curvature of this connection, a Clifford valued 2-form, is
F = d A + (1/2) A A
= ( d e + (1/2) (e W + W e) ) + ( d W + (1/2) W W + (1/2) e e )
Where the first term in ( ) is a Clifford vector (the torsion) and the
second term is a Clifford bivector, the curvature R plus a frame term.
The action we cook up to start with is
S = int < B (F - (1/2) B g) >
with B some arbitrary 2-form with Clifford vector and bivector parts.
The Clifford vector part of B is the Lagrange multiplier that makes the
vector part of F, the torsion, vanish. Varying the rest of B then
gives its bivector part as
B = - (R + (1/2) e e) g
and plugging this back into the action gives the effective action
S = int < (R + (1/2) e e) (R + (1/2) e e) g >
= int < R R g + e e R g + e e e e g >
which we also could have gotten just by starting with S = int < F F g >
Scaling e by a constant gets the cosmological constant in there, and
the < R R g > is a "topological" term that doesn't contribute to the
dynamics - so what's left is GR.
Nifty, eh?
One thing I just realized is, via this formalism, diffeomorphisms and
local frame rotations enter through the same infinitesimal gauge
transformation of the connection:
A' = A + d C + (1/2)( A C  C A )
with the Clifford vector part of C giving diffeomorphisms and the
bivector part giving frame rotations. That's pretty cool.
Of course, the next thing to try is going whole hog and letting the
connection, A, be an arbitrary Clifford element. Then we'd get GR and
a bunch of other stuff.
Now, if you want to think about this in terms of group theory, the best
thing to do is think of the basis elements of the Clifford algebra as
Lie algebra generators. Then I think, for example, the Lie algebra
corresponding to the 16 generators of Cl_4 is... u(4)? And I think the
Lie algebra corresponding to just the 10 Cl_4 vectors and bivectors is
sp(2), but I'm not at all sure of that, as my group theory is lacking.
Maybe someone out there knows how to correlate the Clifford algebras of
various dimensions, and their multivector subalgebras, to the
corresponding Lie algebras? I've seen how to represent Lie algebra
generators as Clifford bivectors, but I'd like to see how the
arbitrary Clifford subalgebras, and not just bivector subalgebras,
correlate the other way. I'd love to see that.
Best,
Garrett 



Chris Oakley science forum beginner
Joined: 30 Apr 2005
Posts: 26

Posted: Sat Apr 30, 2005 8:26 am Post subject:
Re: Renormalization



<jsolomon@mail.com> wrote in message
news:1114565139.935605.269260@z14g2000cwz.googlegroups.com...
Quote: 
Arnold Neumaier wrote:
The problem I have with all this (at this point) is that when the
theory is developed from basic postulates (as by P&S or Weinberg or
any other textbook) it is fairly obvious the integrals are Riemann.
Only on first sight. Once they get senseless naive results, they say
that something needs to be modified. But being physicists and not
mathematicians they do it on the ad hoc level rather than addressing it
from the beginning, as a more solid foundation would have to do.
I am interested in a formulation of QED that is mathematically
consistent throughout. What you are saying, I think, is that
physicists tend to use ad hoc methods that may not be all that
mathematically consistent. Can you suggest some references that
formulate the theory in a consistent way? (I recall that you have
referenced various works in previous posts. However this thread is so
long I don't think I can find them.)
Dan Solomon

As far as I know, you will not find a consistent treatment of QED in 3+1
dimensions in the literature. I got something that was not obviously
inconsistent using non-local field equations where the interaction picture
was just an approximation. One gets the correct tree-level Feynman
amplitudes, but the Lamb shift and anomalous magnetic moment remain a
mystery. 



Oz science forum Guru Wannabe
Joined: 30 Apr 2005
Posts: 155

Posted: Sat Apr 30, 2005 8:27 am Post subject:
Re: Pauli Exclusion Principle for Free Electrons



Frank Hellmann <Certhas@gmail.com> writes
Quote:  If you have two hydrogen atoms far removed from each other the Pauli
exclusion principle will not prevent the electrons from both being in
the local ground state. You would (could) build both electron states from
a superposition of infinitely many momentum eigenstates into a
wavepacket. Bring them closer together and you will start seeing the
effect, more and more the requirement of antisymmetry will take effect
and in the limit of one 2+ point charge they have to occupy the two
lowest lying levels.

Alternatively consider two real solids with complex electron
wavefunctions.
Sodium chloride is ionic and there is no overlap of orbitals so that
each electron sits attached to its own atom and never exchanges
orbitals. The next energy level up is high, essentially unreachable by
thermal processes. You can tell this because sodium chloride is a good
insulator at room temperature. Although the electrons are in close
proximity they are in effect infinitely separated.
Metals are very different. Here the orbitals of the conduction electrons
expand to cover the entire crystal. Every conduction electron must be at
a different energy which is broadly set by the energy levels of the
metal. I forget the details now (they are hugely simple) but these
energy levels are very closely spaced indeed so it's easy to pack vast
numbers of electrons into them. Typically the temperature of the
electron gas is (from memory) in the thousands of degrees C.
Semiconductors are similar to both. The energy gap from the ionic
orbitals to the conduction band is modest and accessible to thermal
electrons (or holes). Those that jump to the metallic band can engage in
conduction across the crystal since their wavefunction encompasses the
entire crystal. This is why conductivity in semiconductors has a strong
exp(-E/kT) term.
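That thermally activated behavior is easy to see numerically. A minimal sketch, assuming silicon's band gap of about 1.12 eV (an assumed illustrative input) and the usual exp(-Eg/2kT) scaling of the intrinsic carrier density:

```python
import math

# Thermally activated carriers: intrinsic conductivity scales roughly as
# exp(-Eg / (2 k T)).  Assuming a silicon-like band gap Eg = 1.12 eV,
# compare the Boltzmann factor at 300 K and at 350 K.

k_eV = 8.617e-5          # Boltzmann constant, eV/K

def boltzmann_factor(Eg_eV: float, T: float) -> float:
    return math.exp(-Eg_eV / (2 * k_eV * T))

ratio = boltzmann_factor(1.12, 350.0) / boltzmann_factor(1.12, 300.0)
print(f"carrier factor, 350 K vs 300 K: ~{ratio:.0f}x")
```

A modest 50 K of warming changes the factor by more than an order of magnitude, which is why semiconductor conductivity is so temperature sensitive.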
Uncle Al would give better and more scientific details without breaking
into a sweat....

Oz
This post is worth absolutely nothing and is probably fallacious.
Use oz@farmeroz.port995.com [ozacoohdb@despammed.com functions].
BTOPENWORLD address has ceased. DEMON address has ceased. 



Sergio Ballestrero science forum beginner
Joined: 30 Apr 2005
Posts: 2

Posted: Sat Apr 30, 2005 8:27 am Post subject:
Re: Search for free/cheap scintillating crystals



Uncle Al wrote:
Quote:  Sergio Ballestrero wrote:
A small lab is planning an experiment for applying a PETlike technique
to mineral extraction industry. The optimal setup that we identified
would require
16 scintillating crystals, 3 inch x 3 inch x 1 radiation length (e.g.
~10 mm for BGO, ~25 mm for NaI). The best material would be BGO, but also
NaI(Tl) would be OK.
At the moment, we are quite severely constrained by limited funding,
which does not allow us to acquire the optimal number of scintillators
at the prices that have been quoted to us until now.
We are therefore looking for some alternative source of scintillator
crystals - like second-hand ones from a finished experiment, or some
primary producer with low prices. I have been conducting a search,
through personal acquaintances, without much success.
Any support or suggestion will be much appreciated.
http://www.cerncourier.com/main/article/41/8/15/1/cernimag11001
http://www.cerncourier.com/main/article/39/4/5
80,000 (!!!) huge perfect lead tungstate crystals. A lot more are
rejects.
http://www.cerncourier.com/main/article/41/8/15/1
google
"Lead tungstate" CMS CERN LHC 445 hits

Thank you Al,
unfortunately lead tungstate is perfect for high energy calorimetry at
LHC, but is really bad for low energy gammas, since its light yield is
only 0.5 percent of NaI(Tl):
http://crystalclear.web.cern.ch/crystalclear/HEP.html
http://crystalclear.web.cern.ch/crystalclear/medical_imaging.htm
The CERN Courier is not very clear about this - using PWO would make
it pretty difficult to identify the 511 keV gammas.
I've heard about doped PWO with higher light yield, but as far as I
know it's not a commercial product yet - or is it?
Sergio 



Robert C. Helling science forum beginner
Joined: 30 Apr 2005
Posts: 22

Posted: Sat Apr 30, 2005 8:27 am Post subject:
Re: values of the 26+ fundamental dimensionless constants?



On Fri, 22 Apr 2005 08:54:40 +0000 (UTC), robert bristow-johnson <rbj@audioimagination.com> wrote:
Quote:  i've been googling "U(1) coupling constant" and "SU(2) coupling constant"
and variations of both and cannot find a definition of these that an
electrical engineer nonphysicist can decode meaning from. can someone tell
me which of these coupling constants are related to the finestructure
constant, and how is it or are they related?

The fine structure constant is given by (in SI units)
alpha = e^2 / (4 pi epsilon_0 hbar c).
hbar is the reduced Planck constant, c is the speed of light, epsilon_0 is
the conversion factor that shows up in Coulomb's law to convert amperes to
mechanical units, pi is of course 3.14159265... and e is the charge of
the electron. Of these, e is really the electromagnetic coupling
constant since it measures the coupling of charged particles to
photons. So stripping this formula of all constants that come from
using SI units rather than more natural units, we obtain
alpha = e^2.
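As a quick numerical check of the SI formula (the constant values below are CODATA figures; a sketch, not a precision evaluation):

```python
import math

# Check the formula above with CODATA values (SI units):
# alpha = e^2 / (4 pi epsilon_0 hbar c)

e    = 1.602176634e-19    # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J s
c    = 299_792_458.0      # speed of light, m/s

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"alpha = {alpha:.10f}, 1/alpha = {1 / alpha:.3f}")  # 1/alpha ~ 137.036
```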
Now you can do a similar thing for the coupling of W bosons to weak
doublets, say. Parametrize this by a weak charge and square it to
obtain alpha_SU(2) (a slight complication arises because the U(1) in
the standard model is not the electromagnetic U(1) but the
hypercharge, which is some mixture).
However, there is another problem and that is that those numbers (like
alpha roughly 1/137) are not constants but due to renormalization they
depend on the energy scale of your experiment, that is, they "run"
with energy. So to be really precise, you have to state at which
energy you have measured the coupling. The above number is measured at
zero momentum transfer; at the W mass it's more like 1/128.
In listings, the weak coupling constant is usually given as the Fermi
coupling constant (which goes into the muon lifetime, as it is the
weak interaction that governs that decay), G_F, but strictly speaking
this is again the square of the gauge theory coupling constant, as
mu ----------- nu_mu
      \
    W  \_____ e
        \
         \____ nu_e bar
involves two weak couplings. To look up these numbers (and many more
such as alpha_strong) open the "physical constants" table at
http://pdg.lbl.gov/2004/reviews/contents_sports.html#constantsetc
As a final complication, instead of the strong coupling constant people
often state the QCD scale Lambda_QCD which is defined as the energy
scale where the strong coupling constant is 1. The advantage is that
this is really a constant that is not running. Numerically it is some
hundreds of MeV and it is no coincidence that that is roughly the mass
of a proton.
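The last two paragraphs can be sketched at one loop. The inputs alpha_s(M_Z) ~ 0.118 and nf = 5 below are assumed illustrative values, and threshold matching is ignored, so this is only an order-of-magnitude illustration of where Lambda_QCD lands:

```python
import math

# One-loop running: alpha_s(Q) = 1 / (b0 * ln(Q^2 / Lambda^2)),
# with b0 = (33 - 2*nf) / (12*pi).  Invert it at the Z mass to estimate
# Lambda_QCD, then find the scale where the coupling reaches 1.

nf = 5
b0 = (33 - 2 * nf) / (12 * math.pi)

M_Z, alpha_s_MZ = 91.19, 0.118                     # GeV, assumed inputs
Lam = M_Z * math.exp(-1 / (2 * b0 * alpha_s_MZ))   # Lambda_QCD, GeV

Q_strong = Lam * math.exp(1 / (2 * b0))            # scale where alpha_s = 1
print(f"Lambda_QCD ~ {1e3 * Lam:.0f} MeV, alpha_s = 1 near Q ~ {1e3 * Q_strong:.0f} MeV")
```

Both numbers come out in the "some hundreds of MeV" ballpark quoted above.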
Robert

..oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oO
Robert C. Helling              School of Science and Engineering
                               International University Bremen
print "Just another            Phone: +49 421-200 3574
stupid .sig\n";                http://www.aei-potsdam.mpg.de/~helling



Arnold Neumaier science forum Guru
Joined: 24 Mar 2005
Posts: 379

Posted: Sat Apr 30, 2005 8:27 am Post subject:
Re: Renormalization



jsolomon@mail.com wrote:
Quote: 
I am interested in a formulation of QED that is mathematically
consistent throughout. What you are saying, I think, is that
physicists tend to use ad hoc methods that may not be all that
mathematically consistent.

They care about doing calculations quickly, along established routes,
and don't bother too much about questions of rigor, which requires
a _lot_ of extra attention. The mathematically consistent route is
the one travelled by mathematical physicists.
Quote:  Can you suggest some references that
formulate the theory in a consistent way?

G. Scharf,
Finite Quantum Electrodynamics: The Causal Approach, 2nd ed.
New York: Springer-Verlag, 1995.
has an impeccable treatment of the case with a classical external field
(i.e. only electrons/positrons quantized),
and a rigorous but only perturbative treatment of full QED at 1 loop.
J.S. Feldman, T.R. Hurd, L. Rosen and J.D. Wright,
QED: A proof of renormalizability,
Lecture Notes in Physics 312,
Springer, Berlin 1988
gives a rigorous proof of perturbative existence of QED at all orders.
This means that a formal power series for the Smatrix is shown to
exist rigorously. This includes renormalization and is sufficient for
actual computations since a few terms in the power series give very
high accuracy.
A much more readable account of perturbative existence of Phi^4 theory
at all orders is in
M Salmhofer,
Renormalization: An Introduction,
Texts and Monographs in Physics,
Springer, Berlin 1999.
QED would work in essentially the same way, but with significantly
more messy formulas since one has to account for electrons and photons
and their spin.
Salmhofer constructs the Euclidean theory for Phi^4 theory in
four dimensions perturbatively, i.e., in the formal power
series topology, with full mathematical rigor.
If this construction worked nonperturbatively
(i.e., gave functions instead of formal power series),
analytic continuation using Osterwalder-Schrader theory would do
the rest. The latter is described, e.g., in Chapter 6 of
J. Glimm and A. Jaffe,
Quantum Physics: A Functional Integral Point of View,
Springer, Berlin 1987.
The techniques used (and the starting points defining the theory)
are different in each of the above references.
Unfortunately, the power series is believed to diverge if enough
(i.e., infinitely many) terms are added, and a consistent
nonperturbative treatment of full QED is presently missing.
You can find a more thorough discussion in my theoretical physics FAQ at
http://www.mat.univie.ac.at/~neum/physicsfaq.txt
Arnold Neumaier 



Arnold Neumaier science forum Guru
Joined: 24 Mar 2005
Posts: 379

Posted: Sat Apr 30, 2005 8:27 am Post subject:
Re: Renormalization



Matthew Nobes wrote:
Quote:  On 2005-04-26, Arnold Neumaier <Arnold.Neumaier@univie.ac.at> wrote:
Matthew Nobes wrote:
On 2005-04-25, Arnold Neumaier <Arnold.Neumaier@univie.ac.at> wrote:
Similarly,
lattice QCD works, but only for a big computer,
This is untrue. One can (and I *do*) do perturbation theory with a
lattice cutoff. It's more difficult, but not impossible. And you only
need small computers :)
But you can't go to the limit,
Yes you can, in perturbation theory you can take the a -> 0 limit.

But not on a small or even a big computer.
Quote:  The renormalization program works the same way, order by order, you tune
counterterms, etc. It is more technically involved (there are more
diagrams, more, and more complicated, Feynman rules) but is possible.
There are some theorems, which prove this, due to Reisz IIRC.

Interesting; I didn't know this.
T. Reisz
A power counting theorem for Feynman integrals on the lattice
Comm. Math. Phys. 116 (1988), no. 1, 81-126
http://projecteuclid.org/Dienst/UI/1.0/Summarize/euclid.cmp/1104161198
assumes nonvanishing mass, hence does not apply to QED. The massless
case needs _additional_ regularization, not just the lattice:
T. Reisz
Renormalization of Lattice Feynman Integrals with Massless
Propagators
Commun. Math. Phys. 117, 639-671 (1988)
http://projecteuclid.org/Dienst/UI/1.0/Summarize/euclid.cmp/1104161822
I have no access to
T. Reisz,
Lattice gauge theory: Renormalization to all orders in the loop
expansion
Nucl. Phys. B318 (1989) 417-463
but the abstract says it is for SU(N) gauge theory. In particular,
it does not apply to QED.
M. Luescher and P. Weisz,
Background field technique and renormalization in lattice gauge
theory,
hep-lat/9504006
seem to use dimensional regularization to get the analogous results for
SU(N) gauge theory in a background field, although I haven't looked at
the details.
Quote:  while you can in dimensional
regularization at fixed loop order. With lattice calculations,
you'd never come close in accuracy to the experimental value
of the Lamb shift for hydrogen.
I don't know about the Lamb shift calculations, but it's worth pointing
out that Tom Kinoshita doesn't use dimreg for his four- and five-loop
computations of the magnetic moment anomaly.

Yes. He uses NRQED, which is, however, specially adapted to QED.
Arnold Neumaier 



Arnold Neumaier science forum Guru
Joined: 24 Mar 2005
Posts: 379

Posted: Sat Apr 30, 2005 8:28 am Post subject:
Re: Renormalization



jsolomon@mail.com wrote:
Quote:  Thank you for the reference. I don't disagree with the idea of having
a photon mass counterterm in the Lagrangian. However, even though we
can find a couple of references, it is hardly ever explicitly used.
For example, it does not appear in Peskin and Schroeder or Weinberg as
far as I can tell. You seem to say it is implied. I would like to ask
the other people who have been engaged in this discussion what they
think of the idea of a photon mass counterterm.

Physicists tend to be implicit and ad hoc in the justification of
recipes they use to get their results. There is a lot of formal
(i.e. not clearly justified) calculation and hand waving about
terms to neglect. This makes for speed in derivation but also for
difficulties in justification, and in meaningless results if one
is not careful enough. Here 'careful' is 'defined' by experience in
avoiding obviously strange results.
The implication is always that if one were fully careful one could
justify everything rigorously. Thus one needs to learn to read
between the lines and guess the missing justifications. In some
cases, getting a rigorous justification is very hard. That's why
the latter is left to mathematical physicists who take their
pride in doing physics rigorously, but are (therefore) restricted
to doing everything much more slowly and in a more technical way.
Thus finding some details skipped in some treatments doesn't mean
much if you want to understand everything from a rigorous point
of view. But you'd note that most theoretical physicists don't
aspire to this high level of rigor...
Arnold Neumaier 



Murray Arnow science forum beginner
Joined: 30 Apr 2005
Posts: 20

Posted: Sat Apr 30, 2005 8:28 am Post subject:
Re: Search for free/cheap scintillating crystals



Uncle Al <UncleAl0@hate.spam.net> wrote:
Quote:  Sergio Ballestrero wrote:
A small lab is planning an experiment for applying a PETlike technique
to mineral extraction industry. The optimal setup that we identified
would require
16 scintillating crystals, 3 inch x 3 inch x 1 radiation length (e.g.
~10 mm for BGO, ~25 mm for NaI). The best material would be BGO, but also
NaI(Tl) would be OK.
At the moment, we are quite severely constrained by limited funding,
which does not allow us to acquire the optimal number of scintillators
at the prices that have been quoted to us until now.
We are therefore looking for some alternative source of scintillator
crystals - like second-hand ones from a finished experiment, or some
primary producer with low prices. I have been conducting a search,
through personal acquaintances, without much success.
Any support or suggestion will be much appreciated.
http://www.cerncourier.com/main/article/41/8/15/1/cernimag11001
http://www.cerncourier.com/main/article/39/4/5
80,000 (!!!) huge perfect lead tungstate crystals. A lot more are
rejects.
http://www.cerncourier.com/main/article/41/8/15/1
google
"Lead tungstate" CMS CERN LHC 445 hits

I think Sergio has a size requirement and cutting his rejects to fit his
detectors could be an issue.
NaI(Tl) and BGO crystals are used extensively in CT scanners. When I was
building detectors for CTs, there were a number of crystal manufacturers
who would supply samples in the hope of getting business (I still have
samples lying about).
The number of crystals required is small, and Sergio may have luck in
contacting the sales department of a manufacturer and asking for samples -
I'd ask for manufacturing rejects if they seem reluctant to give handouts;
e.g., broken crystals, embedded flaws. There may be a problem in cutting them
to size; this requires equipment and expertise (NaI(Tl) is relatively
inexpensive and the easiest to slice - a wire saw and kerosene should do
it). The other crystals are considerably more expensive, harder, and more
difficult to cut. But one can always get lucky and get an enthusiastic
sales rep who will provide crystals precisely meeting one's
requirements.
Crystal manufacturers have undoubtedly changed since I was in the
business, but a few can still be located using the Thomas Register.
http://www.thomasnet.com/home.html?INCP=1
Search for "Crystals: Scintillation" for a start. 



Blagoj Petrushev science forum beginner
Joined: 30 Apr 2005
Posts: 15

Posted: Sat Apr 30, 2005 8:28 am Post subject:
Re: Question about quantum gravity



The thing is that general relativity breaks down at singularities, i.e.,
where the spatial or temporal intervals are smaller than 10^-35 m
(the Planck length) or the matter density is bigger than 10^97 kg m^-3
(the Planck density). At this point, quantum gravity steps in, and the
singularity (such as the primordial cosmic egg) would not be established.
But what the physics is at this point, nobody knows for sure.
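The quoted scales follow from dimensional analysis with hbar, G, and c alone; a quick sketch using CODATA values:

```python
import math

# The Planck scales quoted above follow from hbar, G, and c:
# l_P = sqrt(hbar G / c^3),  rho_P = c^5 / (hbar G^2)

hbar = 1.054571817e-34   # J s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
c    = 299_792_458.0     # m/s

l_P   = math.sqrt(hbar * G / c**3)
rho_P = c**5 / (hbar * G**2)
print(f"Planck length  ~ {l_P:.2e} m")        # ~1.6e-35 m
print(f"Planck density ~ {rho_P:.2e} kg/m^3") # ~5e96 kg/m^3
```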
B Petrushev 



Guest

Posted: Sat Apr 30, 2005 8:28 am Post subject:
Re: Renormalization



Aaron Bergman wrote:
Quote:  In article <1114564299.821706.203780@o13g2000cwo.googlegroups.com>,
jsolomon@mail.com wrote:
In a recent posting on this thread Eugene Stefanovich gave a simple
example of this. He defined a function,
F1(n) = \int_1^{\infty} x^n dx (integration in the interval [1, oo])
and another function,
F2(n) = -1/(n+1)
Now F1(n) = F2(n) for n < -1. If we replace n by the complex variable z
we have,
F2(z) = -1/(z+1)
F2(z) is the analytic continuation of the function F1(z) into the
entire complex plane (except z = -1). However F2(z) does not equal F1(z)
in the entire complex plane. For example at z=0 F2(0) does not equal
F1(0). Therefore it is not mathematically correct to replace F1(z)
with F2(z) for all z. Now in dimensional regularization this is what
is done. This is where I think you are confused. In dimensional
regularization a function G1(d) is replaced by its analytical
continuation G2(d). But G1(d) does not equal G2(d) for the value of d
at which this replacement is made. That is why this does not seem to
me to be a mathematically correct step. (Note - I understand that it
gives the physically correct result.)
It's a regularization, a trick to get a finite result. It is perfectly
mathematically legitimate to replace a function only defined on some set
by its analytic continuation. There is no issue of mathematical
correctness here; the analytically continued function has a pole at d=4
just like the integral. Dimensional regularization is a way to avoid
sitting on that pole. The analytic continuation allows us to identify
that pole from a distance and construct counterterms to get rid of it.
Once we have the counterterms in place, we can then limit back to where
the pole was, d -> 4, and see that the counterterms cured our
divergence.
Please consider the example of my last post which I will repeat below. 
Let,
F1(n) = \int_1^{\infty} x^n dx (integration in the interval [1, oo])
An integral is defined as an infinite sum. For example at n=0 we would
have,
F1(0) = 1 + 1 + 1 + 1 + 1 + etc. (1)
Now define another function,
F2(n) = -1/(n+1) (2)
Now F1(n) = F2(n) for n < -1. If we replace n by the complex variable z
we have,
F2(z) = -1/(z+1) (3)
We would say that F2(z) is the analytic continuation of F1(z) to the
entire complex plane (except z = -1). Now at z = 0 we have F2(0) = -1.
If F1(z) and F2(z) are equal at z=0 then we would have,
1 + 1 + 1 + 1 + etc. = -1 (4)
which doesn't make sense.
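The mismatch between the two functions is easy to check numerically; a minimal sketch, writing the convergent value of the integral as F2(n) = -1/(n+1):

```python
# Numerical illustration: F1(n) = int_1^inf x^n dx agrees with
# F2(n) = -1/(n+1) where the integral converges (n < -1), but F1
# diverges at n = 0 while F2(0) = -1.

def F1_truncated(n: float, upper: float) -> float:
    """int_1^R x^n dx, evaluated in closed form for n != -1."""
    return (upper ** (n + 1) - 1) / (n + 1)

def F2(n: float) -> float:
    return -1 / (n + 1)

# Convergent region: the truncated integral approaches F2 as R grows.
print(F1_truncated(-2, 1e6), F2(-2))   # ~1.0 vs 1.0

# At n = 0 the integral grows without bound, nowhere near F2(0) = -1.
print(F1_truncated(0, 1e6), F2(0))     # ~1e6 vs -1
```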
Now suppose I have a mathematical model of a physical process. My
model predicts that under certain conditions I will get the result R1
which is given by,
R1 = 1/(F1(0) + 1) (5)
Now when I examine my expression for F1(0) I conclude it is infinite.
If I use this fact in the above expression for R1 I get,
R1 = 1/(oo + 1) = 0 (6)
This is a perfectly reasonable result for a physical process. However
by your reasoning I should replace F1(0) by F2(0). In this case,
R1 = 1/(-1 + 1) = 1/0 = oo (7)
Therefore I get a nonsensical result. This example shows that you
cannot arbitrarily replace a function with its analytical continuation.
They are not equal everywhere.
Dan Solomon 



Matthew Nobes science forum beginner
Joined: 30 Apr 2005
Posts: 10

Posted: Sat Apr 30, 2005 8:28 am Post subject:
Re: Renormalization



On 2005-04-27, Arnold Neumaier <Arnold.Neumaier@univie.ac.at> wrote:
Quote:  jsolomon@mail.com wrote:
[snip]
Arnold Neumaier
So Dr. Neumaier says yes and you say no,no, no. Maybe you two should
talk to each other.
It was a matter of emphasis. We both agree that _some_
regularization procedure must be part of the basic postulates.
The question is which one.
I recommended dimensional regularization, because it is most flexible,
but allowed in my qualifying statement for less flexible
alternatives. Matthew Nobes http://www.lns.cornell.edu/~nobes/
works with lattice gauge theories and hence emphasizes the freedom
to use lattice regulators instead.

Not to start an argument, but it should be pointed out that "most
flexible" strongly depends on the problem you want to solve. For a
numerical evaluation of the path integral, a lattice regulator is the
"most flexible".
As I understand it (and I'm willing to be corrected)
dimreg is defined only at the level of perturbation theory. For non-abelian
gauge theories, dimreg is certainly the most flexible scheme for
perturbative calculations.
[snip]
Quote:  The proof of absence of infinities for all 4D renormalizable theories
in the limit of removed cutoff has been given only for dimensional
regularization, I believe.
Thus dimensional regularization is safe, while the situation is
less clear for lattice regularization.

The theorems of Reisz apply here. For certain classes of lattice
actions (including some, but not all, of the popular actions used in
simulations) you can show that the a -> 0 limit in perturbation theory
gives you the same results as the eps -> 0 limit in dimreg.
This is certainly true for the Wilson gauge and quark actions, so a
lattice regulated theory is equivalent to using dimreg, from a
perturbative standpoint.

Matthew Nobes  email: nobes@lepp.cornell.edu
Newman Lab, Cornell University  web: http://lepp.cornell.edu/~nobes/
Ithaca NY 14853, USA  


