Numerical diagonalization by not using zgeev?
MWimmer
science forum beginner


Joined: 11 Jul 2006
Posts: 3

Posted: Tue Jul 11, 2006 11:11 am    Post subject: Numerical diagonalization by not using zgeev?

Dear group,

I have a problem when doing numerical diagonalization of an unsymmetric
complex NxN matrix H. N is typically of order 100-1000, H is sparse.
Let me briefly outline my problem here; below I will give some more
background. I need not only the eigenvalues but also the
transformation matrix, that is, I do the decomposition

H = U * diag(\lambda_1, ..., \lambda_N) * U^{-1}

where \lambda_i are the complex eigenvalues and the columns of U
contain the eigenvectors. After that I apply a transformation to the
eigenvalue matrix that sets all the eigenvalues
with |\lambda_i| <= 1 to 1 and all those with |\lambda_i| > 1 to 0 (actually,
there are N/2 of each). After that I do the backtransformation using U
and U^{-1}. From this backtransformed matrix I only need some entries, and it
turns out in the end that I would only need those eigenvectors with
eigenvalue |\lambda_i| <= 1.
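
For concreteness, a minimal MATLAB sketch of what I mean (toy example,
with eig standing in for zgeev):

N = 6;
H = randn(N) + 1i*randn(N);       % stand-in for my sparse matrix
[U, D] = eig(H);                  % H = U*D*inv(U), D = diag(lambda_i)
f = double(abs(diag(D)) <= 1);    % |lambda_i| <= 1 -> 1, |lambda_i| > 1 -> 0
P = U * diag(f) / U;              % backtransformation U * f(D) * U^{-1}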

I intended to solve this problem by using the LAPACK routine zgeev to
calculate the eigenvalues and the transformation matrix U, whose
columns are the eigenvectors. However, I have found that for matrix
sizes from N=100 on and in certain parameter regions (see below), U is
(numerically) singular and inverting it fails.
I was rather surprised, because the eigenvalues do not span several
orders of magnitude (in the most extreme cases 0.1<|\lambda|<10, but
usually more like 0.5<|\lambda|<5), and they are clearly pairwise
different.

I then used zgeevx to see the errors on the eigenvalues and
eigenvectors and found that I started to get problems in inverting the
matrix when the relative errors were around 10^{-4}. That kind of
accuracy would be enough for my purposes, but obviously not good enough
for the eigenvectors to be linearly independent.

So I have several questions:
* I'm not an expert in numerical mathematics. Is it clear from the
matrix given below that this problem is ill-conditioned?
* Are there ways to circumvent the problem? I need U^{-1}, well, at
least I need the duals to the eigenvectors with |\lambda| <=1 and
inverting seemed to be the easiest solution.
Is there a way to directly calculate U and U^{-1}?
* I guess going to higher precision would help, but is there an easy
way to do this? (I don't want to rewrite LAPACK in terms of GMP or
similar)
* Would using some iterative routine help (ARPACK)? In the end, I need
N/2 eigenvectors with eigenvalue |\lambda|<=1 ...

Any help is greatly appreciated.

To give you more background, let me explain the origin of my problem in
more detail:
The problem comes from physics (electronic transport) where I have to
solve the quadratic eigenvalue problem

( (E-H_0) z - H_{1} z^2 - H_{-1} ) u=0

where E is a number (usually real, but could be complex), H_0 a
Hermitian matrix, H_{-1} is the Hermitian conjugate of H_{1}, z the
desired eigenvalue, and u the eigenvector. All the matrices
are square, n x n.

I then linearize this problem to give an ordinary 2nx2n eigenvalue
problem:

( 0                    1                   ) (  u  )     (  u  )
( -H_1^{-1} * H_{-1}   H_1^{-1} * (E-H_0)  ) ( z u ) = z ( z u )

and this is what I then tried to solve with zgeev.
My problems occur for the following matrices (for those interested: it's
electrons in a magnetic field):

H_1 = diag( exp(i B 1), exp(i B 2), ..., exp(i B n) )

        ( E-4   1                  )
        (  1   E-4   1             )
E-H_0 = (        1   ...   ...     )
        (             ...  ...  1  )
        (                  1   E-4 )

The problem arises for a wide range of parameters, but only once n is
larger than around 50, for example for B=3e-3 and E=0.5. Unfortunately,
for typical problems one would like to have n ~ 100-1000, and the
parameter range is of importance too (it corresponds to a rather medium
strength magnetic field).
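
For reference, a small MATLAB sketch (toy size, same parameters) that
builds exactly this linearization and checks one eigenpair against the
quadratic problem; here H0 stands for the matrix E-H_0 above:

n = 4; E = 0.5; B = 3e-3;
H0 = diag(ones(n,1)*(E-4)) + diag(ones(n-1,1),1) + diag(ones(n-1,1),-1);
H1 = diag(exp(1i*B*(1:n)));
Hm1 = H1';                                 % Hermitian conjugate of H_1
C = [zeros(n), eye(n); -(H1\Hm1), H1\H0];  % the 2n x 2n linearization
[W, Z] = eig(C);
z = Z(1,1); u = W(1:n,1);                  % eigenpair: C*[u; z*u] = z*[u; z*u]
norm((z*H0 - z^2*H1 - Hm1)*u)              % ~ eps: solves the quadratic problem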

Any ideas?
Peter Spellucci
science forum Guru


Joined: 29 Apr 2005
Posts: 702

Posted: Tue Jul 11, 2006 6:25 pm    Post subject: Re: Numerical diagonalization by not using zgeev?

In article <1152616276.829831.224480@35g2000cwc.googlegroups.com>,
"MWimmer" <michael.wimmer1@gmx.de> writes:
Quote:
I have a problem when doing numerical diagonalization of an unsymmetric
complex NxN matrix H.
-----snip-----------
* Are there ways to circumvent the problem? I need U^{-1}, well, at
least I need the duals to the eigenvectors with |\lambda| <= 1, and
inverting seemed to be the easiest solution.


zgeev delivers also the left eigenvectors, but the matrix of the
left eigenvectors is, up to scaling, the inverse of the matrix of the
right eigenvectors:

Y'*A = diag(lambda)*Y'
A*X = X*diag(lambda)
=>
Y'*A*inv(Y') = inv(X)*A*X = diag(lambda)
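
in matlab notation, a minimal sketch of this rescaling (toy example;
assumes all eigenvalues are simple):

n = 5;
A = randn(n) + 1i*randn(n);        % generic nonhermitian test matrix
[X, D] = eig(A);                   % right eigenvectors: A*X = X*D
[Y, E2] = eig(A');                 % left eigenvectors:  Y'*A = E2'*Y'
lam = diag(D); mu = conj(diag(E2));
p = zeros(n,1);                    % pair the two eigenvalue orderings
for j = 1:n
    [dmin, p(j)] = min(abs(mu - lam(j)));
end
Y = Y(:, p);
s = diag(Y'*X);                    % biorthogonality: Y'*X is diagonal
Xinv = diag(1./s) * Y';            % equals inv(X) up to roundoff
norm(Xinv*X - eye(n))              % small for well separated eigenvalues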


?? did you check error indicators from zerred.f ?


Quote:
* Would using some iterative routine help (ARPACK)? In the end, I need
N/2 eigenvectors with eigenvalue |\lambda|<=1 ...

surely not

Quote:
To give you more background, let me explain the origin of my problem in
more detail:
-----snip-----------
Any ideas?


i tested matlab polyeig (which does what you did, using lapack)
exactly with the data you gave and n=100 without any problem:
cond(U) about 700, all eigenvalues different, in a reasonable range.
did you check everything in your coding? nowhere an intermediate
single precision calculation, everything double complex??

hth
peter
MWimmer
science forum beginner


Joined: 11 Jul 2006
Posts: 3

Posted: Wed Jul 12, 2006 4:19 pm    Post subject: Re: Numerical diagonalization by not using zgeev?

First of all: thank you very much Peter for your answer.

Peter Spellucci wrote:
Quote:
-----snip-----------
zgeev delivers also the left eigenvectors, but the matrix of the
left eigenvectors is, up to scaling, the inverse of the matrix of the
right eigenvectors:

Y'*A = diag(lambda)*Y'
A*X = X*diag(lambda)
=>
Y'*A*inv(Y') = inv(X)*A*X = diag(lambda)

Yes, I also tried this. It seems to make my problem a bit better, but now
the problem is that the overlaps of the rows of Y' with the columns of X
for the same eigenvalue are very small (of order 1e-15 to 1e-17) for many
eigenvalues. If I then rescale, I introduce a lot of noise from roundoff
errors ... but I should probably investigate further in this direction;
this is only a preliminary finding.

Quote:

?? did you check error indicators from zerred.f ?


zerred.f just checks whether the LAPACK routines do input parameter
checking correctly, right? If I link against my LAPACK installation,
the program aborts with an error message about an illegal
parameter in zgeev, as zerred tests exactly that. To run zerred.f
correctly, I guess I have to change that behaviour of LAPACK somehow.
Do you know how?

-----snip-----------
Quote:
i tested matlab polyeig (which does what you did, using lapack)
exactly with the data you gave and n=100 without any problem:
cond(U) about 700, all eigenvalues different, in a reasonable range.
did you check everything in your coding? nowhere an intermediate
single precision calculation, everything double complex??

Now that's interesting. Could you perhaps post your matlab program?
Because this is what I tried today in matlab:

E=0.5;
n=100;
B=3e-3;
H0=diag(ones(n,1)*(E-4))+diag(ones(n-1,1),1)+diag(ones(n-1,1),-1);
H1=diag(arrayfun(@(x) -exp(-i*B*x), 1:n));
Hm1=H1';
[U1,e]=polyeig(-Hm1,H0,-H1);
U2=U1*diag(e);
U(1:n,:)=U1;
U(n+1:2*n,:)=U2;
cond(U)

ans =

1.8921e+11

B=3e-2;
H0=diag(ones(n,1)*(E-4))+diag(ones(n-1,1),1)+diag(ones(n-1,1),-1);
H1=diag(arrayfun(@(x) -exp(-i*B*x), 1:n));
Hm1=H1';
[U1,e]=polyeig(-Hm1,H0,-H1);
U2=U1*diag(e);
U(1:n,:)=U1;
U(n+1:2*n,:)=U2;
cond(U)

ans =

7.9785e+15

First of all, notice that the H1 above differs a bit from the one
in my previous post, as I got some minus signs wrong there. But this
doesn't change the outcome significantly, I checked.

(The correct H1 should have been H1 = diag( -exp(-i B 1), ... ).)

All the condition numbers I get are way above the 700 you get.

I also tried to diagonalize the linearization directly with matlab:

H(1:n,1:n)=0;
H(1:n,n+1:2*n)=diag(ones(n,1));
H(n+1:2*n,1:n)=-inv(H1)*Hm1;
H(n+1:2*n,n+1:2*n)=inv(H1)*H0;
[V,e2]=eig(H);
cond(V)

ans =

5.5936e+15

So essentially the same result, which was to be expected, as you said
that matlab uses a linearization for solving the polynomial
eigenproblem.

I'm very confused now. The LAPACK installation on my machine seems to
be OK: for my own code I also compiled LAPACK from netlib and
linked against that, bypassing the system's LAPACK and BLAS. I got the
very same results.

Do you have any ideas what might cause this behaviour?
Peter Spellucci
science forum Guru


Joined: 29 Apr 2005
Posts: 702

Posted: Wed Jul 12, 2006 9:11 pm    Post subject: Re: Numerical diagonalization by not using zgeev?

In article <1152721189.548445.69180@b28g2000cwb.googlegroups.com>,
"MWimmer" <michael.wimmer1@gmx.de> writes:
Quote:
-----snip-----------
Now that's interesting. Could you perhaps post your matlab program?
Because this is what I tried today in matlab:

E=0.5;
n=100;
B=3e-3;
H0=diag(ones(n,1)*(E-4))+diag(ones(n-1,1),1)+diag(ones(n-1,1),-1);
H1=diag(arrayfun(@(x) -exp(-i*B*x), 1:n));
Hm1=H1';
[U1,e]=polyeig(-Hm1,H0,-H1);
U2=U1*diag(e);
U(1:n,:)=U1;
U(n+1:2*n,:)=U2;
cond(U)

ans =

1.8921e+11
-----snip-----------

I think the confusion is here: I tested cond(U1), doing essentially the same
as you (with matlab 6.0, the department has no money to upgrade),
hence I tested the wrong thing.

if you look at e, then you will see a large block of nearby eigenvalues:
(plot(1:2*n, real(e)); hold on; plot(1:2*n, imag(e));)

I checked the residuals:

for j=1:2*n
    lambda=e(j);
    x=U1(:,j);
    r(:,j)=(-Hm1+lambda*H0-H1*lambda^2)*x;
end
norm(r)

ans =

5.2397e-13

hence this seems o.k. but surprise:

cond(U1(1:n,1:n))

ans =

3.5870e+10

hence these almost identical eigenvalues create a block of almost singular
eigenvectors, and your construction then kills it totally.
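
to see why nearby eigenvalues of a nonnormal matrix do this, a tiny
illustration (my own toy example, not from your data):

del = 1e-8;
A = [1 1; 0 1+del];                % two eigenvalues only del apart
[V, D] = eig(A);
cond(V)                            % of order 1/del: nearly parallel eigenvectors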







Quote:
-----snip-----------
I'm very confused now. The LAPACK installation on my machine seems to
be OK: for my own code I also compiled LAPACK from netlib and
linked against that, bypassing the system's LAPACK and BLAS. I got the
very same results.

Do you have any ideas what might cause this behaviour?


unfortunately, nearby eigenvalues of a nonhermitian matrix are
a problem which always gives this kind of trouble (and would cause even
more trouble with an iterative solver like arpack).
to resort to higher precision: netlib/toms
has an automatic translator of f77 programs into ones which use
multiple precision arithmetic, plus a package for this arithmetic,
which you could use (works for 32 bit architectures only?).
or, if you are happy enough to have access to an HP risc machine:
they have an option -autodblpad which automatically performs
128 bit arithmetic on a given double precision fortran program
(so you have nothing to do but set this compiler switch).
i see no trick how to separate the eigenvalues better by a transformation
of the problem.

sorry for my error

hth
peter
Helmut Jarausch
science forum beginner


Joined: 08 Jul 2005
Posts: 49

Posted: Thu Jul 13, 2006 9:10 am    Post subject: Re: Numerical diagonalization by not using zgeev?

MWimmer wrote:
Quote:
I have a problem when doing numerical diagonalization of an unsymmetric
complex NxN matrix H.
SNIP


Have you checked if the Schur decomposition can help you?

There you have a unitary matrix U and an upper triangular
matrix T such that A = U*T*U'.
The eigenvalues of A appear on the diagonal of T.
You could manipulate T to get the eigenvalues you want
and multiply by U again.
The benefit: the Schur decomposition is very stable.
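
In MATLAB terms, a minimal sketch (my own toy example) of why this route
is attractive:

A = randn(6) + 1i*randn(6);        % stand-in for your matrix
[Q, T] = schur(A, 'complex');      % A = Q*T*Q', Q unitary, T upper triangular
diag(T)                            % the eigenvalues of A
norm(A - Q*T*Q')                   % ~ eps: the unitary backtransformation is stable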


--
Helmut Jarausch

Lehrstuhl fuer Numerische Mathematik
RWTH - Aachen University
D 52056 Aachen, Germany
MWimmer
science forum beginner


Joined: 11 Jul 2006
Posts: 3

Posted: Thu Jul 13, 2006 12:17 pm    Post subject: Re: Numerical diagonalization by not using zgeev?

Helmut Jarausch wrote:
Quote:
Have you checked if the Schur decomposition can help you?
-----snip-----------
The benefit: the Schur decomposition is very stable.
Thanks for your answer.

I take it that you mean I should try to apply my transformation (which
sets some eigenvalues to zero and some to 1) directly to the triangular
matrix T, right? As far as I can see, I would not only need to change
the diagonal but also the off-diagonal entries, and the only way I see
how to do this is by using the eigenvectors of T.

But the eigenvectors of T are the problem anyway, as LAPACK zgeev does
indeed calculate the Schur decomposition and the eigenvectors from
that.

Still, I need to think about it a bit more, whether I can circumvent using
the eigenvectors. My problem would be calculating

\tilde{f}(T) := V * f(D) * V^{-1}

without explicitly calculating V (V holds the eigenvectors of T, and I
know the action of f on the diagonal matrix D). As far as I can see, V
and V^{-1} should also be triangular, right? Still, I don't see a
solution to that problem ...

Michael Wimmer
Peter Spellucci
science forum Guru


Joined: 29 Apr 2005
Posts: 702

Posted: Fri Jul 14, 2006 2:02 pm    Post subject: Re: Numerical diagonalization by not using zgeev?

In article <1152793076.480334.93380@p79g2000cwp.googlegroups.com>,
"MWimmer" <michael.wimmer1@gmx.de> writes:
Quote:
-----snip-----------
Still, I need to think about it a bit more, whether I can circumvent using
the eigenvectors. My problem would be calculating

\tilde{f}(T) := V * f(D) * V^{-1}

without explicitly calculating V (V holds the eigenvectors of T, and I
know the action of f on the diagonal matrix D). As far as I can see, V
and V^{-1} should also be triangular, right? Still, I don't see a
solution to that problem ...

Michael Wimmer


what you really need is

sum_{|lambda_i| <= 1} u_i*v_i'

where u_i are the right and v_i the left eigenvectors of the triangular
matrix T; you then further backtransform this using the unitary
transformation which produced the triangular form.
this backtransformation can cause no harm.

there is a further trick:
instead of getting the eigenvectors of T just by backsubstitution, which
yields those enormous roundoff errors due to the many nearby eigenvalues,
you could first use a block transformation which gets T into the form

T = [ T1 O ; O T2 ]   (matlab notation)

where T1 has the eigenvalues >1 and T2 the eigenvalues <=1.
this is a further step towards the JNF and is described e.g. in
Golub & van Loan.
(indeed, in the course of getting the JNF one applies this for every
eigenvalue cluster, and here I propose you cluster the eigenvalues into two.)
then, next, you could apply inverse iteration in block form, simultaneously
for the left and the right eigenvectors, with a biorthogonalization step,
that is,
"simultaneous inverse iteration for nonsymmetric matrices"
as described by G.W. Stewart, Num. Math. 25, 1976, 123-136.
this should finally do the job.
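
in matlab terms, this block transformation can be sketched as follows
(my own illustration; ordschur and sylvester are newer matlab functions,
an assumption on my part; in fortran one would call lapack's
ztrsen/ztrsyl instead):

n = 6;
A = randn(n) + 1i*randn(n);               % stand-in for the full matrix
[Q, T] = schur(A, 'complex');
sel = abs(diag(T)) <= 1;                  % cluster: eigenvalues with |lambda| <= 1
[Q, T] = ordschur(Q, T, sel);             % move that cluster to the leading block
k = nnz(sel);
T11 = T(1:k,1:k); T12 = T(1:k,k+1:n); T22 = T(k+1:n,k+1:n);
X = sylvester(T11, -T22, -T12);           % solves T11*X - X*T22 = -T12
P = Q * [eye(k), -X; zeros(n-k,k), zeros(n-k)] * Q';  % sum u_i*v_i' over the cluster
norm(A*P - P*A)                           % ~ eps: P is the spectral projector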


hth
peter