infinite product for exp-function-series?
Gottfried Helms

Posted: Mon Jul 03, 2006 11:21 am    Post subject: Re: infinite product for exp-function-series?

On 03.07.2006 13:08, matt271829-news@yahoo.co.uk wrote:
Quote:

Nope, sorry, I just have no idea what you're trying to do.

Apologies if I've wasted your time.... and good luck!

Never mind, that discussion was already helpful, thanks!


Sometimes I take a sheet of paper and test whether one
concept can be applied to another problem - sometimes
a nice new view of things comes from that, sometimes not.
Just a sort of recreation...

Thanks again for your comments -

Gottfried Helms
matt271829-news@yahoo.co.

Posted: Mon Jul 03, 2006 11:08 am    Post subject: Re: infinite product for exp-function-series?

Gottfried Helms wrote:
Quote:
On 03.07.2006 00:27, matt271829-news@yahoo.co.uk wrote:

If I start from

f_0(x) = (1 + A(x)) * remainder = e^x
f_1(x) = (1 - A(x)) * remainder = e^-x

And, reading on, maybe I'm being slow but I can't figure out what
you're doing here. "(1 + A(x)) * remainder = e^x" makes sense to me,
but not "(1 - A(x)) * remainder = e^-x". How do you get to this from
the definition of nexp(x)? What's the connection with what went before?

I'm also still far from convinced that the functions A, B, C etc. are
uniquely determined by the requirements

(1 + A(x)) (1 + B(x^2)) (1 + C(x^4))... = exp(x)
(1 - A(x)) (1 - B(x^2)) (1 - C(x^4))... = 1 - x/1! - x^2/2! + x^3/3! - x^4/4! + .... = nexp(x)

Perhaps I am wrong...!


Well, maybe I was a bit sloppy here.

If, in the function

(1 + A(x)) (1 + B(x^2)) (1 + C(x^4))... = exp(x)

                                        = 1 + x/1! + x^2/2! + x^3/3! ....

I change the sign of the A(x) parameter, then (... well, I have
to check whether I actually had a better reason ...) every
second sign is expected to become negative.
But then this is just e^-x.

But that assumption may not be uniquely justified by the
statement of the problem.
My first formulation was a bit different; it was to find
coefficients like

(1 + A x) (1 + B x^2 ) (1 + C x^4)... = exp(x)

Then I observed that A, B, ... could not be constants but must
be functions of x, with

A = (function of x)/x        B = (function of x)/x^2

In the notation I cancelled the x, x^2, x^4 from that concept,
so this reduced to the - possibly now misleading, and
maybe even incorrect - short form
(1 + A(x))(1 + B(x^2))....

In fact, what I got in the last posting seems to be
a simple telescoping product, like
f_0(x) = e^x = (1 + A(x))(1 + B(x))(1 + C(x))...

       = [ e^x / ((e^x + e^-x)/2) ] * [ ((e^x + e^-x)/2) / ((e^x + e^ix + e^-x + e^-ix)/4) ] * ....


f_1(x) = (1 - A(x))(1 + B(x))(1 + C(x))...

       = [ e^-x / ((e^x + e^-x)/2) ] * [ ((e^x + e^-x)/2) / ((e^x + e^ix + e^-x + e^-ix)/4) ] * ....


f_2(x) = (1 - A(x))(1 - B(x))(1 + C(x))...

       = [ e^-x / ((e^x + e^-x)/2) ] * [ ((e^ix + e^-ix)/2) / ((e^x + e^ix + e^-x + e^-ix)/4) ] * ....

and so on. Written with the product symbol, with m as index
and w the 2^m-th complex root of unity:


exp(x) =

f_0(x) = 1 * prod({m=1..oo}; sum({k=0..2^m-2 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))



f_1(x) =   prod({m=1..1};  sum({k=1..2^m-1 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))

         * prod({m=2..oo}; sum({k=0..2^m-2 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))



f_2(x) =   prod({m=1..2};  sum({k=1..2^m-1 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))

         * prod({m=3..oo}; sum({k=0..2^m-2 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))

....



nexp(x) =

f_oo(x) =  prod({m=1..oo}; sum({k=1..2^m-1 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x))) * 1


But I think that the solution is not unique if one does
not restrict the problem by the assumption that setting
(1 - A(x))*remainder instead of (1 + A(x))*remainder affects
exactly every second term of the power series. Without that
restriction I assume that other, possibly infinitely many,
solutions are possible.

-----------

But now, as I have an approach/solution, what does it tell me?

The first zero for real x>0 is another constant, at
x ~ 4.43608358091442223318282916733501415893571735877792
which is not in Plouffe's inverter...
What else...

Sigh... :-)

Gottfried Helms

Nope, sorry, I just have no idea what you're trying to do.

Apologies if I've wasted your time.... and good luck!
Gottfried Helms

Posted: Mon Jul 03, 2006 6:08 am    Post subject: Re: infinite product for exp-function-series?

On 03.07.2006 00:27, matt271829-news@yahoo.co.uk wrote:
Quote:

If I start from

f_0(x) = (1 + A(x)) * remainder = e^x
f_1(x) = (1 - A(x)) * remainder = e^-x

And, reading on, maybe I'm being slow but I can't figure out what
you're doing here. "(1 + A(x)) * remainder = e^x" makes sense to me,
but not "(1 - A(x)) * remainder = e^-x". How do you get to this from
the definition of nexp(x)? What's the connection with what went before?

I'm also still far from convinced that the functions A, B, C etc. are
uniquely determined by the requirements

(1 + A(x)) (1 + B(x^2)) (1 + C(x^4))... = exp(x)
(1 - A(x)) (1 - B(x^2)) (1 - C(x^4))... = 1 - x/1! - x^2/2! + x^3/3! - x^4/4! + .... = nexp(x)

Perhaps I am wrong...!


Well, maybe I was a bit sloppy here.

If, in the function

(1 + A(x)) (1 + B(x^2)) (1 + C(x^4))... = exp(x)

                                        = 1 + x/1! + x^2/2! + x^3/3! ....

I change the sign of the A(x) parameter, then (... well, I have
to check whether I actually had a better reason ...) every
second sign is expected to become negative.
But then this is just e^-x.

But that assumption may not be uniquely justified by the
statement of the problem.
My first formulation was a bit different; it was to find
coefficients like

(1 + A x) (1 + B x^2 ) (1 + C x^4)... = exp(x)

Then I observed that A, B, ... could not be constants but must
be functions of x, with

A = (function of x)/x        B = (function of x)/x^2

In the notation I cancelled the x, x^2, x^4 from that concept,
so this reduced to the - possibly now misleading, and
maybe even incorrect - short form
(1 + A(x))(1 + B(x^2))....

In fact, what I got in the last posting seems to be
a simple telescoping product, like
f_0(x) = e^x = (1 + A(x))(1 + B(x))(1 + C(x))...

       = [ e^x / ((e^x + e^-x)/2) ] * [ ((e^x + e^-x)/2) / ((e^x + e^ix + e^-x + e^-ix)/4) ] * ....


f_1(x) = (1 - A(x))(1 + B(x))(1 + C(x))...

       = [ e^-x / ((e^x + e^-x)/2) ] * [ ((e^x + e^-x)/2) / ((e^x + e^ix + e^-x + e^-ix)/4) ] * ....


f_2(x) = (1 - A(x))(1 - B(x))(1 + C(x))...

       = [ e^-x / ((e^x + e^-x)/2) ] * [ ((e^ix + e^-ix)/2) / ((e^x + e^ix + e^-x + e^-ix)/4) ] * ....

and so on. Written with the product symbol, with m as index
and w the 2^m-th complex root of unity:


exp(x) =

f_0(x) = 1 * prod({m=1..oo}; sum({k=0..2^m-2 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))



f_1(x) =   prod({m=1..1};  sum({k=1..2^m-1 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))

         * prod({m=2..oo}; sum({k=0..2^m-2 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))



f_2(x) =   prod({m=1..2};  sum({k=1..2^m-1 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))

         * prod({m=3..oo}; sum({k=0..2^m-2 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x)))

....



nexp(x) =

f_oo(x) =  prod({m=1..oo}; sum({k=1..2^m-1 step 2}; e^(w^k x)) / sum({k=0..2^m-1}; e^(w^k x))) * 1
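
(A numerical cross-check of these product formulas, as a Python sketch: the helper names are made up here, each sum(...) is read as an average over its terms - matching the /2 and /4 normalisation in the explicit factors further above - and w is taken as exp(2*pi*i/2^m). With that reading the partial products for f_0 and f_1 come out numerically as e^x and e^-x.)

Code:

import cmath, math

def avg(m, x, odd):
    # average of e^(w^k x) over the odd (or even) k in 0..2^m-1,
    # with w = exp(2*pi*i/2^m) the 2^m-th root of unity
    w = cmath.exp(2j * math.pi / 2**m)
    ks = range(1, 2**m, 2) if odd else range(0, 2**m, 2)
    return sum(cmath.exp(w**k * x) for k in ks) / 2**(m - 1)

def avg_all(m, x):
    w = cmath.exp(2j * math.pi / 2**m)
    return sum(cmath.exp(w**k * x) for k in range(2**m)) / 2**m

def f(x, j, M=8):
    # partial product: "odd" numerators for m <= j, "even" numerators for m > j
    p = 1.0
    for m in range(1, M + 1):
        p *= avg(m, x, odd=(m <= j)) / avg_all(m, x)
    return p

x = 0.7
print(abs(f(x, 0) - math.exp(x)))    # f_0: difference from e^x  (should be tiny)
print(abs(f(x, 1) - math.exp(-x)))   # f_1: difference from e^-x (should be tiny)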


But I think that the solution is not unique if one does
not restrict the problem by the assumption that setting
(1 - A(x))*remainder instead of (1 + A(x))*remainder affects
exactly every second term of the power series. Without that
restriction I assume that other, possibly infinitely many,
solutions are possible.

-----------

But now, as I have an approach/solution, what does it tell me?

The first zero for real x>0 is another constant, at
x ~ 4.43608358091442223318282916733501415893571735877792
which is not in Plouffe's inverter...
What else...

Sigh... :-)

Gottfried Helms
matt271829-news@yahoo.co.

Posted: Sun Jul 02, 2006 10:27 pm    Post subject: Re: infinite product for exp-function-series?

Gottfried Helms wrote:
Quote:
On 01.07.2006 14:30, matt271829-news@yahoo.co.uk wrote:
Gottfried Helms

Off the top of my head I would imagine that there are infinitely many
possibilities for A(x), B(x), C(x), D(x)... Starting with any infinite
product representation exp(x) = f1(x)f2(x)f3(x)f4(x)... you can
construct functions A, B, C, D... to replicate this, can't you? Are
there some other conditions that you're trying to satisfy, or some
special form that you need for the functions?

(I glibly said "starting with any infinite product representation" as
if I could generate examples at will, which is not true. But there
must, in principle, be infinitely many I would have thought?)


Hmm, I don't think that more than one solution is possible.

My goal is to find one which makes sense in the (1+A(x))(1+B(x^2))...
way as well as in the (1-A(x))(1-B(x^2))... way. The goal was -
but maybe this is ridiculous... - to find a representation
which gives, in the first way:

1 + x/1! + x^2/2! + x^3/3! + x^4/4! + .... = exp(x)

and the other way

1 - x/1! - x^2/2! + x^3/3! - x^4/4! + .... = nexp(x)

where the signs follow the pattern of signs of

(1 - x)(1 - x^2)(1 - x^4)(1 - x^8) ...

when expanded to an infinite polynomial.

If I start from

f_0(x) = (1 + A(x)) * remainder = e^x
f_1(x) = (1 - A(x)) * remainder = e^-x

And, reading on, maybe I'm being slow but I can't figure out what
you're doing here. "(1 + A(x)) * remainder = e^x" makes sense to me,
but not "(1 - A(x)) * remainder = e^-x". How do you get to this from
the definition of nexp(x)? What's the connection with what went before?

I'm also still far from convinced that the functions A, B, C etc. are
uniquely determined by the requirements

(1 + A(x)) (1 + B(x^2)) (1 + C(x^4))... = exp(x)
(1 - A(x)) (1 - B(x^2)) (1 - C(x^4))... = 1 - x/1! - x^2/2! + x^3/3! - x^4/4! + .... = nexp(x)

Perhaps I am wrong...!


Quote:

then (1 + A(x))/(1 - A(x)) = e^(2x)

(1 + A(x)) = e^(2x) * (1 - A(x))

A(x) (e^(2x) + 1) = e^(2x) - 1

A(x) = (e^(2x) - 1)/(e^(2x) + 1) = (e^x - e^-x)/(e^x + e^-x) = tanh(x)

A(x) is completely determined.

The next one goes analogously, but the fiddling with terms gets
really tedious for C(x) or D(x)...

.....

Maybe that's all just crazy ... :-)

Gottfried Helms
Gottfried Helms

Posted: Sun Jul 02, 2006 7:27 pm    Post subject: Re: infinite product for exp-function-series?

On 02.07.2006 14:49, matt271829-news@yahoo.co.uk wrote:
Quote:
Gottfried Helms wrote:
On 01.07.2006 14:30, matt271829-news@yahoo.co.uk wrote:
Gottfried Helms
Off the top of my head I would imagine that there are infinitely many
possibilities for A(x), B(x), C(x), D(x)... Starting with any infinite
product representation exp(x) = f1(x)f2(x)f3(x)f4(x)... you can
construct functions A, B, C, D... to replicate this, can't you? Are
there some other conditions that you're trying to satisfy, or some
special form that you need for the functions?

(I glibly said "starting with any infinite product representation" as
if I could generate examples at will, which is not true. But there
must, in principle, be infinitely many I would have thought?)

Hmm, I don't think that more than one solution is possible.

My goal is to find one which makes sense in the (1+A(x))(1+B(x^2))...
way as well as in the (1-A(x))(1-B(x^2))... way. The goal was -
but maybe this is ridiculous... - to find a representation
which gives, in the first way:

1 + x/1! + x^2/2! + x^3/3! + x^4/4! + .... = exp(x)

and the other way

1 - x/1! - x^2/2! + x^3/3! - x^4/4! + .... = nexp(x)

where the signs follow the pattern of signs of

(1 - x)(1 - x^2)(1 - x^4)(1 - x^8) ...

when expanded to an infinite polynomial.

Oh. I didn't realise that you already had a definition of this "nexp"
function. I assumed from your original post that you were trying to
derive it from the A, B, C, D... that satisfied exp(x) = (1 + A(x))(1 +
B(x))(1 + C(x^4))(1 + D(x^8))... Hence my point that I didn't think
A, B, C, D, ... were uniquely determined by this equation (alone).

(But if, as you say, you want "to investigate nexp(x) better" then why
not just use the definition above?)

Hmmm, just fiddling. I was interested in whether this type
of conversion into an infinite product is possible at all.
Now it seems so, but I don't know whether there is any
use for it - currently it just fills an empty space in
my NT lego box... :-)

I guess now it is something like

(1 + (e^x - e^-x)/(e^x + e^-x)) * (1 + ((e^x + e^-x) - (e^ix + e^-ix))/((e^x + e^-x) + (e^ix + e^-ix))) * (1 + ...) = exp(x)

and

(1 - (e^x - e^-x)/(e^x + e^-x)) * (1 - ((e^x + e^-x) - (e^ix + e^-ix))/((e^x + e^-x) + (e^ix + e^-ix))) * (1 - ...) = nexp(x)


where, with w the 2^n-th complex root of unity, the n-th denominator and numerator are

denominator = sum(k=0..2^n-1; e^(kw))
            = sum(k=0,2,4,..,2^n-2; e^(kw)) + sum(k=1,3,5,..,2^n-1; e^(kw))

numerator   = sum(k=0,2,4,..,2^n-2; e^(kw)) - sum(k=1,3,5,..,2^n-1; e^(kw))


but this surely needs some more attention as to how to deal with the
complex roots of unity; maybe they must occur as coefficients as well...
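
(A small Python sketch of one reading of these factors: it takes e^(w^k x) with w = exp(2*pi*i/2^n) for the terms, since, as noted above, the handling of the unit roots is still open, and the helper names are made up. Under that assumption the n=1 factor reproduces 1 + tanh(x), the n=2 factor reproduces the second factor of the product above, and the first few '+' factors multiplied together already come close to exp(x).)

Code:

import cmath, math

def even_odd_sums(n, x):
    # sums of e^(w^k x) over even and odd k, with w = exp(2*pi*i/2^n)
    w = cmath.exp(2j * math.pi / 2**n)
    terms = [cmath.exp(w**k * x) for k in range(2**n)]
    return sum(terms[0::2]), sum(terms[1::2])

def factor(n, x):
    even_s, odd_s = even_odd_sums(n, x)   # numerator = even_s - odd_s, denominator = even_s + odd_s
    return 1 + (even_s - odd_s) / (even_s + odd_s)

x = 0.9
# n=1 gives 1 + (e^x - e^-x)/(e^x + e^-x) = 1 + tanh(x),
# n=2 gives 1 + (cosh(x) - cos(x))/(cosh(x) + cos(x)), i.e. the second factor above
print(abs(factor(1, x) - (1 + math.tanh(x))))
print(abs(factor(2, x) - (1 + (math.cosh(x) - math.cos(x)) / (math.cosh(x) + math.cos(x)))))

# the product of the first few '+' factors already comes close to exp(x)
p = 1.0
for n in range(1, 7):
    p *= factor(n, x)
print(abs(p - math.exp(x)))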

Gottfried Helms
matt271829-news@yahoo.co.

Posted: Sun Jul 02, 2006 12:49 pm    Post subject: Re: infinite product for exp-function-series?

Gottfried Helms wrote:
Quote:
On 01.07.2006 14:30, matt271829-news@yahoo.co.uk wrote:
Gottfried Helms

Off the top of my head I would imagine that there are infinitely many
possibilities for A(x), B(x), C(x), D(x)... Starting with any infinite
product representation exp(x) = f1(x)f2(x)f3(x)f4(x)... you can
construct functions A, B, C, D... to replicate this, can't you? Are
there some other conditions that you're trying to satisfy, or some
special form that you need for the functions?

(I glibly said "starting with any infinite product representation" as
if I could generate examples at will, which is not true. But there
must, in principle, be infinitely many I would have thought?)


Hmm, I don't think that more than one solution is possible.

My goal is to find one which makes sense in the (1+A(x))(1+B(x^2))...
way as well as in the (1-A(x))(1-B(x^2))... way. The goal was -
but maybe this is ridiculous... - to find a representation
which gives, in the first way:

1 + x/1! + x^2/2! + x^3/3! + x^4/4! + .... = exp(x)

and the other way

1 - x/1! - x^2/2! + x^3/3! - x^4/4! + .... = nexp(x)

where the signs follow the pattern of signs of

(1 - x)(1 - x^2)(1 - x^4)(1 - x^8) ...

when expanded to an infinite polynomial.

Oh. I didn't realise that you already had a definition of this "nexp"
function. I assumed from your original post that you were trying to
derive it from the A, B, C, D... that satisfied exp(x) = (1 + A(x))(1 +
B(x))(1 + C(x^4))(1 + D(x^8))... Hence my point that I didn't think
A, B, C, D, ... were uniquely determined by this equation (alone).

(But if, as you say, you want "to investigate nexp(x) better" then why
not just use the definition above?)

Quote:

If I start from

f_0(x) = (1 + A(x)) * remainder = e^x
f_1(x) = (1 - A(x)) * remainder = e^-x

then (1 + A(x))/(1 - A(x)) = e^(2x)

(1 + A(x)) = e^(2x) * (1 - A(x))

A(x) (e^(2x) + 1) = e^(2x) - 1

A(x) = (e^(2x) - 1)/(e^(2x) + 1) = (e^x - e^-x)/(e^x + e^-x) = tanh(x)

A(x) is completely determined.

The next one goes analogously, but the fiddling with terms gets
really tedious for C(x) or D(x)...

.....

Maybe that's all just crazy ... :-)

Gottfried Helms
Gottfried Helms

Posted: Sun Jul 02, 2006 9:34 am    Post subject: Re: infinite product for exp-function-series?

On 01.07.2006 14:30, matt271829-news@yahoo.co.uk wrote:
Quote:
Gottfried Helms

Off the top of my head I would imagine that there are infinitely many
possibilities for A(x), B(x), C(x), D(x)... Starting with any infinite
product representation exp(x) = f1(x)f2(x)f3(x)f4(x)... you can
construct functions A, B, C, D... to replicate this, can't you? Are
there some other conditions that you're trying to satisfy, or some
special form that you need for the functions?

(I glibly said "starting with any infinite product representation" as
if I could generate examples at will, which is not true. But there
must, in principle, be infinitely many I would have thought?)


Hmm, I don't think that more than one solution is possible.

My goal is to find one which makes sense in the (1+A(x))(1+B(x^2))...
way as well as in the (1-A(x))(1-B(x^2))... way. The goal was -
but maybe this is ridiculous... - to find a representation
which gives, in the first way:

1 + x/1! + x^2/2! + x^3/3! + x^4/4! + .... = exp(x)

and the other way

1 - x/1! - x^2/2! + x^3/3! - x^4/4! + .... = nexp(x)

where the signs follow the pattern of signs of

(1 - x)(1 - x^2)(1 - x^4)(1 - x^8) ...

when expanded to an infinite polynomial.
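
(The sign pattern can be made concrete with a short Python sketch: expand (1 - x)(1 - x^2)(1 - x^4)... into a truncated polynomial and attach its coefficients as signs to the exp series; the names below are only for this sketch.)

Code:

from math import factorial

N = 40                                   # truncation degree

# coefficients of (1 - x)(1 - x^2)(1 - x^4)(1 - x^8)..., truncated at degree N
coeffs = [0] * (N + 1)
coeffs[0] = 1
p = 1
while p <= N:
    for n in range(N, p - 1, -1):        # multiply in place by (1 - x^p)
        coeffs[n] -= coeffs[n - p]
    p *= 2

def nexp(x):
    # exp series with the sign pattern taken from the product above
    return sum(coeffs[n] * x**n / factorial(n) for n in range(N + 1))

print(coeffs[:9])   # [1, -1, -1, 1, -1, 1, 1, -1, -1]: signs of 1 - x/1! - x^2/2! + x^3/3! - x^4/4! ...
print(nexp(0.5), nexp(1.0))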

If I start from

f_0(x) = (1 + A(x)) * remainder = e^x
f_1(x) = (1 - A(x)) * remainder = e^-x

then (1 + A(x))/(1 - A(x)) = e^(2x)

(1 + A(x)) = e^(2x) * (1 - A(x))

A(x) (e^(2x) + 1) = e^(2x) - 1

A(x) = (e^(2x) - 1)/(e^(2x) + 1) = (e^x - e^-x)/(e^x + e^-x) = tanh(x)

A(x) is completely determined.

The next one goes analogously, but the fiddling with terms gets
really tedious for C(x) or D(x)...
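
(A minimal numeric sanity check of this derivation, as a Python sketch: with A(x) = tanh(x) the common remainder works out to cosh(x), and the identities above hold to machine precision.)

Code:

import math

# with A(x) = tanh(x) both requirements are met with remainder = cosh(x),
# and (1 + A)/(1 - A) = e^(2x)
for x in (0.1, 0.5, 1.3, 2.0):
    A = math.tanh(x)
    r = math.cosh(x)
    assert abs((1 + A) * r - math.exp(x)) < 1e-12
    assert abs((1 - A) * r - math.exp(-x)) < 1e-12
    assert abs((1 + A) / (1 - A) - math.exp(2 * x)) < 1e-9
print("all checks passed")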

......

Maybe that's all just crazy ... :-)

Gottfried Helms
matt271829-news@yahoo.co.

Posted: Sat Jul 01, 2006 12:30 pm    Post subject: Re: infinite product for exp-function-series?

Gottfried Helms wrote:
Quote:
I tried to convert the series-expansion
of the exp(x)-function into an infinite
product of the form:


exp(x) = (1 + a x)(1 + b x^2)(1 + c x^4)(1 + d x^8)....

with the aim of being able to formulate a
sort of complementary function


nexp(x) = (1 - a x)(1 - b x^2)(1 - c x^4)(1 - d x^8)....


It is relatively obvious that the coefficients a, b, c
cannot be constants, but must be functions of x, like A(x),
B(x), C(x),

such that
exp(x) = (1 + A(x))(1 + B(x^2))(1 + C(x^4))(1 + D(x^8))....


What I've got so far is

A(x) = (e^-x - e^x)/(e^-x + e^x) = -sinh(x)/cosh(x) = -tanh(x)


and

B(x) = ((e^ix + e^-ix) - (e^x + e^-x)) / ((e^ix + e^-ix) + (e^x + e^-x)) = (cos(x) - cosh(x))/(cos(x) + cosh(x)) = ??? a name not known...

and a somewhat handy recursion scheme, but the notation
of the calculations grows too much, so I could
not formulate a more general expression for the terms.

I think with a symbolic program this would be easy. I
assume, with j as the square root of i, it should be
something like

C(x) = ((e^jx + e^-jx) - (e^ix + e^-ix)) / ((e^jx + e^-jx) + (e^ix + e^-ix))


I approximated that function by the series representation with
appropriate signs and found that it is oscillating, and that the
first zero with positive x is about
4.43608358091442223318282916733501415893571735877792,
a constant which I could not find in Plouffe's inverter or the OEIS...

To investigate nexp(x) better I would like to find a general
expression for the functions A(x),B(x), C(x) first.

Any ideas appreciated.

Gottfried Helms

Off the top of my head I would imagine that there are infinitely many
possibilities for A(x), B(x), C(x), D(x)... Starting with any infinite
product representation exp(x) = f1(x)f2(x)f3(x)f4(x)... you can
construct functions A, B, C, D... to replicate this, can't you? Are
there some other conditions that you're trying to satisfy, or some
special form that you need for the functions?

(I glibly said "starting with any infinite product representation" as
if I could generate examples at will, which is not true. But there
must, in principle, be infinitely many I would have thought?)
Gottfried Helms

Posted: Fri Jun 30, 2006 12:05 pm    Post subject: infinite product for exp-function-series?

I tried to convert the series-expansion
of the exp(x)-function into an infinite
product of the form:


exp(x) = (1 + a x)(1 + b x^2)(1 + c x^4)(1 + d x^8)....

with the aim of being able to formulate a
sort of complementary function


nexp(x) = (1 - a x)(1 - b x^2)(1 - c x^4)(1 - d x^8)....


It is relatively obvious that the coefficients a, b, c
cannot be constants, but must be functions of x, like A(x),
B(x), C(x),

such that
exp(x) = (1 + A(x))(1 + B(x^2))(1 + C(x^4))(1 + D(x^8))....


What I've got so far is

A(x) = (e^-x - e^x)/(e^-x + e^x) = -sinh(x)/cosh(x) = -tanh(x)


and

B(x) = ((e^ix + e^-ix) - (e^x + e^-x)) / ((e^ix + e^-ix) + (e^x + e^-x)) = (cos(x) - cosh(x))/(cos(x) + cosh(x)) = ??? a name not known...

and a somewhat handy recursion scheme, but the notation
of the calculations grows too much, so I could
not formulate a more general expression for the terms.

I think with a symbolic program this would be easy. I
assume, with j as the square root of i, it should be
something like

C(x) = ((e^jx + e^-jx) - (e^ix + e^-ix)) / ((e^jx + e^-jx) + (e^ix + e^-ix))


I approximated that function by the series representation with
appropriate signs and found that it is oscillating, and that the
first zero with positive x is about
4.43608358091442223318282916733501415893571735877792,
a constant which I could not find in Plouffe's inverter or the OEIS...
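
(A quick numeric check, as a Python sketch with made-up variable names, that the exponential forms of A(x) and B(x) above reduce to -tanh(x) and to (cos(x) - cosh(x))/(cos(x) + cosh(x)) respectively.)

Code:

import cmath, math

for x in (0.3, 1.0, 2.5):
    A = (math.exp(-x) - math.exp(x)) / (math.exp(-x) + math.exp(x))
    cos2 = cmath.exp(1j * x) + cmath.exp(-1j * x)    # = 2*cos(x)
    cosh2 = math.exp(x) + math.exp(-x)               # = 2*cosh(x)
    B = (cos2 - cosh2) / (cos2 + cosh2)
    assert abs(A + math.tanh(x)) < 1e-12             # A(x) = -tanh(x)
    assert abs(B - (math.cos(x) - math.cosh(x)) / (math.cos(x) + math.cosh(x))) < 1e-12
print("all checks passed")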

To investigate nexp(x) better I would like to find a general
expression for the functions A(x),B(x), C(x) first.

Any ideas appreciated.

Gottfried Helms