Author 
Message 
Konrad Viltersten science forum addict
Joined: 27 Jun 2005
Posts: 79

Posted: Sat Jul 01, 2006 5:45 am Post subject:
Re: Are those variances equal?



Quote:  Suppose you start off with this expression.
V[W_t / t^2]
and you wish to show that it tends towards 0 as t -> oo.
What we tried is this rephrasing.
(1/t) * V[W_t / t]
Does it hold?
And then, is it possible to use the fact that
lim t->oo (W_t / t) = 0 (a.s.)
and do a rewriting as follows?
lim t->oo (V[W_t / t]) = V[lim t->oo (W_t / t)]
1. V(a W_t) = a^2 V(W_t) for any scalar a.

True, but how does it work if we have a process, let's
say a Wiener process like {W_t/t}_t>=0. Is it still OK
to regard the t as a scalar? I'd say so, because it's a
deterministic value, but I'm a little unsure...
Quote:  2. Convergence a.s. cannot be converted to convergence in variance
without some kind of "dominated convergence."

Alright, I take that as a "definitely maybe". I'll look into
what conditions I have and hopefully something will pop up.
Thank you.
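As a sanity check, a small Monte Carlo sketch in Python (not anyone's original computation): for a standard Wiener process V[W_t] = t, so the scalar rule gives V[W_t / t^2] = (1/t^4) * t = 1/t^3, which does tend to 0. Note that the scalar rule actually yields V[W_t / t^2] = (1/t^2) * V[W_t / t], i.e. a 1/t^2 factor rather than 1/t.

```python
import math
import random

def var_of_scaled_wiener(t, n=100_000, seed=0):
    # W_t ~ N(0, t) for a standard Wiener process; sample W_t / t^2
    # and return the sample variance of those draws.
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, math.sqrt(t)) / t ** 2 for _ in range(n)]
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

# The estimates track the exact value 1/t^3 and shrink toward 0:
for t in (1.0, 10.0, 100.0):
    print(t, var_of_scaled_wiener(t), 1.0 / t ** 3)
```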

Kind regards
Konrad

Sleep - thing used by ineffective people
as a substitute for coffee
Ambition - a poor excuse for not having
enough sense to be lazy
 

Back to top 


Greg Heath science forum beginner
Joined: 24 Feb 2005
Posts: 14

Posted: Fri Jun 09, 2006 1:46 pm Post subject:
Re: Normal distribution function with skew and kurtosis



Phil Sherrod wrote:
Quote:  The probability density function for the normal distribution with mean 'm'
and standard deviation 's' is:
1/(s*sqrt(2*Pi)) * exp(-(X-m)^2 / (2*s^2))
How can this formula be generalized to include skew and kurtosis?

Go to Google Groups and search on
Johnson transformations
Hope this helps.
Greg 
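For reference, here is the corrected normal density alongside one member of the Johnson system Greg points to: the S_U family, which introduces skew and kurtosis through the shape parameters gamma and delta. A Python sketch, using what I believe is the standard parameterization (z = gamma + delta*asinh((x - xi)/lambda) is standard normal):

```python
import math

def normal_pdf(x, m, s):
    # Normal density; note the minus sign and the s^2 in the exponent.
    return math.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))

def johnson_su_pdf(x, gamma, delta, xi, lam):
    # Johnson S_U density: z = gamma + delta*asinh((x - xi)/lam) ~ N(0, 1),
    # so the density is phi(z) times dz/dx.
    y = (x - xi) / lam
    z = gamma + delta * math.asinh(y)
    return (delta / (lam * math.sqrt(2 * math.pi))) \
        * math.exp(-0.5 * z * z) / math.sqrt(1.0 + y * y)
```

With gamma = 0 and xi = 0 the S_U density is symmetric; a nonzero gamma skews it, and delta controls the tail weight.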

Back to top 


Elmer science forum beginner
Joined: 16 Jan 2006
Posts: 7

Posted: Mon Jan 16, 2006 9:51 pm Post subject:
Re: Probability and Spontaneous Proteins



Javriol wrote:
Quote:  Elmer wrote:
Tudman Todmorden wrote:
(snip)
But any event that
has one chance in just 10^50 is dismissed by mathematicians as
never happening.
(snip)
Bull. Please name any mathematician who claims this.
1/10^50 does not, will not, and never will, equal zero.
ask any..

What? Ask any mathematician and they will not say this. This is simply a
lie. There is no way that
1/10^50 = 0
It does not. 
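A trivial check with exact rational arithmetic, via Python's fractions module (the 10^50 figure is the usual "Borel's law" threshold quoted in these arguments):

```python
from fractions import Fraction

p = Fraction(1, 10 ** 50)   # one chance in 10^50, represented exactly
print(p > 0)                # True: tiny, but strictly positive
print(p == 0)               # False: it does not equal zero
```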

Back to top 


Javriol science forum beginner
Joined: 16 Jan 2006
Posts: 1

Posted: Mon Jan 16, 2006 5:48 pm Post subject:
Re: Probability and Spontaneous Proteins



Elmer wrote:
Quote:  Tudman Todmorden wrote:
(snip)
But any event that
has one chance in just 10^50 is dismissed by mathematicians as
never happening.
(snip)
Bull. Please name any mathematician who claims this.
1/10^50 does not, will not, and never will, equal zero.

ask any.. 

Back to top 


manuel1109 science forum beginner
Joined: 21 Nov 2005
Posts: 3

Posted: Mon Nov 21, 2005 4:03 am Post subject:
Re: Does poker betting exaggerate skill differences?



I believe the roulette example is commonly known as the Gambler's Dilemma.
http://www.everything2.com/index.pl?node=the%20gambler%27s%20dilemma
As for the poker example I would suppose that having the cushion of money
is the major benefit, and that you are right, someone who has little skill
at bluffing and counting hands would benefit from a quick game.
On Thu, 20 Oct 2005 01:15:42 -0400, Mark Spahn wrote:
Quote: 
"Pavel314" <Pavel314@NOSPAM.comcast.net> wrote in message news:WMSdndCwYeeqY8veRVnsg@comcast.com...
"Mark Spahn" <mspahn@localnet.com> wrote in message news:11l6nr3et21lnb2@corp.supernews.com...
Suppose five friends A, B, C, D, E have a weekly evening
of poker in which they each start with $10 and play until
one player has won all the money (a gain of $40).
Their skills differ so that their respective probabilities of
winning a hand of poker are .22, .21, .20, .19, .18.
Are their probabilities of winning for the evening (= 20 hands;
is that a reasonable number?) identical to their probabilities
of winning a hand?
During the course of a poker evening, as one player gains
more money than the others, does his having more money
to bet confer an advantage, so his probability of winning
for the evening rises? If so, then the players' probabilities
of winning an evening of poker might be, say,
.24, .22, .20, .18, .16.
Can anyone shed some light on this question?
 Mark Spahn
Mark,
I wrote a program in Ubasic to simulate the five friends playing poker. The program simulated 10,000 weekly games or 192.3 years. The program assumes the same bet on each hand. When a player loses all his money, he's out of the game and the others play on. When the final player has the entire $50, the game is over. Higher skill on the per-hand level definitely leverages your odds at the evening game level.
RESULTS
Player Win Probability   $1 Bet    $5 Bet    $10 Bet
.18                       1.47%     5.86%    17.95%
.19                       5.32%    10.60%    18.70%
.20                      13.11%    17.82%    20.45%
.21                      29.25%    27.99%    20.86%
.22                      50.85%    37.73%    22.04%
In my earlier reply to your problem, I said that I expected the less skilled players to have better odds with larger bets but I never expected the disproportionate results on the $1 bet games. It seems that over the long haul, skill gives you the ability to build up the cash cushion which helps you survive runs of bad luck later in the game.
Just as a test, I set everyone at the same skill level. Even at the $1 bet level the results are fairly even, which increases my confidence in my simulation program.
Player Win Probability   $1 Bet
.20                      19.92%
.20                      20.48%
.20                      19.73%
.20                      20.07%
.20                      19.80%
Finally, I let four players have equal skill level and set the fifth at a significantly higher level. The results are as you might expect:
Player Win Probability   $1 Bet
.18                       2.98%
.18                       3.12%
.18                       2.69%
.18                       3.15%
.28                      88.06%
The moral is that if you're against better opponents, put everything on one test and trust to luck; if you are the most skilled, try to draw the contest out to bring your skill into play.
E.g., in roulette the house is the more "skilled" player by virtue of having the odds in its favor, I believe 52% for the house to 48% for the player. So put all your chips on one roll and cross your fingers.
Any suggestions on how to apply this lesson to the stock market?
Paul
Paul,
Wow, bravo! I am impressed, and just as surprised as you at how much the ability to accumulate a bankroll magnifies the hand-winning probability into a much larger game-winning probability. I remember reading somewhere that Richard Nixon, when he learned poker while in the Navy, observed and studied many, many games before he actually began to bet money. It looks like that was a good plan, because the skill of a neophyte player, even if only slightly below par, will be decisively overwhelmed by the better skill of the other players.
 Mark

Back to top 


Mark Spahn science forum addict
Joined: 07 Jul 2005
Posts: 62

Posted: Thu Oct 20, 2005 5:15 am Post subject:
Re: Does poker betting exaggerate skill differences?



"Pavel314" <Pavel314@NOSPAM.comcast.net> wrote in message news:WMSdndCwYeeqY8veRVnsg@comcast.com...
"Mark Spahn" <mspahn@localnet.com> wrote in message news:11l6nr3et21lnb2@corp.supernews.com...
Suppose five friends A, B, C, D, E have a weekly evening
of poker in which they each start with $10 and play until
one player has won all the money (a gain of $40).
Their skills differ so that their respective probabilities of
winning a hand of poker are .22, .21, .20, .19, .18.
Are their probabilities of winning for the evening (= 20 hands;
is that a reasonable number?) identical to their probabilities
of winning a hand?
During the course of a poker evening, as one player gains
more money than the others, does his having more money
to bet confer an advantage, so his probability of winning
for the evening rises? If so, then the players' probabilities
of winning an evening of poker might be, say,
.24, .22, .20, .18, .16.
Can anyone shed some light on this question?
 Mark Spahn
Mark,
I wrote a program in Ubasic to simulate the five friends playing poker. The program simulated 10,000 weekly games or 192.3 years. The program assumes the same bet on each hand. When a player loses all his money, he's out of the game and the others play on. When the final player has the entire $50, the game is over. Higher skill on the per-hand level definitely leverages your odds at the evening game level.
RESULTS
Player Win Probability   $1 Bet    $5 Bet    $10 Bet
.18                       1.47%     5.86%    17.95%
.19                       5.32%    10.60%    18.70%
.20                      13.11%    17.82%    20.45%
.21                      29.25%    27.99%    20.86%
.22                      50.85%    37.73%    22.04%
In my earlier reply to your problem, I said that I expected the less skilled players to have better odds with larger bets but I never expected the disproportionate results on the $1 bet games. It seems that over the long haul, skill gives you the ability to build up the cash cushion which helps you survive runs of bad luck later in the game.
Just as a test, I set everyone at the same skill level. Even at the $1 bet level the results are fairly even, which increases my confidence in my simulation program.
Player Win Probability   $1 Bet
.20                      19.92%
.20                      20.48%
.20                      19.73%
.20                      20.07%
.20                      19.80%
Finally, I let four players have equal skill level and set the fifth at a significantly higher level. The results are as you might expect:
Player Win Probability   $1 Bet
.18                       2.98%
.18                       3.12%
.18                       2.69%
.18                       3.15%
.28                      88.06%
The moral is that if you're against better opponents, put everything on one test and trust to luck; if you are the most skilled, try to draw the contest out to bring your skill into play.
E.g., in roulette the house is the more "skilled" player by virtue of having the odds in its favor, I believe 52% for the house to 48% for the player. So put all your chips on one roll and cross your fingers.
Any suggestions on how to apply this lesson to the stock market?
Paul
Paul,
Wow, bravo! I am impressed, and just as surprised as you at how much the ability to accumulate a bankroll magnifies the hand-winning probability into a much larger game-winning probability. I remember reading somewhere that Richard Nixon, when he learned poker while in the Navy, observed and studied many, many games before he actually began to bet money. It looks like that was a good plan, because the skill of a neophyte player, even if only slightly below par, will be decisively overwhelmed by the better skill of the other players.
 Mark 
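Pavel's Ubasic program isn't posted, but the mechanics he describes can be re-created in a short Python sketch. My assumptions (not spelled out in the thread): each hand, one winner is drawn in proportion to the fixed per-hand skills of the still-active players, and collects the bet, or whatever is left of it, from every other active player.

```python
import random

def play_night(skills, bet=1, stakes=10, rng=random):
    # One evening: each hand, a winner is drawn in proportion to the
    # players' fixed skills and collects `bet` (or whatever is left)
    # from every other active player; busted players drop out.
    stacks = [stakes] * len(skills)
    active = list(range(len(skills)))
    while len(active) > 1:
        w = rng.choices(active, weights=[skills[i] for i in active])[0]
        for i in active:
            if i != w:
                pay = min(bet, stacks[i])
                stacks[i] -= pay
                stacks[w] += pay
        active = [i for i in active if stacks[i] > 0]
    return active[0]          # index of the evening's winner

def win_freqs(skills, n=10_000, bet=1, seed=0):
    # Estimate each player's probability of winning the whole evening.
    rng = random.Random(seed)
    wins = [0] * len(skills)
    for _ in range(n):
        wins[play_night(skills, bet=bet, rng=rng)] += 1
    return [w / n for w in wins]
```

With skills (.22, .21, .20, .19, .18) and a $1 bet, this reproduces the qualitative pattern above: the most skilled player wins far more than his 22% per-hand share of evenings, and equal skills give roughly equal evening odds.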

Back to top 


Pavel314 science forum addict
Joined: 29 Apr 2005
Posts: 78

Posted: Thu Oct 20, 2005 2:17 am Post subject:
Re: Does poker betting exaggerate skill differences?



"Mark Spahn" <mspahn@localnet.com> wrote in message news:11l6nr3et21lnb2@corp.supernews.com...
Suppose five friends A, B, C, D, E have a weekly evening
of poker in which they each start with $10 and play until
one player has won all the money (a gain of $40).
Their skills differ so that their respective probabilities of
winning a hand of poker are .22, .21, .20, .19, .18.
Are their probabilities of winning for the evening (= 20 hands;
is that a reasonable number?) identical to their probabilities
of winning a hand?
During the course of a poker evening, as one player gains
more money than the others, does his having more money
to bet confer an advantage, so his probability of winning
for the evening rises? If so, then the players' probabilities
of winning an evening of poker might be, say,
.24, .22, .20, .18, .16.
Can anyone shed some light on this question?
 Mark Spahn
Mark,
I wrote a program in Ubasic to simulate the five friends playing poker. The program simulated 10,000 weekly games or 192.3 years. The program assumes the same bet on each hand. When a player loses all his money, he's out of the game and the others play on. When the final player has the entire $50, the game is over. Higher skill on the per-hand level definitely leverages your odds at the evening game level.
RESULTS
Player Win Probability   $1 Bet    $5 Bet    $10 Bet
.18                       1.47%     5.86%    17.95%
.19                       5.32%    10.60%    18.70%
.20                      13.11%    17.82%    20.45%
.21                      29.25%    27.99%    20.86%
.22                      50.85%    37.73%    22.04%
In my earlier reply to your problem, I said that I expected the less skilled players to have better odds with larger bets but I never expected the disproportionate results on the $1 bet games. It seems that over the long haul, skill gives you the ability to build up the cash cushion which helps you survive runs of bad luck later in the game.
Just as a test, I set everyone at the same skill level. Even at the $1 bet level the results are fairly even, which increases my confidence in my simulation program.
Player Win Probability   $1 Bet
.20                      19.92%
.20                      20.48%
.20                      19.73%
.20                      20.07%
.20                      19.80%
Finally, I let four players have equal skill level and set the fifth at a significantly higher level. The results are as you might expect:
Player Win Probability   $1 Bet
.18                       2.98%
.18                       3.12%
.18                       2.69%
.18                       3.15%
.28                      88.06%
The moral is that if you're against better opponents, put everything on one test and trust to luck; if you are the most skilled, try to draw the contest out to bring your skill into play.
E.g., in roulette the house is the more "skilled" player by virtue of having the odds in its favor, I believe 52% for the house to 48% for the player. So put all your chips on one roll and cross your fingers.
Any suggestions on how to apply this lesson to the stock market?
Paul 
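Paul's "put all your chips on one roll" moral is the classic bold-play result, and for even-money bets it can be checked directly against the gambler's-ruin formula. A sketch (I use the double-zero wheel's 18/38 win probability, i.e. roughly the 52/48 split mentioned above):

```python
def reach_goal_prob(p, start, goal, bet=1):
    # Gambler's ruin: probability of reaching `goal` before going broke,
    # starting from `start`, with win probability p on each even-money
    # wager of size `bet`.
    q = 1.0 - p
    n, a = goal // bet, start // bet
    if abs(p - 0.5) < 1e-12:
        return a / n
    r = q / p
    return (r ** a - 1.0) / (r ** n - 1.0)

p_win = 18 / 38                                  # double-zero wheel, even-money bet
timid = reach_goal_prob(p_win, 50, 100, bet=1)   # $1 at a time: well under 1%
bold = reach_goal_prob(p_win, 50, 100, bet=50)   # all on one spin: 18/38
print(f"timid: {timid:.4f}  bold: {bold:.4f}")
```

So against a house edge, grinding out $1 bets is dramatically worse than one bold wager, which is exactly the asymmetry the poker simulation shows from the skilled side.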

Back to top 


Joe science forum beginner
Joined: 25 Jun 2005
Posts: 22

Posted: Fri Aug 26, 2005 10:44 pm Post subject:
Re: question about linear regression



"Dan Akers" <digikey@webtv.net> wrote in message
news:490430E6D61342@storefull3131.bay.webtv.net...
Quote:  Joe wrote;
"I am using 100-200 X,Y data pairs. So there can be several values of Y
for any given value of X. Is that what you mean by nonlinear?"
_____________________________________
Re;
I just reread your latest post and I realize that I did not comprehend on
my first read, that you sometimes have multiple Y values for some X
values. If Y is indeed dependent on X, well, that right there tells you
that there is some issue with repeatability. Hence the empirical data
has a probabilistic error band associated with it.
Given that, I would suggest that you calculate the Y error terms (Y1-Y2)
for each X, where Y1 is the empirical and Y2 the calculated or
predicted, plug the summation of these into the standard deviation
equation I posted earlier to get the standard deviation of the dependent
variable Y, and then present the data with the primary calculated curve
along with the probabilistic range of your choosing.
Dan Akers

Hi Dan,
Thank you for the ideas on the standard deviation. I think that is the same
as the standard error that my new book mentions. Still, it doesn't look to
me like a line is going to be able to model this data. I looked at a
scatterplot as you suggested, and it looks like a plane might be better than
a line. I need to backtrack first and do some analysis of correlation on
just the data before I proceed with this any further.
Joe 
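Dan's recipe (the residuals Y1-Y2 pushed through the standard-deviation equation) amounts to the residual standard error of the fit, which is likely the "standard error" Joe's book mentions. A minimal Python sketch, assuming ordinary least squares for a straight line:

```python
import math

def linfit(xs, ys):
    # Ordinary least squares for y = a + b*x; returns (a, b).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def residual_std_error(xs, ys):
    # sqrt(SSE / (n - 2)): the spread of Y about the fitted line,
    # with 2 degrees of freedom used up by the intercept and slope.
    a, b = linfit(xs, ys)
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    return math.sqrt(sse / (len(xs) - 2))
```

Reporting the fitted line together with, say, a +/- 2 residual-standard-error band gives the "probabilistic range" Dan describes.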

Back to top 


Joe science forum beginner
Joined: 25 Jun 2005
Posts: 22

Posted: Fri Aug 26, 2005 10:41 pm Post subject:
Re: question about linear regression



"Duncan Smith" <buzzard@urubu.freeserve.co.uk> wrote in message
news:deknh9$m3l$1@newsm1.svr.pol.co.uk...
Quote:  Joe wrote:
"Dan Akers" <digikey@webtv.net> wrote in message
news:11828430CEF19132@storefull3135.bay.webtv.net...
Joe wrote;
snip
"I would like to make my best fit equation fit the data a little better,
can anyone suggest a method to accomplish this?
I am currently in the process of measuring the actual-predicted errors
(absolute value of), but I am not really sure what to do with those
errors or how to minimize them.
Any suggestions, sources, web sites, or tutorials greatly appreciated."
_____________________________________
Re;
Why are you using linear regression to model what seems to be a
nonlinear relation? Try a polynomial fit; quadratic, cubic, or even
higher order if you have the data...
Dan Akers
Hi Dan,
Thank you for the suggestions. As I said in my last post, I am using a
program called curvefit 1.3. It will automatically model over 30
equations
to the data. It does polynomials, quadratics, reciprocal logs, just about
any model you could think of. It even does interpolation (4 different
types)
You can also define your own models and have it find the coefficients. I
am
using 100-200 X,Y data pairs. So there can be several values of Y for
any
given value of X. Is that what you mean by nonlinear? When I look at
some
of the other models, they seem to be the same. They can't predict to one
decimal place. Maybe I am expecting too much from these models.
Maybe. If there's variability in the data (which there clearly is),
then there's a limit to how accurate / precise the predictions can be.
It's easy to find a model that will predict the data you're using to fit
it (exactly), but that will usually be an atrocious model for predicting
'new' data. For some X you have several values for Y. Are these Y
values invariably within 0.1 of each other? If not, how could you ever
find a model that will guarantee to give predicted values within 0.1 of
the true value of Y?
Duncan

Hi Duncan,
Well, that was sort of my point (pardon the pun). When I look at that many
data sets on a scatter plot, it looks more like a plane might define it
better, but I am not sure how to fit data to a plane, rather than a line.
No, the Y values are not within 0.1 of each other. That was why I was
thinking of using a look up table instead of trying to use an equation. My
new statistics book that I picked up yesterday has me intrigued with ideas
on covariance and correlation, so I am going to run those tests first, and
try to do this step by step. It also discusses multiple regression at the
end of the book, but I have no idea what that is, yet. It is a little more
advanced than my old book from college.
Joe 
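Fitting a plane z = a + b*x + c*y instead of a line is exactly the multiple regression Joe's book mentions: least squares still applies, just with two predictors. A self-contained sketch via the normal equations (in practice a library routine such as numpy.linalg.lstsq does this; the data here are made up for illustration):

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_plane(xs, ys, zs):
    # Least squares for z = a + b*x + c*y: solve (X^T X) beta = X^T z
    # for the design-matrix columns [1, x, y].
    cols = [[1.0] * len(xs), list(xs), list(ys)]
    A = [[sum(u * v for u, v in zip(ci, cj)) for cj in cols] for ci in cols]
    b = [sum(u * z for u, z in zip(ci, zs)) for ci in cols]
    return solve3(A, b)   # [a, b, c]
```

The same residual-standard-error idea carries over, with n - 3 degrees of freedom instead of n - 2.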

Back to top 


Duncan Smith science forum beginner
Joined: 29 Apr 2005
Posts: 21

Posted: Thu Aug 25, 2005 1:19 pm Post subject:
Re: question about linear regression



Joe wrote:
Quote:  "Dan Akers" <digikey@webtv.net> wrote in message
news:11828430CEF19132@storefull3135.bay.webtv.net...
Joe wrote;
snip
"I would like to make my best fit equation fit the data a little better,
can anyone suggest a method to accomplish this?
I am currently in the process of measuring the actual-predicted errors
(absolute value of), but I am not really sure what to do with those
errors or how to minimize them.
Any suggestions, sources, web sites, or tutorials greatly appreciated."
_____________________________________
Re;
Why are you using linear regression to model what seems to be a
nonlinear relation? Try a polynomial fit; quadratic, cubic, or even
higher order if you have the data...
Dan Akers
Hi Dan,
Thank you for the suggestions. As I said in my last post, I am using a
program called curvefit 1.3. It will automatically model over 30 equations
to the data. It does polynomials, quadratics, reciprocal logs, just about
any model you could think of. It even does interpolation (4 different types)
You can also define your own models and have it find the coefficients. I am
using 100-200 X,Y data pairs. So there can be several values of Y for
given value of X. Is that what you mean by nonlinear? When I look at some
of the other models, they seem to be the same. They can't predict to one
decimal place. Maybe I am expecting too much from these models.

Maybe. If there's variability in the data (which there clearly is),
then there's a limit to how accurate / precise the predictions can be.
It's easy to find a model that will predict the data you're using to fit
it (exactly), but that will usually be an atrocious model for predicting
'new' data. For some X you have several values for Y. Are these Y
values invariably within 0.1 of each other? If not, how could you ever
find a model that will guarantee to give predicted values within 0.1 of
the true value of Y?
Duncan 

Back to top 


Pavel314 science forum addict
Joined: 29 Apr 2005
Posts: 78

Posted: Wed Aug 03, 2005 12:06 am Post subject:
Re: "Americans tap wine over beer"



"Ray Calvert" <gsinews@sbcglobal.net> wrote in message
news:%tcFe.447$aT1.336@newssvr19.news.prodigy.com...
Quote:  I read the article but did not see anything indicating that people were
considering chemicals when they were picking their poison. Maybe I missed
something. I wonder about the numbers, though. They indicate Americans
drink about 24 gallons of beer and 2 gallons of wine per year. I assume
they mean for those who drink each, not an average of all Americans or of
all drinkers. That would indicate that people who drink beer drink about
235 beers a year while people who drink wine only drink about 50 glasses a
year. Wine drinkers must not reach for their alcohol of choice very often.
I certainly do more than my share by those standards.

50 glasses a year? That's less than one per week. I would guess that that's
the average for all Americans, not just wine drinkers.
From http://www.winexmagazine.com/archives/xercize.htm
"Per capita wine consumption in the United States in 1995 was: 2.13
gallons/adult"
which is close to your 2 gallons per year and for all adults, not just wine
drinkers. Of course, the average (mean) is more meaningful if you have the
standard deviation of the distribution. There's a lot of good statistical
information about wine consumption at
http://repositories.cdlib.org/cgi/viewcontent.cgi?article=1049&context=ucscecon
Table 2 gives the average and standard deviation of wine share but I'll have
to read this a few more times while sober to fully understand it. I
crossposted to alt.sci.math.probability and sci.stat.math to see if any of
the stat wizards there could read anything into this. It looks like the
US-Canada share is increasing while the standard deviation as a ratio to
mean is decreasing, indicating a tightening up of the wine consumption
trends. (I really like the wine consumption by latitude graphs.)
Another interesting fact from that site:
"The approximate ratio of beer advertising to wine advertising is: 10 to 1"
Which is in the neighborhood of your 24 gallons of beer to 2 gallons of wine
per year, or 12 to 1. It looks like it pays to advertise.
Paul 
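The "standard deviation as a ratio to mean" used above is the coefficient of variation; for the record, a short Python version:

```python
import math

def coeff_of_variation(xs):
    # Sample standard deviation divided by the mean.
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    return sd / m
```

A shrinking value over time means consumption is tightening around the mean, which is how the post reads the trend in Table 2.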

Back to top 


alan truelove science forum beginner
Joined: 09 Jun 2005
Posts: 8

Posted: Wed Jun 29, 2005 10:44 pm Post subject:
Re: Probability question



As promised, a correct solution (I hope) for the "three rows of objects"
problem. [The general problem (any number of rows of objects) is pretty
difficult.]
I haven't bothered to type up the derivation of the formulas but will
certainly do so if anyone wants it.
I would be glad to hear of any blunders, or of a better method.
I find it hard to believe that no one has solved this before, so I
will communicate with a few old Cambridge buddies.
To the original poster (rec.org.mensa) - see what you have started!
----
3 groups of n objects each are presented; select one object from each
group (a 'triad')
Given the first object, the (correct) other two are unique.
Repeat until all triads have been selected.
P(0,n,x) is the prob. of getting exactly x correct triads.
P(d,n,x) is the prob. of getting x triads correct when each group has
n 'good' objects plus d 'dummy' objects, none of which can form part
of a correct triad.
Results (for n up to 20), i.e. P(0,n,x):
Problem with 3 rows of objects:
Check   n = no. of objs   Probs for x = no. of correct triads
sum     in each row       (x = n down to 0)
1 2 0.25 0 0.75
1 3 0.028 0 0.25 0.722
1 4 0.002 0 0.031 0.181 0.786
1 5 2 zeros 0.002 0.018 0.157 0.822
1 6 3_zeros 0.001 0.013 0.137 0.849
1 7 4_zeros 0.001 0.01 0.121 0.868
1 8 6_zeros 0.008 0.109 0.883
1 9 7_zeros 0.006 0.098 0.896
1 10 8_zeros 0.005 0.09 0.905
1 11 9_zeros 0.004 0.082 0.913
1 12 10_zeros 0.003 0.076 0.92
1 13 11_zeros 0.003 0.071 0.926
1 14 12_zeros 0.003 0.066 0.931
1 15 13_zeros 0.002 0.062 0.936
1 16 14_zeros 0.002 0.058 0.94
1 17 15_zeros 0.002 0.055 0.943
1 18 16_zeros 0.002 0.052 0.946
1 19 17_zeros 0.001 0.05 0.949
1 20 18_zeros 0.001 0.047 0.951
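(A sanity check on the table above, added by the editor rather than taken from the post: the selection procedure is equivalent to matching rows 2 and 3 against row 1 by two independent uniform random permutations, and a triad is correct exactly when both permutations fix that position. A short Monte Carlo sketch in Python -- all function and variable names are my own:)

```python
import random
from collections import Counter

def triad_probs(n, trials=200_000, seed=1):
    """Estimate P(exactly x correct triads) for 3 rows of n objects.

    Model: rows 2 and 3 are matched to row 1 by independent uniform
    random permutations sigma and tau; triad i is correct iff
    sigma[i] == i and tau[i] == i.
    """
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        sigma = rng.sample(range(n), n)  # uniform random permutation
        tau = rng.sample(range(n), n)
        x = sum(1 for i in range(n) if sigma[i] == i and tau[i] == i)
        counts[x] += 1
    return {x: c / trials for x, c in counts.items()}
```

For n = 2 this reproduces 0.25 / 0 / 0.75 (for x = 2, 1, 0) to within sampling error; note x = n-1 is impossible, since n-1 correct triads force the last one, which matches the zero just after the leading entry in every row of the table.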
'match across 3 sets of 20
' alan j truelove July 1 05
'571 242 0153 'alan_truelove@hotmail.com
Imports System.Math
Public Class Form1
Inherits System.Windows.Forms.Form
#Region " Windows Form Designer generated code "
Public Sub New()
MyBase.New()
'This call is required by the Windows Form Designer.
InitializeComponent()
'Add any initialization after the InitializeComponent() call
End Sub
'Form overrides dispose to clean up the component list.
Protected Overloads Overrides Sub Dispose(ByVal disposing As Boolean)
If disposing Then
If Not (components Is Nothing) Then
components.Dispose()
End If
End If
MyBase.Dispose(disposing)
End Sub
'Required by the Windows Form Designer
Private components As System.ComponentModel.IContainer
'NOTE: The following procedure is required by the Windows Form Designer
'It can be modified using the Windows Form Designer.
'Do not modify it using the code editor.
Friend WithEvents Button1 As System.Windows.Forms.Button
<System.Diagnostics.DebuggerStepThrough()> Private Sub InitializeComponent()
Me.Button1 = New System.Windows.Forms.Button
Me.SuspendLayout()
'
'Button1
'
Me.Button1.Location = New System.Drawing.Point(24, 32)
Me.Button1.Name = "Button1"
Me.Button1.Size = New System.Drawing.Size(144, 64)
Me.Button1.TabIndex = 0
Me.Button1.Text = "Button1"
'
'Form1
'
Me.AutoScaleBaseSize = New System.Drawing.Size(5, 13)
Me.ClientSize = New System.Drawing.Size(292, 266)
Me.Controls.Add(Me.Button1)
Me.Name = "Form1"
Me.Text = "Form1"
Me.ResumeLayout(False)
End Sub
#End Region
Public Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
Dim i, j, iii, jjj, ii, jj As Integer
ReDim P(21, 100, 100)
ReDim pstr(100)
ReDim ptot(100)
nmax = 20
dmax = 20
myline2 = " Problem with 3 rows of objects:"
ts6.WriteLine(myline2)
myline2 = "Check no. of objs Probs for x=no.of correct triads"
ts6.WriteLine(myline2)
myline2 = " Sum in each row x = n to 0"
ts6.WriteLine(myline2)
'P(0,n,x) = 1/n^2 of [ P(0,n-1,x-1) + 3(n-1) P(1,n-2,x) + (n-1)(n-2) P(2,n-3,x) ]
'P(1,n,x) = 1/(n+1)^2 of [ P(0,n,x) + 3n P(1,n-1,x) + n(n-1) P(2,n-2,x) ]
'P(2,n,x) = 1/(n+2)^2 of [ 4 P(1,n,x) + 5n P(2,n-1,x) + n(n-1) P(3,n-2,x) ]
'P(3,n,x) = 1/(n+3)^2 of [ 9 P(2,n,x) + 7n P(3,n-1,x) + n(n-1) P(4,n-2,x) ]
'P(d,n,x) = 1/(n+d)^2 of [ d^2 P(d-1,n,x) + (2d+1) n P(d,n-1,x) + n(n-1) P(d+1,n-2,x) ]
' P(d,n,x) is the prob of getting x correct triads, choosing from n 'good' entries, and
' d 'dummy' entries on each row (dummies cannot match up)
P(0, 1, 1) = 1 'if only one col, pr get 1 triad is 1
P(0, 1, 0) = 0 'if only 1 col, pr get 0 triads is nil
For n = 1 To nmax
P(0, n, n - 1) = 0
' prob of getting exactly n-1 triads out of n cols (no dummies) is zero
Next
'    
For d = 0 To dmax
P(d, 0, 0) = 1
'if there are 0 good entries (any no of dummies) the prob of getting 0 triads is 1
P(d, 1, 1) = (d + 1) ^ (-2)
'if there is 1 good entry (d dummies) the prob of getting the 1 triad is 1/(d+1)^2
P(d, 1, 0) = 1 - (d + 1) ^ (-2)
'if there is 1 good entry (any no of dummies) the prob of getting 0 triads is 1 - prob of getting the 1 triad
For n = 1 To nmax
P(d, 0, n) = 0 'obviously the prob of getting >0 triads from 0 good entries is nil
'prob of getting all n triads out of n good cols is
'1/(n+d) * 1/(n+d-1) * .. * 1/(d+1), i.e. we build (d+n)!/d!
P(d, n, n) = (d + 1)
For jj = 2 To n
P(d, n, n) = P(d, n, n) * (jj + d)
Next
'we need 1 over the above, squared
P(d, n, n) = P(d, n, n) ^ (-2)
Next
Next
'             
For n = 2 To 20
'0 dummies, i.e. P(0,n,x) are the probs of getting x correct triads out of n objects (in each row)
d = 0
mystr = Round(P(d, n, n), 3)
xx = 0 'counts zeros for print line
If mystr = "0" Then
pstr(d) = " 2 zeros "
xx = 2
Else
pstr(d) = mystr & " 0 "
End If
ptot(d) = P(d, n, n)
'note that P(d,n,n) was calculated above
'and P(d,n,n-1) is always zero
For x = n - 2 To 0 Step -1
P(d, n, x) = 0 'ready to calculate this value
If (x > 0) Then P(d, n, x) = P(d, n - 1, x - 1) / n ^ 2
If n > 1 Then P(d, n, x) = P(d, n, x) + _
3 * (n - 1) * P(d + 1, n - 2, x) / n ^ 2
If n > 2 Then P(d, n, x) = P(d, n, x) + _
(n - 1) * (n - 2) * P(d + 2, n - 3, x) / n ^ 2
mystr = Round(P(d, n, x), 3)
If mystr = "0" Then
xx = xx + 1
pstr(d) = xx & "_zeros "
Else
pstr(d) = pstr(d) & " " & mystr
End If
ptot(d) = ptot(d) + P(d, n, x)
Next
myline = ptot(0) & " =sum; n = no. of obj. each row = " _
& n & " ;probs(no. corr. triads, runs n to 0) " & pstr(0)
myline2 = Round(ptot(0), 3) & " " & n & " " & pstr(0)
'the string of probs for x = n down to x = 0
ts6.WriteLine(myline2)
'MsgBox(myline)
'    
'We now calculate the P(d,n,x) for d>0, which quantities are defined
'at the beginning. These are just used in the iterative calculation.
For d = 1 To dmax - 1 'probably not necessary for d to go this high
pstr(d) = ""
ptot(d) = 0
For x = n To 0 Step -1
P(d, n, x) = d ^ 2 * P(d - 1, n, x) / (n + d) ^ 2
If x < n Then P(d, n, x) = P(d, n, x) + _
(2 * d + 1) * n * P(d, n - 1, x) / (n + d) ^ 2
If n > 1 And x < n - 1 Then P(d, n, x) = P(d, n, x) + _
n * (n - 1) * P(d + 1, n - 2, x) / (n + d) ^ 2
ptot(d) = ptot(d) + P(d, n, x)
pstr(d) = pstr(d) & " " & Round(P(d, n, x), 3)
Next 'end of the x loop
'MsgBox(ptot(d) & " " & d & " " & n & " d, n " & pstr(d))
Next 'end of d = 1 to dmax-1 loop
Next 'end of the loop n = 2 to 20
ts6.Close()
End 'stop the program
End Sub
End Class
'      
Module global
Public ts6 As System.IO.StreamWriter = _
System.IO.File.CreateText("C:/mathpuzz/results.doc")
Public P(,,), ptot() As Single
Public nfact, nplusdfact, dfact As Single
Public d, dmax, n, nmax, x, xx, kkk As Integer
Public pstr(0), mystr, myline, myline2 As String
End Module
'         



Scott Hemphill science forum beginner
Joined: 09 Jun 2005
Posts: 21

Posted: Thu Jun 16, 2005 11:27 pm Post subject:
Re: Probability question, was Re: Mensa Forgot Another Possibility!



Jon Haugsand <jonhaug@ifi.uio.no> writes:
Quote:  * Scott Hemphill
I'll put in a plug for _Concrete Mathematics_. It is my favorite math
text. I discovered it after I was out of school, and it contains all
the useful math I had somehow missed.
Yes I know. Somehow, I only read first half of Knuth's vol 1 in "The
Art of Computer Programming". Your first derivation of the
derangement formula is digested with satisfaction. Thanks. Made me
look into convolution in general where I found the following joyful
web site: http://www.jhu.edu/~signals/discreteconv/

Thanks for the link!
Scott

Scott Hemphill hemphill@alumni.caltech.edu
"This isn't flying. This is falling, with style."  Buzz Lightyear 



Jon Haugsand science forum beginner
Joined: 03 May 2005
Posts: 37

Posted: Thu Jun 16, 2005 7:58 pm Post subject:
Re: Probability question, was Re: Mensa Forgot Another Possibility!



* Scott Hemphill
Quote:  I'll put in a plug for _Concrete Mathematics_. It is my favorite math
text. I discovered it after I was out of school, and it contains all
the useful math I had somehow missed.

Yes I know. Somehow, I only read first half of Knuth's vol 1 in "The
Art of Computer Programming". Your first derivation of the
derangement formula is digested with satisfaction. Thanks. Made me
look into convolution in general where I found the following joyful
web site: http://www.jhu.edu/~signals/discreteconv/

Jon Haugsand
Dept. of Informatics, Univ. of Oslo, Norway, mailto:jonhaug@ifi.uio.no
http://www.ifi.uio.no/~jonhaug/, Phone: +47 22 85 24 92 



Scott Hemphill science forum beginner
Joined: 09 Jun 2005
Posts: 21

Posted: Thu Jun 16, 2005 2:10 pm Post subject:
Re: Probability question, was Re: Mensa Forgot Another Possibility!



Jon Haugsand <jonhaug@ifi.uio.no> writes:
Quote:  * Scott Hemphill
That's why I gave a reference. There are some smart and easy explanations
depending on your background. I'll give one below which uses generating
functions.
Thanks. I actually have a lot of background, but somehow generating
functions have escaped my education. Time to look into it.

I'll put in a plug for _Concrete Mathematics_. It is my favorite math
text. I discovered it after I was out of school, and it contains all
the useful math I had somehow missed.
You can also solve the original problem using an inversion formula.
If you have a function f defined in terms of g:
f(n) = sum(k=0,n) (-1)^k choose(n,k) g(k)
Then g is implicitly defined in terms of f. This formula can be inverted
to solve for g:
g(n) = sum(k=0,n) (-1)^k choose(n,k) f(k)
There are a variety of ways to prove this, but I like to use generating
functions, because they help me to prove identities involving "choose()"
(binomial coefficients) that I can't remember.
The generating function of a sequence is a way of dealing with the sequence
all at once. The generating function of a sequence (a0, a1, a2, ...) is
a0 + a1*z + a2*z^2 + ....
When you multiply two functions together:
(a0 + a1*z + a2*z^2 + ...) * (b0 + b1*z + b2*z^2 + ...)
you get:
(a0*b0 + (a0*b1+a1*b0)*z + (a0*b2+a1*b1+a2*b0)*z^2 + ...)
The coefficient of z^n is sum(k=0,n) a_k*b_(n-k), so this function is
the generating function of the sequence c_n = sum(k=0,n) a_k*b_(n-k).
The sequence c_n is the convolution of a_n and b_n. Whenever you have
a sum of terms with some factors involving k and some factors involving
n-k, you can recognize a convolution.
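(A minimal numeric illustration of this, added by the editor: multiplying two polynomials produces exactly the convolution of their coefficient sequences.)

```python
def convolve(a, b):
    """c_n = sum_{k=0}^{n} a_k * b_{n-k}: coefficients of the product."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + 2z) * (3 + z + 4z^2) = 3 + 7z + 6z^2 + 8z^3
assert convolve([1, 2], [3, 1, 4]) == [3, 7, 6, 8]
```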
Starting with
f(n) = sum(k=0,n) (1)^k choose(n,k) g(k)
f(n) = sum(k=0,n) (1)^k n!/((nk)!k!) g(k)
f(n)/n! = sum(k=0,n) (1)^k g(k)/k! * 1/(nk)!
Let F(z) be the generating function of f(n)/n! and G(z) be the generating
function of g(n)/n!. The right half of this last equation is the convolution
of the sequences (1)^n g(n)/n! and 1/n!.
The generating function of (1)^n g(n)/n! is G(z) and the generating
function of 1/n! is e^z.
So
F(z) = G(z) e^z
Solving for G(z) by substituting z > z:
G(z) = F(z) e^z
This is the same equation as the previous one with the roles of F and G
reversed! So now we can follow the same steps we used to get the previous
equation in reverse order to arrive at:
g(n) = sum(k=0,n) (1)^k choose(n,k) f(k)
(Using this inversion to solve the derangement problem is left as an
exercise.)
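(Editor's numeric check of the inversion, and one way through the exercise; the helper names are mine, not Scott's. Classifying the n! permutations by their number k of fixed points gives n! = sum_k C(n,k) D(k), which fits the inversion with f(n) = n! and g(k) = (-1)^k D(k).)

```python
from math import comb, factorial

def binom_transform(a, n):
    """t(n) = sum_{k=0}^{n} (-1)^k C(n,k) a(k) -- its own inverse."""
    return sum((-1) ** k * comb(n, k) * a(k) for k in range(n + 1))

# The transform is an involution: applying it twice recovers the sequence.
g = lambda k: k ** 3 + 2 * k          # arbitrary test sequence
f = lambda n: binom_transform(g, n)   # f defined from g as in the post
assert [binom_transform(f, n) for n in range(8)] == [g(n) for n in range(8)]

# Derangement exercise: inverting f(n) = n!, g(k) = (-1)^k D(k) yields
# D(n) = (-1)^n * sum_k (-1)^k C(n,k) k!
def derangements(n):
    return (-1) ** n * binom_transform(factorial, n)

# derangements(n) for n = 0..5 -> 1, 0, 1, 2, 9, 44
```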
Scott

Scott Hemphill hemphill@alumni.caltech.edu
"This isn't flying. This is falling, with style."  Buzz Lightyear 


