Richard J. Fateman science forum addict
Joined: 04 May 2005
Posts: 81




Klueless science forum beginner
Joined: 30 May 2005
Posts: 19

Posted: Fri Jun 23, 2006 5:41 am Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



"Richard J. Fateman" <fateman@eecs.berkeley.edu> wrote in message news:e7f7c9$1bih$1@agate.berkeley.edu...
Quote:  http://www.cs.berkeley.edu/~fateman/papers/interval.pdf

Near the bottom of page 2, I think you need: $[w_0,w_1]\supseteq[z_0,z_1]$.
There is a solution in this paper, NInterval = 4-tuple, but I am left unclear what
the problem is. There is the *possible* advantage that the Fateman interval
system expressions may be subjected to more clever algebraic simplification
by virtue of the 4th coordinate memory expressions. Is the proposed advantage
really demonstrated in any numerical routine of any consequence, e.g. calculation
of Sin(x), Newton-Raphson, etc.? I don't see it.
Imagine: Richard Fateman compares the results of using the 2NInterval,
3NInterval, 4NInterval versions of selected numerical algorithms with each other
given the same 2NInterval or 1NInterval inputs (1NInterval = ordinary floats).
While the 4NInterval *might* produce more precise results at a fixed working
precision, it is expected the burden of extra simplification would slow it down
compared to 2NInterval arithmetic. The 2NInterval arithmetic, instead of using
the 4th coordinate memory expressions, would just rely on a higher temporary
working precision, to try to beat the 4NInterval arithmetic competitor. The higher
temporary working precision means slower operations too, but maybe not as
slow as the algebraic-simplification slowdowns in the 4NInterval calculations.
Infinity. Intervals of the type [x,infinity) or (-infinity,x] union [y,infinity).
Relevant web site: http://www.cs.utep.edu/interval-comp/
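The dependency problem at the heart of this thread is easy to reproduce with ordinary 2-tuple interval arithmetic (a minimal sketch, not code from the paper; the `Interval` class and all names are invented):

```python
# Minimal 2-tuple interval arithmetic, illustrating the dependency problem
# that the 4-tuple proposal targets. Hypothetical sketch, not rigorous
# (no outward rounding).

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0, 1)
one = Interval(1, 1)
# Naive evaluation of x*(1-x): the two occurrences of x are treated as
# independent, so we get [0,1]*[0,1] = [0,1] ...
naive = x * (one - x)
print(naive)
# ... although the true range of t*(1-t) for t in [0,1] is [0, 1/4].
```

Any extra bookkeeping (the 4th coordinate, centered forms, affine arithmetic) is an attempt to shrink that overestimation.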



Christopher Creutzig science forum Guru Wannabe
Joined: 03 May 2005
Posts: 107

Posted: Fri Jun 23, 2006 7:27 am Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



Richard J. Fateman wrote:
While I do understand your goal to make the interval evaluations of
x*x and x^2 return identical results, I fail to see the mathematical
structure (as opposed to implementation, which you write a lot about)
behind your extended intervals. In my humble opinion, this is a lot of
loss for the rather slight gain to reduce the dependency problem in a
few select cases. I doubt your approach will be able to evaluate x^2+x+1
as tightly as (x+1/2)^2+3/4 without finding this representation
symbolically, to name just the standard example. Finding tight
enclosures for the range of a polynomial over an interval is known to be
NP-complete; see for example V. Kreinovich: “Range Estimation is NP-Hard
for \epsilon^2 Accuracy, Feasible for \epsilon^{2-\delta}”. I do not
see your four-tuples solving this problem, and most places where a CAS
resolves to numerical methods are not polynomial anyway.
Sure, the standard interval notation by itself does not lead to a ring
or something algebraically nice like it, but there are extensions
leading to one, such as “improper intervals,” cf. Popova, Ullrich,
“Simplification of Symbolic-Numerical Interval Expressions”. But for all
such extensions, as far as I know them, still x*x is a subset of x^2,
and often a proper subset. Getting more standard structure makes the
finding and the analysis of algorithms easier, which looks to me like a
good thing. That symbolic manipulation may change the interval
evaluation of an expression over the same input interval may or may not
be surprising, interesting, or useful, but I think it is unavoidable
(for reasonable processing times) in general and is less of a concern
than the much less controllable variations in floating point evaluations.
Apart from the chapter on intervals, I did not find new points in this
paper (but I couldn't cite a reference for the indexed infinities, which
I don't think are such a good idea, for very much the same reasons as
above; also note that, unless I missed a point, after
a:=\infty_1+\infty_2, b:=\infty_1+\infty_2, a-b still won't simplify to
0). True, a lot of simplifications carried out by just about any CAS are
wrong as soon as the variables take on values from outside the complex
numbers (a*b=b*a? Not for matrices ...), but still these simplifications
are important to perform whenever they are allowed, and in most cases,
it is hard to find out if they are allowed. In this situation, a CAS
designer must make arbitrary choices and most of them seem to have
reached a very similar conclusion: Perform all simplifications
automatically which are valid for all complex numbers for which the
expression is defined, provide some assumption mechanism for the user to
specify restrictions on the domain of variables, and perform additional
simplifications if you can deduce it is safe to do so. (And find a way
to deal with all those users complaining that they have to make
assumptions to get sqrt(x^2)=abs(x).)
Regards,
Christopher Creutzig 
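Christopher's standard example can be made concrete (a hedged sketch, not CAS code; `naive` and `centered` are invented helper names): evaluating x^2+x+1 with two occurrences of x overestimates, while the completed square (x+1/2)^2+3/4, with a single occurrence, is exact:

```python
# Sketch of the x^2+x+1 example: the same function evaluated over an
# interval in two algebraically equal forms. Hypothetical helpers,
# no outward rounding.

def naive(lo, hi):
    # x^2 and x evaluated as if independent occurrences of x
    sq = (0.0, max(lo*lo, hi*hi)) if lo <= 0 <= hi else \
         tuple(sorted((lo*lo, hi*hi)))
    return (sq[0] + lo + 1, sq[1] + hi + 1)

def centered(lo, hi):
    # (x+1/2)^2 + 3/4: the single occurrence of x gives a tight range
    a, b = lo + 0.5, hi + 0.5
    sq = (0.0, max(a*a, b*b)) if a <= 0 <= b else tuple(sorted((a*a, b*b)))
    return (sq[0] + 0.75, sq[1] + 0.75)

print(naive(-1, 0))     # (0.0, 2.0): overestimates
print(centered(-1, 0))  # (0.75, 1.0): the exact range on [-1,0]
```

The catch, as noted above, is that finding such a representation in general is a symbolic (and in general NP-hard) problem, not an arithmetic one.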



robert.dodier@gmail.com science forum beginner
Joined: 02 Oct 2005
Posts: 26

Posted: Fri Jun 23, 2006 5:37 pm Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



Richard J. Fateman wrote:
IMNSHO interval computations are a weak approximation to
what we really want, which is a probability distribution over the
value of an expression. Such a result is difficult and/or expensive,
but in any event, that is what approximations should be compared to.
The right way to go about this, again IMNSHO, is essentially a whole
expression probabilistic analysis. The program BUGS [1] is not too
far from what we want here. However its computations are strictly
numerical (Markov chain Monte Carlo); it would be better to return
symbolic results when possible.
Laws of probability dictate what to do to combine partial results.
An automatable analysis can determine whether the information at
hand is sufficient or whether additional information (in the form of
dependencies and/or distributions) is needed to compute a specified
result. In the case of x * y, it is necessary to specify how x and y
are related (or that they are not related); a correct probabilistic
analysis cannot be carried out without that information.
Intervals, by construction, keep only the support of a probability
distribution and don't remember where is the mass. But it seems
possible to me that intervals could at least get the dependency
stuff right. It looks like Fateman's proposal above is
heading in that direction.
FWIW
Robert Dodier
[1] http://www.mrc-bsu.cam.ac.uk/bugs
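Robert's distinction between the support and the mass of a distribution can be illustrated with a small Monte Carlo experiment (stdlib only, purely illustrative):

```python
# Sketch: intervals keep only the support of a distribution, and
# dependency information changes both the support and where the mass is.
import random

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100000)]
ys = [random.uniform(-1, 1) for _ in range(100000)]

dependent = [x * x for x in xs]                 # y == x: product is x^2
independent = [x * y for x, y in zip(xs, ys)]   # x, y drawn independently

# Naive interval arithmetic gives [-1,1]*[-1,1] = [-1,1] in BOTH cases:
# the correct support only in the independent case, and no information
# about where the mass lies (the independent product concentrates near 0).
print(min(dependent), max(dependent))      # roughly 0 ... 1
print(min(independent), max(independent))  # roughly -1 ... 1
```

A correct probabilistic analysis needs the dependency structure as input, exactly as the post argues for x * y.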



rjf science forum beginner
Joined: 05 May 2006
Posts: 5

Posted: Fri Jun 23, 2006 6:26 pm Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



Thanks for your comments.
The disadvantage of computing tighter intervals by the methods
proposed (and I cannot claim to be the first to find any of the methods
that I mention) is that the individual computation of the result of an
operation, say + or *, is longer. Probably much longer.
The advantage is that the tighter intervals may lead to faster
convergence of certain sequences of operations. In particular, finding
zeros of functions by a so-called NB method (Newton Iteration +
Bisection) can take far fewer steps, and thus may overcome the
increased cost per operation. Imagine you are doing Newton Iteration
on interval x[i] and the next step toward the answer x[i+1] you get is
WIDER. If the initial interval x[0] encloses the zero, so does x[i]
and so does x[i+1]. Compute the intersection of x[i] and x[i+1]. That
is no wider than either of those, and still contains the zero. So use
that. Or bisect the interval x[i] and use the information that the
zero is in one or the other half. Anyway, some of the references
claim, and have examples that show, that tighter bounds, even costing
more, can give answers faster. Increasing the precision of the
endpoints is unlikely to be of much use; decreasing the width is the
goal.
I guess I should make this clearer.
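A minimal sketch of the NB/intersection idea just described (hypothetical code; no outward rounding, so not a rigorous implementation):

```python
# One interval Newton step for f(x) = x^2 - 2 on [1,2], intersecting the
# Newton image with the current interval so the enclosure never widens.
# Illustrative sketch only; a real implementation rounds endpoints outward.

def newton_step(lo, hi):
    m = 0.5 * (lo + hi)
    fm = m * m - 2.0                     # f(m) for f(x) = x^2 - 2
    dlo, dhi = 2.0 * lo, 2.0 * hi        # f'([lo,hi]) = [2*lo, 2*hi]; lo > 0 here
    cands = (m - fm / dlo, m - fm / dhi)
    nlo, nhi = min(cands), max(cands)
    # intersect with the current interval: no wider than before,
    # and still contains any zero that x[i] contained
    return max(lo, nlo), min(hi, nhi)

x = (1.0, 2.0)
for _ in range(3):
    x = newton_step(*x)
print(x)  # a narrow interval around sqrt(2) = 1.41421356...
```

Tighter enclosures at each step shrink the interval faster, which is how the extra per-operation cost can pay for itself.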
The problem of finding the sharpest enclosure is provably hard in
general, but the problem is optimally solvable for quadratic univariate
polynomials (always) by completing the square, the example given in
Christopher's note. It is also possible to solve a multivariate
problem optimally in some cases by using linear programming.
I think that the fact that some task is in NP, or even known to be
exponential, cannot be used, all by itself, as a reason to dismiss it
as impractical. If that were the case, we would throw up our hands in
dismay at polynomial division (so-called "synthetic" division), since
problems like (x^n-1)/(x+1) have answers that are exponential in length
as a function of n.
The issue of noting that x*x is the same as x^2 is, so far as I can
tell, solved not only by my program but by the Taylor Method of Berz.
My program is amenable to a (far more costly) feature, which would
take the polynomial and find all roots within the interval, and compute
very tight bounds; even without such hacks, it finds optimal results
for expressions of the form a*x^n+b where a,b are scalars and n is an
integer. It could easily be fixed for completing the square, and in
fact I think I'll do that this afternoon.
Doing arithmetic on inf_1+inf_2 when it has been replaced by inf_3
would not be very good. Vital information has been lost. The time to
replace inf_1+inf_2 by a new named infinity is when (and only when)
there are no occurrences of inf_1 or inf_2 anywhere in the CAS. As
long as they are hanging around, it is imprudent to rename inf_1+inf_2.
The kind of information needed is fairly easy to obtain in a Lisp
system using "weak hash tables".
Regarding Robert's comment on arithmetic with distributions: it seems
to me the interval computation goal is to avoid catastrophes by
assuring, through guaranteed reliable operations, that certain
constraints hold. It would be a different computation to say that
certain catastrophes were more or less improbable, assuming (a) you
really DO know the probability distribution of all "inputs" and that
(b) you really DO know the combinational calculus for them. That
generally means you act as though the inputs are independent, although
you have no way of determining that.
The rules for computing with "evidence" have many variations (e.g.
Dempster-Shafer). To me, Bayesian probability theory is far more
comfortable, but without prior probabilities, it seems to me it will
have problems as well, and trying to put together a long logical chain
using a CAS may be misleading more than helpful. But this is not an
area I've studied.
Thanks again for the comments!!
RJF 



Christopher Creutzig science forum Guru Wannabe
Joined: 03 May 2005
Posts: 107

Posted: Sat Jun 24, 2006 6:53 pm Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



rjf wrote:
Quote:  The disadvantage of computing tighter intervals by the methods
proposed (and I cannot claim to be the first to find any of the methods
that I mention) is that the individual computation of the result of an
operation, say + or *, is longer. Probably much longer.

What seems to me to be missing is a comparison with established
methods to the same end, especially centered and multicentered forms
such as Taylor forms. You mention Taylor forms, but make no real
comparison with them; as I read it, you propose to include Taylor form
calculations (or polynomials over the whole initial range, which is
clearly inferior to a centered form in terms of both computation time
and overestimation, unless the polynomial is again evaluated as a
centered form) into the low-level arithmetic operators? I fail to see
why calculating lots of intermediate intervals that are never used is
useful. Or are you trying to get rid of an explicit function call
“evaluate this over that interval, using that center”?
Quote:  The advantage is that the tighter intervals may lead to faster
convergence of certain sequences of operations. In particular, finding

Sure. I'm certainly not talking against reducing overestimation.
I just find automatic differentiation easier to grasp, analyze
mathematically, and use than what I believe to have understood from your
proposal.
Quote:  increased cost per operation. Imagine you are doing Newton Iteration
on interval x[i] and the next step toward the answer x[i+1] you get is
WIDER. If the initial interval x[0] encloses the zero, so does x[i]

When we talk about Newton iteration in an interval context, it is
usually understood that x[i+1] = N(x[i]) intersect x[i], where N is the
Newton operator. So while I understand your point, strictly speaking,
this situation cannot arise. (And just for completeness: your argument
does not just hold for exactly one zero. It's true for any number of
zeroes, including none.)
Note that Newton iteration is a perfect example where automatic
differentiation is much faster than extending the fundamental interval
arithmetics: Typically, you'd find centered forms for f and f' (both at
once, with one run of AD, obviously) and reuse them a couple of times
before recalculating with a new center (strictly speaking, the “center”
need not even be in the interval considered) to find sharper remainder
bounds.
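The mean-value (first-order centered) form Christopher describes can be sketched with dual-number forward-mode AD (all names invented; the endpoint evaluation of f' below is valid only because f' is monotone for this f, so this is an illustration, not a general enclosure routine):

```python
# Sketch: enclose f over [lo,hi] by f(c) + f'([lo,hi])*([lo,hi] - c),
# with f' obtained by forward-mode automatic differentiation.
# Hypothetical, non-rigorous (no outward rounding).

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):
    return x * x + x + 1          # the running example x^2 + x + 1

def mean_value_form(lo, hi):
    c = 0.5 * (lo + hi)
    fc = f(Dual(c, 1.0)).val
    # enclose f'(x) = 2x+1 over [lo,hi] via the AD derivative at the
    # endpoints (sufficient here because f' is monotone)
    d1 = f(Dual(lo, 1.0)).der
    d2 = f(Dual(hi, 1.0)).der
    # f(c) + f'(X)*(X - c), with X - c symmetric of radius (hi-lo)/2
    spread = max(abs(d1), abs(d2)) * 0.5 * (hi - lo)
    return fc - spread, fc + spread

print(mean_value_form(-1.0, 0.0))  # tighter than the naive (0.0, 2.0)
```

On [-1,0] this gives (0.25, 1.25), already tighter than naive evaluation, and the centered form sharpens further as the interval shrinks.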
Quote:  more, can give answers faster. Increasing the precision of the
endpoints is unlikely to be of much use; decreasing the width is the
goal.

True. If I said something different, I certainly did not mean it.
Quote:  I think that the fact that some task is in NP, or even known to be
exponential, cannot be used, all by itself, as a reason to dismiss it
as impractical. If that were the case, we would throw up our hands in

Certainly not. But reducing interval computation problems to
polynomials and regarding those as “simple” is a very attractive
pitfall. Sure, polynomials are simpler than many other functions, I just
wanted to point out that their simplicity may not suffice.
Symbolic computations have a lot to give to interval analysis. For
example, it is numerically impossible to show the boundedness of
sin(x)/x, but a CAS can do that. Taylor forms can do that, if employed
in a careful manner. I don't wish to imply that I didn't find the
connection between the two subjects interesting, quite the contrary.
Quote:  The issue of noting that x*x is the same as x^2 is, so far as I can
tell, solved not only by my program but by the Taylor Method of Berz.

As well as by any other centered form, I think. (Yet it is not
necessarily true if x is an interval or some other set with more than
one element. But that's a different topic; we are talking about
enclosing the evaluation of a complexvalued function of a complex
variable for all values in an interval at the moment, where x*x=x^2.)
Quote:  Doing arithmetic on inf_1+inf_2 when it has been replaced by inf_3
would not be very good. Vital information has been lost. The time to
replace inf_1+inf_2 by a new named infinity is when (and only when)
there are no occurrences of inf_1 or inf_2 anywhere in the CAS. As

I misread that part as not having any other references anywhere in the
current expression, my apologies. In that case, my quarrel is with the
expression swell this proposal will create.
Quote:  long as they are hanging around, it is imprudent to rename inf_1+inf_2.
The kind of information needed is fairly easy to obtain in a Lisp
system using "weak hash tables".

Shouldn't inf_1+inf_2 be renamed at the moment they only ever appear
in this combination? (And how do weak hash tables help? After all, you
still need to have at least one reference to inf_1 and inf_2 in some
expression to make any useful change.)
Quote:  Regarding Robert's comment on arithmetic with distributions: it seems
to me the interval computation goal is to avoid catastrophes by
assuring, through guaranteed reliable operations, that certain
constraints hold. It would be a different computation to say that

That is one of the goals, sure. As far as I know, it has been the
raison d'être for intervals in the beginning. Another important
application is to perform some computation (say, a simulation) for a
range of possible input values, all at once. (There has been a paper on
simulating the statics of a rack under all permissible load
distributions, for example. I'll look up the reference if anyone needs
it. I just hope I used understandable and reasonably correct English
terms ...) Things like enclosing all roots of an expression is a third
application.
If intervals are not what Robert wants, fine. As far as I can tell
(and I haven't been around in the sixties to watch), they were never
meant to model probability distributions, although some people have
devised ways of using multiple intervals to get a rough approximation of
probability distributions. There certainly are applications for which,
for some people, intervals are a sufficient tool.
High Regards,
Christopher 



Craig Carey science forum beginner
Joined: 18 Jun 2005
Posts: 41

Posted: Sun Jun 25, 2006 7:22 pm Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



On Thu, 22 Jun 2006 15:59:21 -0700, "Richard J. Fateman" wrote:
The topic of doing bounds arithmetic can be an instance of a wider
theory. That wider theory can implement type checking.
An aim is to not add a lot of empty function declarations to
existing symbolic algebra code.
Eg. if these are defined in the before state:
function "*" (X, Y : Expr) return Expr is ... ;
function "sin" (X, Y : Expr) return Expr is ... ;
Then, after X is made more complex, e.g. by representing bounded types
or returning units of horizontal space only if that is the input,
there is no need to rewrite "sin", since it can delegate to "*" the
decisions on what to do with the two taint fields.
Some made-up syntax to make the primitive symbolic type more complex:
type Phys_DimEnum is (ET, ED);
subtype Distance is Expr but tainted with ED;
subtype Time is Expr but tainted with ET;
Designers of SQL databases might regard it as adding a new field.
Here is an example of SQL
SELECT STUDENT.NAME, STUDENT.ROLLNO, STUDENT.GRADE FROM STUDENT
WHERE STUDENT.GRADE = 9.0;
(From a webpage of the Indian Institute of Technology, Bombay,
describing the DELite database, which uses little memory:
http://www.cse.iitb.ac.in/infolab/projects/DELite/index.html
About 942 calls to malloc, which seems unsafe for Windows,
where allocating is slower.)
Ada 95 offers a little of what might be sought: if X is a small
record type then X'Class can describe bigger records containing
a field of type X. So "sin" could be written as this:
function "sin" (X, Y : Expr'Class) return Expr'Class is ...
Ada actually allows that despite how the size of the returned
record seems unknown.
When I tried that once, the source code started to become verbose
with type conversions.
I shall write Expr^ for a symbolic type containing unknown extra
database fields (instead of "Expr'Class").
function Moved (Location : Expr^[1]; Time : Expr^[2])
return Expr^[1];
That forces the result to have the type of the 1st parameter.
Run time or compile time checking would prevent the 2 parameters
from being both tainted (=widened) and equal, eg.
T : ... Time type;
Z := Moved (T, T);  -- Not allowed
A new overloaded "*" multiply function header would be written
giving two:
function "*" (X, Y : Expr) return Expr;
function "*" (X, Y : Bounded_Expr) return Expr;  -- Not foreseen
The aim of not adding a lot of overloaded function declarations
that maybe do nothing succeeds:
function "sin" (X, Y : Expr^[1]) return Expr^[1];
I wrote this after noting that SQL can extract records. But for
a symbolic algebra program, select has a problem with
non-numeric Booleans (e.g. "0<p"). That suggests an is_numeric
taint. Also there is a need for an 'answer-maybe-wrong' taint
(added field).
Craig Carey 
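Craig's delegation idea, rendered roughly in Python rather than Ada (every name here is invented): the arithmetic operators carry the taint fields along, so a function like "sin" written against the base type never needs rewriting when the type grows new fields:

```python
# Sketch of taint propagation via operator overloading. Hypothetical
# names; the point is that "sin" delegates taint decisions to "*" et al.
import math

class Expr:
    def __init__(self, value, taints=frozenset()):
        self.value = value
        self.taints = frozenset(taints)

    def __mul__(self, other):
        # "*" decides how taint fields combine; here, simply their union
        return Expr(self.value * other.value, self.taints | other.taints)

def sin_expr(x):
    # written once against plain Expr; taint handling lives in the operators
    return Expr(math.sin(x.value), x.taints)

d = Expr(3.0, {"ED"})   # a Distance-tainted quantity
t = Expr(2.0, {"ET"})   # a Time-tainted quantity
print((d * t).taints)   # union of both taints
print(sin_expr(d).taints)
```

A static approach (Ada's tagged types, as sketched above) can move these checks to compile time; the dynamic version trades that for brevity.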



Richard J. Fateman science forum addict
Joined: 04 May 2005
Posts: 81

Posted: Tue Jun 27, 2006 10:21 pm Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



Christopher Creutzig wrote:
Quote:  rjf wrote:
The disadvantage of computing tighter intervals by the methods
proposed (and I cannot claim to be the first to find any of the methods
that I mention) is that the individual computation of the result of an
operation, say + or *, is longer. Probably much longer.
What seems to me to be missing is a comparison with established methods
to the same end, especially centered and multicentered forms such as
Taylor forms. You mention Taylor forms, but make no real comparison with
them; as I read it, you propose to include Taylor form calculations (or
polynomials over the whole initial range, which is clearly inferior to a
centered form in terms of both computation time and overestimation,
unless the polynomial is again evaluated as a centered form) into the
low-level arithmetic operators? I fail to see why calculating lots of
intermediate intervals that are never used is useful. Or are you trying
to get rid of an explicit function call “evaluate this over that
interval, using that center”?

In most cases the polynomial will have to be reevaluated. Whether as a
centered form or not is, I think, irrelevant. For example, let
x=[0,1]. Then compute x*(1-x). A naive computation would lead to the
result [0,1]*[0,1] -> [0,1].
In the system I've written, x*(1-x) would be computed by taking the
associated polynomials, multiplying them together to get -x^2+x, and
then, since this is univariate quadratic, completing the square to get
1/4-(x-1/2)^2, then evaluating that to get [0,1/4].
If the intermediate forms are not needed, I have no problem with a
primitive library call that says
polynomial_evaluate_at_interval(....some polynomial..., ...some
interval...)
It is certainly one of the routines already programmed. There are
various more or less elaborate ways of programming this.
A Taylor method of appropriate degree would produce the same result.
A Taylor method of lower degree would produce a result that would
possibly be wider.
.... snip...
If f is a polynomial, you hardly need AD to compute f'. A simple
recurrence computing both f and f' can easily be found. In fact, a
simple computation producing a rigorous bound on the error in f(x) can
be produced as well, in a time comparable to evaluating f(x). So if
computing f(x) takes time T, you can get f(x), f'(x), and error(f(x))
in about 3T.
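The recurrence alluded to might look like this (a sketch with invented names; the error estimate uses the standard majorant of absolute coefficient values, a Higham-style bound, not necessarily the tightest one):

```python
# One Horner pass yielding p(x), p'(x), and a rough rounding-error bound,
# at roughly 3x the cost of evaluating p(x) alone. Sketch only; a
# production bound would round the majorant outward.
import sys

def horner_with_derivative(coeffs, x):
    """coeffs = [a_n, ..., a_1, a_0], highest degree first."""
    u = sys.float_info.epsilon / 2          # unit roundoff
    p, dp, s = coeffs[0], 0.0, abs(coeffs[0])
    for a in coeffs[1:]:
        dp = dp * x + p          # p'(x), by differentiating the recurrence
        p = p * x + a            # p(x), the standard Horner step
        s = s * abs(x) + abs(a)  # sum of |a_i| |x|^i, for the error bound
    n = len(coeffs) - 1
    gamma = 2 * n * u / (1 - 2 * n * u)
    return p, dp, gamma * s      # |computed - exact| <~ gamma * s

# p(x) = x^3 - 2x + 1 at x = 2: p(2) = 5, p'(2) = 3*4 - 2 = 10
p, dp, err = horner_with_derivative([1.0, 0.0, -2.0, 1.0], 2.0)
print(p, dp, err)
```

All three values come out of a single loop, which is the "about 3T" claim in concrete form.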
Quote: 
more, can give answers faster. Increasing the precision of the
endpoints is unlikely to be of much use; decreasing the width is the
goal.
True. If I said something different, I certainly did not mean it.
I think that the fact that some task is in NP, or even known to be
exponential, cannot be used, all by itself, as a reason to dismiss it
as impractical. If that were the case, we would throw up our hands in
Certainly not. But reducing interval computation problems to
polynomials and regarding those as “simple” is a very attractive
pitfall. Sure, polynomials are simpler than many other functions, I just
wanted to point out that their simplicity may not suffice.

Computing nonpolynomial expressions using intervals can either be done
in a piecemeal naive fashion, say f(x)=x*log(x)*sin(x) each piece
separately computed, or in some kind of Taylor series around a chosen
point. But Taylor series are not much different from polynomials.
The Taylor methods of interval arithmetic literature seem to fix the
degree ahead of time. Otherwise they are much like what I describe.
(Though I just programmed another version of arithmetic with multiple
variables along the lines of "affine arithmetic" for intervals. I think
this is included, along with Taylor methods, in what you are calling
centered methods.)
Quote: 
Symbolic computations have a lot to give to interval analysis. For
example, it is numerically impossible to show the boundedness of
sin(x)/x, but a CAS can do that. Taylor forms can do that, if employed
in a careful manner. I don't wish to imply that I didn't find the
connection between the two subjects interesting, quite the contrary.
The issue of noting that x*x is the same as x^2 is, so far as I can
tell, solved not only by my program but by the Taylor Method of Berz.
As well as by any other centered form, I think.

As long as you have some symbolic representation of the dependencies,
you should be able to handle this case.
(Yet it is not
Quote:  necessarily true if x is an interval or some other set with more than
one element.

I think it is a mistake to use an interval to represent a set of
distinct points. A CAS should be able to represent sets so that they
are subject to only those operations appropriate for sets.
....snip..
In that case, my quarrel is with the
Quote:  expression swell this proposal will create.

If you do too much arithmetic with infinities and the expressions of
concern grow too large, you can choose to replace them with
[-inf,inf]+i*[-inf,inf] or something similar. Perhaps
undefined+i*undefined.
Quote: 
long as they are hanging around, it is imprudent to rename inf_1+inf_2.
The kind of information needed is fairly easy to obtain in a Lisp
system using "weak hash tables".
Shouldn't inf_1+inf_2 be renamed at the moment they only ever appear in
this combination? (And how do weak hash tables help? After all, you
still need to have at least one reference to inf_1 and inf_2 in some
expression to make any useful change.)
You are right, a weak hash table won't help directly. I think that 
something more subtle will be needed, which might be constructed via a
"hook" in a garbage collector. That is, certain objects would be marked
when there is only one pointer to them.
Quote: 
Thanks again for comments. 




Christopher Creutzig science forum Guru Wannabe
Joined: 03 May 2005
Posts: 107

Posted: Wed Jun 28, 2006 8:32 am Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



Richard J. Fateman wrote:
Quote:  In most cases the polynomial will have to be reevaluated. Whether as a
centered form or not is, I think, irrelevant. For example, let

Assuming you want to evaluate x^3+x^2-2*x+1, it does not look
reasonable to me to first find a tight enclosure of x^3+x^2 just to
throw the result away afterwards. That was my point when asking about
putting this dependency analysis in the overloading of operators. This
is not really an issue if the analysis takes place at compilation stage,
but with CAS I usually think of interactive and/or interpreted
evaluation of at least some of the formulas.
And with polynomials of degree > 2, I do think the evaluation type is
important. Sure you can find the extreme values for a univariate
polynomial, but most realworld problems are multivariate and this
approach may be too slow to be of use. Then again, it may not.
Quote:  If f is a polynomial, you hardly need AD to compute f'. A simple

True. But you are essentially doing it nevertheless. And for large
degrees, it may be worthwhile truncating the polynomial just like an
infinite Taylor series. It depends on the problem.
Quote:  be produced as well, in a time comparable to evaluating f(x). So if
computing f(x) takes time T, you can get f(x), f'(x), and error(f(x))
in about 3T.

I'm probably not interested in these three values. What I'm interested
in is an enclosure of f(center(x)) and one of f'(x), for x an interval,
to find a sharper enclosure of f(x) than with a direct computation.
Quote:  Computing nonpolynomial expressions using intervals can either be done
in a piecemeal naive fashion, say f(x)=x*log(x)*sin(x) each piece
separately computed, or in some kind of Taylor series around a chosen
point. But Taylor series are not much different from polynomials.

The method certainly uses polynomials. The difference is that we
actually get a polynomial in two variables, center(x) for the initial
part and x for the remainder bound.
Quote:  (Though I just programmed another version of arithmetic with multiple
variables along the lines of "affine arithmetic" for intervals. I think
this is included, along with Taylor methods in what you are calling
centered methods).

That name is actually not my invention, I've seen it used at least by
Neumaier in “Interval Methods for Systems of Equations” and by Jaulin et
al. in “Applied Interval Analysis”. (Both books are recommended for
everyone implementing interval arithmetics.) Affine arithmetic, as I
know the term, is not a centered form (although you can use centered
forms with AA), it is a different way of storing quantities, keeping
more structural information than intervals do.
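The structural bookkeeping that distinguishes affine arithmetic from plain intervals can be shown in a few lines (a sketch with invented names; only +, -, and interval conversion are implemented, since multiplication in AA needs an extra approximation term):

```python
# Minimal affine arithmetic: a quantity is x0 + sum(xi * eps_i) with each
# eps_i in [-1,1]; SHARED noise symbols record dependencies, so x - x
# collapses to exactly 0 instead of the naive [-1,1].

class Affine:
    def __init__(self, x0, terms=None):
        self.x0 = x0
        self.terms = dict(terms or {})   # noise symbol -> coefficient

    @classmethod
    def from_interval(cls, lo, hi, sym):
        # midpoint plus one fresh noise symbol of radius (hi-lo)/2
        return cls(0.5 * (lo + hi), {sym: 0.5 * (hi - lo)})

    def __add__(self, other):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0.0) + v
        return Affine(self.x0 + other.x0, t)

    def __sub__(self, other):
        t = dict(self.terms)
        for k, v in other.terms.items():
            t[k] = t.get(k, 0.0) - v     # shared symbols cancel here
        return Affine(self.x0 - other.x0, t)

    def interval(self):
        r = sum(abs(v) for v in self.terms.values())
        return (self.x0 - r, self.x0 + r)

x = Affine.from_interval(0.0, 1.0, "e1")
print((x - x).interval())   # exactly (0.0, 0.0): dependency tracked
# naive interval arithmetic would give [0,1] - [0,1] = [-1,1]
```

As the post says, this is not a centered form but a different storage scheme that keeps linear dependency information intervals discard.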
Quote:  The issue of noting that x*x is the same as x^2 is, so far as I can
tell, solved not only by my program but by the Taylor Method of Berz.
As well as by any other centered form, I think.
As long as you have some symbolic representation of the dependencies,
you should be able to handle this case.

It is not really necessary to have this information in symbolic form.
Automatic differentiation reduces the information stored to keep
computing times and memory consumption within tight bounds (no
intermediate expression swell, sounds like Nirvana from the CAS point of
view to me), but the reduced information should be sufficient for this
particular example.
Quote:  (Yet it is not
necessarily true if x is an interval or some other set with more than
one element.
I think it is a mistake to use an interval to represent a set of
distinct points. A CAS should be able to represent sets so that they
are subject to only those operations appropriate for sets.

I was not suggesting to represent discrete sets as intervals. What I
meant was that x*x=x^2 is in general wrong for sets with more than one
element, be it the set {-1, 1} or the uncountable set that is the
interval [-1,1].
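Concretely (a quick sketch of the set-valued reading):

```python
# For a set S with more than one element, {s*t : s,t in S} differs
# from {s*s : s in S}: the two occurrences choose independently.
S = {-1, 1}
x_times_x = {s * t for s in S for t in S}   # independent choices
x_squared = {s * s for s in S}              # one choice, squared
print(x_times_x)  # {1, -1}
print(x_squared)  # {1}
# Likewise for the interval [-1,1]: X*X = [-1,1] but X^2 = [0,1].
```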
Quote:  Thanks again for comments.

Thank you for the opportunity to see some unfinished thoughts
by someone other than ourselves.
Regards,
Christopher 



rjf science forum beginner
Joined: 05 May 2006
Posts: 5

Posted: Wed Jun 28, 2006 6:31 pm Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



Christopher Creutzig wrote:
Quote:  Assuming you want to evaluate x^3+x^2-2*x+1, it does not look
reasonable to me to first find a tight enclosure of x^3+x^2 just to
throw the result away afterwards. That was my point when asking about
putting this dependency analysis in the overloading of operators. This
is not really an issue if the analysis takes place at compilation stage,
but with CAS I usually think of interactive and/or interpreted
evaluation of at least some of the formulas.

I agree that you can save effort by looking at a whole expression and
compiling it into an interval evaluation without intermediate steps.
The right way to do this is to try to find one (or more) formulas that
minimize the width, transforming expressions into "single-use
expressions" when possible.
Quote: 
And with polynomials of degree > 2, I do think the evaluation type is
important. Sure you can find the extreme values for a univariate
polynomial, but most realworld problems are multivariate and this
approach may be too slow to be of use. Then again, it may not.

This is hard to judge in advance: you pay for a tighter interval. How
much is it worth?
Quote: 
If f is a polynomial, you hardly need AD to compute f'. A simple
True. But you are essentially doing it nevertheless. And for large
degrees, it may be worthwhile truncating the polynomial just like an
infinite Taylor series. It depends on the problem.
be produced as well, in a time comparable to evaluating f(x). So if
computing f(x) takes time T, you can get f(x), f'(x), and error(f(x))
in about 3T.
I'm probably not interested in these three values. What I'm interested
in is an enclosure of f(center(x)) and one of f'(x), for x an interval,
to find a sharper enclosure of f(x) than with a direct computation.

I do not see, offhand, that computing an interval for f' is less work
than computing an interval for f. It is commonly thought that f' is
simpler than f, but this is generally not true. Our impressions are
based on the common case of polynomials, but consider that d(u*v*w) =
u'*v*w+u*v'*w +u*v*w' etc. so the growth may be substantial, and in
the case of intervals, the "dependency" problem may be worse. (Perhaps
so-called logarithmic derivatives would look simpler.)
Quote: 
Thank you for the opportunity to let us see some unfinished thoughts
by someone else than ourselves.

It is great to get comments. The originally referenced draft paper
will be revised.
RJF 



Christopher Creutzig science forum Guru Wannabe
Joined: 03 May 2005
Posts: 107

Posted: Thu Jun 29, 2006 9:19 am Post subject:
Re: CAS and interval computation and pseudovalues like infinity, undefined



rjf wrote:
Quote:  I agree that you can save effort by looking at a whole expression and
compiling it into an interval evaluation without intermediate steps.
The right way to do this is to try to find one (or more) formulas that
minimize the width. Transforming expressions into "single use
expressions" when possible.

Unfortunately, the optimal form depends on the interval. See
Koshelev, “Every Superinterval of the Function Range Can Be an
Interval-Computations Enclosure,” Reliable Computing 6: 219-223, 2000.
For every input interval, there is an optimal formulation, but except
for very special formulas such as quadratic univariate polynomials,
there is no form that gives tight enclosures for all possible inputs. If
we are lucky, our function only needs a reasonably low number of formulas
for tight enclosures, and one of the niceties of intervals is that we
can use the intersection of all of them as the final result.
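The intersection trick can be sketched directly (hypothetical helpers; no outward rounding): each algebraically equal form yields a valid enclosure, so their intersection is one too, and is at least as tight as the best single form.

```python
# Evaluate several algebraically equal forms of x^2 + x + 1 over the
# same interval and intersect the resulting enclosures. Sketch only.

def sq(lo, hi):
    # range of t^2 for t in [lo,hi]
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    a, b = lo * lo, hi * hi
    return min(a, b), max(a, b)

def form_naive(lo, hi):          # x^2 + x + 1, occurrences independent
    slo, shi = sq(lo, hi)
    return slo + lo + 1.0, shi + hi + 1.0

def form_centered(lo, hi):       # (x + 1/2)^2 + 3/4, single occurrence
    slo, shi = sq(lo + 0.5, hi + 0.5)
    return slo + 0.75, shi + 0.75

def intersect(a, b):
    return max(a[0], b[0]), min(a[1], b[1])

X = (-2.0, 0.0)
print(form_naive(*X))      # (-1.0, 5.0)
print(form_centered(*X))   # (0.75, 3.0): exact on this input
print(intersect(form_naive(*X), form_centered(*X)))
```

Per the Koshelev result cited above, no single fixed form wins on every input interval, which is exactly why intersecting a few of them is attractive.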
Quote:  be produced as well, in a time comparable to evaluating f(x). So if
computing f(x) takes time T, you can get f(x), f'(x), and error(f(x))
in about 3T.
I'm probably not interested in these three values. What I'm interested
in is an enclosure of f(center(x)) and one of f'(x), for x an interval,
to find a sharper enclosure of f(x) than with a direct computation.
I do not see, offhand, that computing an interval for f' is less work
than computing an interval for f.

It is not. But what I described is one of the cheapest and most
general ways of getting a usually tighter evaluation than naïvely
putting an interval into the formula of f. A first degree Taylor form,
if you want to look at it that way.
Quote:  It is commonly thought that f' is
simpler than f, but this is generally not true. Our impressions are
based on the common case of polynomials, but consider that d(u*v*w) =
u'*v*w+u*v'*w +u*v*w' etc. so the growth may be substantial, and in

Certainly not polynomially bounded. That is why we need AD, after all.
Regards,
Christopher 







